Within Reason - #34 Anil Seth - A New Science of Consciousness
Episode Date: June 7, 2023. Anil Seth is a professor of Cognitive and Computational Neuroscience at the University of Sussex. He is the author of "Being You: A New Science of Consciousness". Purchase "Being You: A New Science of Consciousness": https://amzn.to/3NgKD53 Anil Seth's website: https://www.anilseth.com/ The Perception Census: https://perceptioncensus.dreamachine.world/
Transcript
Welcome to Within Reason. My name is Alex O'Connor. My guest today is Dr. Anil Seth, who is a professor of, now can I get this right, computational and cognitive neuroscience here at the University of Sussex.
Yeah, it's the other way around.
other way around. It's basically the same thing. Thank you so much for being here. It's a pleasure.
Also the author of Being You, a new science of consciousness. Why do we need a new science of consciousness?
I think there's really a new approach to the science of consciousness. And one of the things about
consciousness is that for a long time, it was considered relatively illegitimate and disreputable
in the mind and brain sciences. Now, that's all changed in the last 30 years or so. And there's many
different approaches to consciousness now that are really flourishing within neuroscience, within
psychology, within cognitive science. And it's a very exciting time because there's lots of new
evidence, but there's also lots of different positions, different theories. And my own view is one
particular theory. So it's not really a completely new science of consciousness. There's a new
scientific approach to this age-old question. I'm going to have to begin by asking some of the
boring questions, including what is consciousness? Not that that's a boring question,
but it must be boring for someone like you.
It's always a tricky question.
I mean, I've been asked it so many times
and I always stumble a little bit answering it
because it's one of those things
that intuitively we all know what it is.
It's what we lose when we fall into a dreamless sleep
or lose even more profoundly
if we go under general anesthesia,
which is an amazing non-experience to have general anesthesia.
And it's what comes back
when we come around again or wake up in the morning.
And when we open our eyes,
it's not just that our brains are doing some kind of complex information processing.
There's an experience happening.
There's a redness to the colors that we see.
When we feel hunger, it feels like something to be hungry.
It's not just a state my brain is in.
So consciousness intuitively is what makes us more than merely complicated biological objects, makes us subjects.
It is what it's like to be a thing.
That's the definition I use a little bit in the book. It's a definition from the philosopher Thomas Nagel. And he says it like this. He says, for a conscious organism, there is something it is like to be that organism. And you're right. I think the intuition he's driving at is that it feels like something to be me. It feels like something to be you. And in Thomas Nagel's famous paper, it feels like something to be a bat. But only a bat will ever know what that feeling is really like. But for other things,
things like this table, this cup of tea, my phone, it doesn't feel like anything to be these
things. They're objects. However complicated. So you say, I mean, I spoke to Philip Goff recently,
who would probably beg to differ. I wondered sort of along that line, actually, you said that
consciousness is something like there being more than just computational processes. Is it also
something more than the material? Because for many people, one of the most interesting things
about consciousness is that it seems to intuitively point to something non-material existing within
the universe. It's such an appealing intuition. I think it's one we all have. I'm interested actually
over history whether it was as seductive an intuition as it seems to us today. But certainly in
Western society over recent history, if we think about consciousness, as thinkers have done from
Descartes and going back and going forwards, it seems as though consciousness is not the kind of thing
that a mere material system could have or could generate.
It seems a different realm of existence altogether.
Descartes divided the universe into matter stuff,
res extensa, the stuff that tables and chairs and brains and bodies are made of,
and res cogitans, the stuff of thought and the stuff of consciousness.
I can't deny that it seems as though there's something additional to the material happening.
But I worry how reliable our intuitions are about this.
They failed us in the past in these kinds of questions.
People at one point thought that life could not be explained in terms of physics and chemistry.
It seemed as though we needed some supernatural explanation, some élan vital, some divine sparkle or spark of life.
The fact that it doesn't seem to be a property of the material doesn't mean that it isn't.
It might mean that.
And what we do know, the sort of fundamental starting point
of neuroscience is that there's this intimate connection between what happens in the brain
and what happens in conscious experience. So my overall strategy is to basically recognize that
and let's try and learn as much as we can about the nature of consciousness through the study
of the brain and the body and the world. And maybe at the end of that, there will be some
little residue of mystery left. It's possible. But I think that remains to be seen. Well, the brain is
the center of consciousness. It seems to be what generates consciousness and all consciousness is
correlated at least as far as we know with some kind of brain activity. But the strange thing is
that if consciousness is located in the brain or conscious experience, then like you say,
there are red experiences, green experiences, hunger, these kinds of things. But the mystery is
that if I were to cut open your brain, crudely speaking, I'm not going to find red in your brain.
I'm not going to actually find redness or blueness or greenness.
And so if consciousness is based in the brain or generated by the brain,
but all of the material matter in the brain,
you can search through and through and you won't find redness.
Does this not imply that redness itself is non-material
because it's not anywhere to be found in the material brain?
And yet we know that it exists because we can see it right now.
I think what it points to is that something like color has two different aspects.
It has a subjective, experiential side of it, the feeling of experiencing color, and there's an implementation side of it,
what's actually happening in the brain. It doesn't really make sense to think that you would
find red if you open the brain up and looked for it because that would require another
conscious observer to look at the color in the brain and then you just push the problem
one stage further back. So how is it that that color is then experienced as a color by
whoever's looking inside the brain? So that's the problem.
I don't think that's such a problem, but the example of color is super interesting because it does, I think it highlights that color is a kind of thing, call it a thing or call it a process or you can call it quality, that requires both a brain and a world to happen. Actually, we can dream colors. So it doesn't always require a world. But color in our normal everyday waking life is not out there in the world either.
Out there in the world, there's just electromagnetic radiation of different wavelengths.
And these wavelengths, you often call them like red, green, and blue, but they aren't actually
red, green, and blue.
They're just different wavelengths of energy.
And it's combinations of those wavelengths that our brain uses to create experiences
of color.
And so when we experience something like color, we're experiencing simultaneously less
than what's really out there because our eyes are only sensitive to these three wavelengths.
and more than what's out there because we experience a whole palette of millions of colors
rather than just three. So there's this very indirect relationship. And color, in the words of
the artist Cézanne, is the place where the brain and the universe meet. It's not in
either one by itself. So if colors are essentially, well, would you say located in the brain,
if they're not located sort of in the objects themselves.
Like you say, you've just sort of got wavelengths of light,
and it's really the brain that produces the color experience.
If that's the case, then, well, we know that there's an entire expanse of wavelengths
outside of our perceptive abilities.
So ultraviolet light, infrared light.
These are the same kind of things.
They're just electromagnetic waves, but they're not waves that our eyes can detect.
But if we take our eyes out of the picture, if it's also located in the brain,
do you think in principle there would be a way to stimulate a person's brain through artificial means
that would allow them to see something like a new color?
I think we already do that, to some extent.
I mean, there are already night vision goggles, right, that take, say, infrared and render it as something that we can see.
Now, you could take that one stage further, and instead of changing the frequencies of the energy
into the visible spectrum, so-called visible spectrum,
you could maybe plug these signals directly into the brain
and perhaps generate a new kind of experience.
This, I think, is in principle possible,
but of course our brains didn't evolve to deal with signals
other than the ones that we typically experience.
It doesn't mean that we wouldn't experience something,
and it might be very interesting to know what it is.
There's a whole history of studies
in what's called sensory substitution,
where we take information of one kind, let's say touch information, and it can be fed
into the visual cortex or into the eyes in some way. Or even echolocation. So back to Thomas
Nagel and what it's like to be a bat. You can actually get some of the way there by taking sonar
and rendering it either directly into the brain or into some other sensation, maybe the
pinpricks of tactile sensation on your skin, to give you the same information that echolocation would give. And it's fascinating to ask what people
experience when you do this. And it starts off often with them experiencing something like
the interface. So a typical example would be translating vision into touch. It's one of the
oldest interfaces like this. Initially people just feel the touch. But then the idea is some of the
evidence is that as people get used to this, they don't notice the touch so much
and start being able to directly have an impression of a visual environment,
but now it's coming through a different way.
So what do you think is sort of going on when we perceive objects, colors,
and have sensations, is my sort of perception of this cup,
the cup that I can see, caused by this object, the cup, as it actually is,
or is there something a bit more complicated going on?
I think it's a little bit more complicated.
I think we have this impression as we walk around the world
that the world is just there and it just pours itself into our minds
directly through the windows of our eyes and our ears and our other senses,
that we see things and hear things and smell things as they are.
And that's a very useful way that evolution has designed our perceptual systems.
It would be very weird if we walked around the world experiencing things
as a brain-based construction.
But that's in fact what's really going on, I think.
We don't, and we can't experience things exactly as they are.
This point goes back to Immanuel Kant and even earlier.
Now, there is an objective reality.
There are things, there are objects like your cup, like my cup here.
They exist, and they have properties like color and solidity and shape and so on.
But the way we experience them is never as they really are. It's always an interpretation, always a construction. And quite how that interpretation, that construction, is built, I mean, that's what our kind of day-to-day research is all about. But the basic idea is that it's not that the brain is just reading out sensory signals, with the self sort of sitting in here somewhere, reading out all this information and saying, oh, there's a cup and it's got some red markings on it and so on.
The brain is always making predictions about what's out there in the world
or in here in the body and using the sensory signals to update these predictions,
to keep them calibrated, tied to the world in ways that are useful for our behavior.
And this is a big flip then, because even though it seems as though we read the world from the outside in,
what I think is really happening, and not just me, is that perception is as much a creative
act of writing as it is of reading. We're actively generating our perceptions. And the sensory data
is there mainly to keep these perceptual predictions tuned to the world in useful ways. So we've got
a kind of balance going on. We've got inputs, but also our brain producing outputs. And it's
somewhere in the meeting of these two that we have conscious experience. That's right. And that's the
big open question. It's like where in this continual dance between sensory input and perceptual
prediction, where in this dance is consciousness happening and what elements of it are most important
for shaping what we experience? And the traditional, you know, if you open a neuroscience textbook,
at least the ones I was reading when I was a student, the heavy lifting is all done in this
bottom up outside in direction. And whatever's coming from the brain back out to the world,
maybe doing a little bit of tuning and finessing and modulating,
but really it's an outside in.
And I think that's entirely wrong.
I think the flip side is the way to think about it.
What's really important is this top-down, inside-out flow of signals
that carries the brain's predictions about the world.
And I think that's what we experience, the brain's predictions.
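(A minimal sketch, in Python, of the prediction-update loop described above. The single scalar signal, the noise level, and the fixed learning rate are illustrative assumptions, not anything specified in the conversation; real predictive-processing models weight each update by how reliable the prediction and the sensory signal are estimated to be.)

# Toy predictive-processing loop: what is "perceived" here is the brain's prediction,
# continually nudged by the prediction error carried by noisy sensory input.
import random

prediction = 0.0        # prior expectation about some scalar feature of the world
learning_rate = 0.3     # assumed fixed weighting on prediction error
true_value = 1.0        # the state of the world the senses are sampling

for step in range(20):
    sensory_sample = true_value + random.gauss(0, 0.2)  # noisy evidence
    prediction_error = sensory_sample - prediction      # what the sensory signal carries
    prediction += learning_rate * prediction_error      # top-down estimate gets corrected
    print(f"step {step:2d}  sample={sensory_sample:+.2f}  "
          f"error={prediction_error:+.2f}  perceived={prediction:+.2f}")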
I think outside of the setting of neuroscience or philosophy,
it can sound a bit, for want of a better word, sort of hippie-ish, to say, well, the conscious
experience we have, my experience of the cup is really sort of a projection of my own mind,
or at least in part, it's my brain sort of putting itself out into the world.
It implies, firstly, that a lot of what we're seeing is not, as you say, like it really is,
that there's some kind of hallucination involved, although not a sort of full or complete
hallucination, because there seems to really be an object that, as you say, it's interacting with. But I think people might be skeptical of this view, for example, thinking about
a newborn child, sort of opening their eyes for the first time and just being presented
with, you know, blue or being presented with a cup. And as far as we know, I don't remember,
I don't know if you do, but we'll just see a cup. And it seems that in the case of a newborn
infant, they're not going to have much in the way of sort of predictions to make about how
the world works, unless you're suggesting that these predictive qualities of the mind
are somehow sort of hardwired evolutionarily into the depths of our brain. Is that more like
what's going on? I think that's likely to be a big part of the story. It's actually very
hard to know what a baby, a newborn baby, really experiences. William James, one of the founders
of psychology famously speculated that a newborn baby would experience this kind of blooming
and buzzing confusion, perhaps without even distinguishing the different senses. Like it's pretty
obvious to us what's visual and what's auditory and what's olfactory, what's smell. But maybe we
learn to do that a little bit. I mean, I wonder if that's got something to do with the fact that
people tend to have memories up to a certain point in their childhood. It's like people might
be able to remember being five years old, possibly slightly younger. I'm not really sure. But there's
a point at which we just don't remember. And maybe that's got something to do with the fact that the
information that was coming in at the time was just a big sort of hot mess of data that we
couldn't make sense of. It might be a big hot mess. It might also be, and I think also part of
that story is that, you know, it takes the self quite a long time to get online. It's not that
the self is suddenly there with all its ability to remember and plan aspects of self that develop
over time. And memory is something that it's, again, it's not like a computer. We often get really
misled by this idea of the brain as a computer. And when we think of memory in a computer,
you store it and it's just always there and it's kind of always useful to have it there.
But memory in a biological organism and a human being is not supposed to be this indelible,
totally accurate record of everything that happened. It serves the purpose, like everything
in our biology and our psychology. It serves the purpose of behavior. And it's kind of hard
to see an adaptive reason why we should remember things that early in our lives because we didn't really have control over anything
that early in our lives. So memory is very different. It's very driven by the future as much as
it's a record of the past. Yeah, and seemingly these things are driven by survival. And this makes
people, a lot of people quite skeptical in philosophy of our ability to understand truths about
the universe because of course the very sort of mechanism we use to understand the world is biased
not towards truth but towards survival. Is that a skepticism you share, given your sort of neurological
research? It is. Well, it's sort of. I mean, I actually see that as a really essential insight
into understanding perception. I think rather than it being a problem, when it's turned back on
itself and you recognize the survival imperative of our brains, then that can actually, instead of
obscuring our understanding, it can enhance it when we're trying to understand the brain
itself. So this comes back to what you were saying about, is it a little hippie-ish to talk
about perception in this inside-out, active, creative way? Maybe, but I think it's also very well
grounded. I use the term, well, I use the term controlled hallucination quite often to talk
about perception. But the control is just as important as the hallucination. The hallucination part
emphasizes that what we experience comes from the inside rather than just being directly read
out. But the control emphasizes that it's not arbitrary. The world exists. Our minds aren't just
making stuff up. There is a cup here and it has solidity. But the way we encounter it is a kind
of active construction. But it's controlled by the world in ways that are useful for our
survival. And that can help us understand the indirect nature of how we experience the world the way we do.
So it's not that we're just sort of imagining everything.
There is a real external world.
And I mean, I don't know what you would say to skeptics on that front who say,
all I sort of have is this potentially flawed system of senses that I used to try
and investigate the world.
And I have no way to know if it's really there.
And especially when you say that so much of what we perceive is to do with our brains,
what they predict, what they expect to find.
And I mean, you say in
the book that people might object to a sort of hallucination view of perception by saying,
well, if you think it's all a hallucination, go and jump in front of a train and see what happens,
right? And of course, yeah, I mean, do that and you're going to feel a big crash. No, I should
clarify, actually. I didn't say do that. I don't want to be clipped. But of course, the feeling that
you would have being hit by a train is something that equally could be put down to something that your
brain is sort of creating or inventing. So I don't know if you have any reason to so confidently
assert that actually, no, there is a real sort of set of objects that are in part causing our
perceptions. I think that's just a metaphysical assumption that I make. I don't think you can
demonstrate that, right? I mean, it could be that it's true. You can't rule out the possibility
that we are all just brains in a vat and our brains are being stimulated in particular way
so that we have the experiences that we have. And there's a very different world out there or maybe
even nothing at all out there. It's just not a very productive way to look at things.
You know, the way I feel, I'm very pragmatic, or try to be quite pragmatic, as a scientist, and especially at this interface with philosophy. The assumption that things indeed exist, I think, is a very pragmatically useful assumption to make. It helps us, you know, it gets us quite far in understanding the world and our relation to it. If we were making the starting assumption that indeed everything is made up in our heads, that there's no such thing
as objective reality, then it becomes quite challenging to explain how and why we have compatible
perceptual experiences of things if there's nothing actually real out there. Okay, so there is an
external world and there are objects and they sort of impress themselves upon our minds. And even
though there's no real agency involved here, I open my eyes, there's a microphone in front
of me. I don't choose to see this mic. I don't think to myself, I would love for there to
be a microphone here. It'd be really convenient for this podcast, and thus there it is. It just
sort of appears in front of me. And yet you suggest that at least part of the experience I'm having
when I see this microphone is actually my brain making a prediction and mapping something onto the
world. What reasons do we have to believe that this is happening when we perceive the world
around us? All sorts of reasons. And I think this is one of the appealing things about it.
There are ancestors to this idea that you can find in philosophy, in psychology, even in machine
learning about how this problem of perception might get solved. And just by the way, the fact that
we don't experience it as a voluntary act, it's not that I decide to see something. There are actually
some interesting subtleties there, in that in hypnosis, for example, people can be suggested
to have visual experiences purely through suggestion, which means that they're sort of engaging in
volitional acts without even realizing that's what they're doing. And they can
indeed have these experiences.
But you're right, in the normal way
of encountering the world, it just seems to happen to us.
We don't actively bring it about like we do
when we close our eyes and imagine seeing something
in our mind's eye.
So why should perception work this way?
The first reason to think about things in this sense
is a very simple one: let's have a look at what the problem involves
and figure out how we might solve it in principle.
Here's the thing: the world out there, the real world, is ambiguous, it's unlabeled. If we try and take a physics-eye perspective on what's actually happening in this space here, where there are two bodies, two brains, and some air and some microphones, there's all kinds of things. There's like pressure waves in the air, there's electromagnetic radiation, and underneath that there's this kind of quantum super-who-knows-what going on.
And the brain, which is also made of these things, by the way.
You know, it's also made of the same kind of fundamental particles that everything else is made of.
It's faced with this problem, figuring out what's out there in the world.
And it's a situation that involves inherently a lot of uncertainty.
And so this already suggests what the brain has to do is make a best guess.
It can't unambiguously determine what's in the world.
It has to make a best guess about it on the basis
of prior expectations or beliefs, some of which might be learned, some of which might be
evolutionarily pre-trained, if you like, and the sensory data that it gets. So there's this
concept of the Bayesian brain, an old concept of Bayesian reasoning, which is just how a system
should deal with uncertainty in a sort of optimal way. And an optimal way to do this is to make
predictions and update these predictions with the data and try and minimize the difference
between what the system is predicting and what's actually there. That's a way of dealing with
uncertainty. Yeah, you refer to the Bayesian brain throughout the book. And of course,
Bayesian reasoning, like you say, is inference to the best explanation, and you can characterize
it as a form of abductive reasoning. You can compare it to deductive reasoning, which is logically
tautological. It might be, you know, famously: all men are mortal. Socrates is a man. Therefore,
Socrates is mortal. This logically follows. Inductive reasoning being like, I pick out a green
ball and a green ball and a green ball and a green ball, so I'm going to reason that
the next ball is going to be green, which doesn't necessarily follow, but that's inductive reasoning.
Whereas abductive reasoning and sort of Bayesian reasoning here is given some set of data,
what's the best explanation that we have? And you think that the
brain is working along these lines. It's sort of taking in this experience, this data and saying,
well, what's the best explanation that I have of the various things that I'm experiencing right now
and produces something like the cup in front of me? Yeah, I think in many cases, yes. I'm not saying
in all cases, but I think in most cases, I think that's the way it has to work. Another
good example of Bayesian reasoning, just to put it in context, is let's say something like
a medical diagnosis. So, you know, you take a test for some rare disease
and it comes through positive.
So what's the chance that you actually have the disease?
Well, it depends on how accurate that test is
and on the prevalence of that disease.
So even a positive test,
you might still conclude that you don't have the disease
if it's very rare and the test is unreliable.
So it's a best guess.
And even if the test is quite reliable,
if the disease is rare enough,
it might still not be enough reason,
on a sort of abductive understanding,
to reason that you actually have the disease,
which is quite an interesting implication of abductive reasoning.
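(A worked version of the rare-disease example in Python, with made-up numbers for prevalence and test accuracy, since none are given in the conversation. The point it illustrates: even with a fairly accurate test, a positive result can leave the probability of actually having a rare disease below two percent.)

# Bayes' rule for the rare-disease example: P(disease | positive test).
prevalence = 0.001       # assumed: 1 in 1,000 people have the disease
sensitivity = 0.95       # assumed: P(positive | disease)
false_positive = 0.05    # assumed: P(positive | no disease)

p_positive = sensitivity * prevalence + false_positive * (1 - prevalence)
posterior = sensitivity * prevalence / p_positive

print(f"P(disease | positive) = {posterior:.3f}")  # roughly 0.019, i.e. under 2%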
I'm wondering what might be some sort of empirical data
that we might have for this view
that our predictions about what we expect to find in the world
and sort of pre-existing networks in the brain
determine and affect what we perceive.
Because most people think, I would imagine,
that when I look at something, I just sort of see it.
And I'm not doing
anything to produce this image, rather the image is sort of being produced in my mind by the
object. What evidence do we have that that's not what's going on? Heaps of evidence, which is the other
main strand of evidence that we were talking about. On the one hand, you've got the in principle
stuff, right, that it sort of has to be or ought to be this abduction, this inference to the best
explanation. But then if indeed we do open up the brain and look inside, yeah, we don't find
colors, but what we do find very suggestively is that, for instance, in vision, there are
far more connections going from the inside of the brain back out towards the eyes than there
are coming in the other direction. Is that right? That's right. And that was always a puzzle for
neuroanatomists and neuroscientists to explain. Why should it be like this? And it begins to make
sense if you start to think, well, actually, it's this process of generating predictions. That's the
really important thing. And the sensory information is just carrying the error, the prediction
error between what the brain expects and what it gets. So anatomically, we've got some suggestive
evidence. And then there's a ton of fascinating experimental evidence. And this is the kind of thing
that I'm interested in doing in my group at Sussex, at least in part, but plenty of other groups
are doing it too, where we can show that the brain's expectations really do shape what we
experience. We did an experiment a few years ago now, just priming people to expect to see
either, let's say, a house or a face. And the images would be presented in this way so that
they were very ambiguous. And what we found was that if the sensory data matched people's
expectations, then people consciously saw the image more accurately and more quickly than when
the image, the sensory data, didn't match their expectations. So here, this
close match between the brain's expectations and the data brought something more rapidly
into our conscious scene, into our conscious experience. And that's, of course, what you'd expect
if expectations are deeply involved. Perhaps. I mean, I'm imagining, sort of, if you expect to see a
face, you might see the face more quickly. But in the same way that if you told me to
sort of react to a sound you were about to play, and I was sort of ready
for it. I think my reaction time would probably be quicker. It would seem as though I'd noticed the
sound quicker because my reflex would be faster than if it just happened randomly and I reacted
naturally. But I don't think that would suggest that my expectations of what the sound's going
to be would in any way affect my actual sense experience of what the sound is like. You know,
the bell would sound the same to me whether I was expecting to hear it or not. The fact that I see it
more quickly or I'm listening and so hear it in more detail because I'm focusing, you know, something
like that. I don't know if that tells me that my expectations have anything to do with the
experience. So you should be a psychologist because you're raising all the kinds of objections that
people rightly raised to these kinds of data, which we had to control for. So yes, I mean,
you have to rule out that these effects are simply priming people to respond to something
or having them pay attention to something more than they would otherwise. How do you do that?
I mean, so you can, you have to, I don't think any one experiment can
do it all. So in our experiment, we did a bunch of controls, like trying to change the mapping,
the hand that you would use to try and control for this idea of response priming so that you
weren't just cued more quickly. And we also had people pay attention to different things so that
we could, to some extent, control for attention. You can, you know, you can make a good
college try at doing it. But I think the real power is that this kind of view does not rely on
any single experiment. There are other experiments
that also converge on this view.
Some of my favorites come from a friend and colleague of mine in Glasgow,
Lars Muckli, who uses brain imaging to look actually inside the brain
in paradigms, experiments like this.
And what he found, and I think this is really strongly indicative
of the predictive brain idea,
is he had people look at different images while they were in the scanner
and there were images of different kinds of scenes.
They would be like an inside scene
and a scene in a city
and a scene in a countryside,
things like this.
But in each image,
a quarter of it was missing.
It was just blank.
And, of course,
people could still recognize
what's going on
because three quarters is still there.
And then what he did was he asked
whether he could decode
from the brain imaging data
what the scene was,
but only using imaging data
from part of the visual cortex, the back of the brain here,
that was not being stimulated by the image,
that was like responding to the blank part of the image.
And he found that he could.
And what's more, he found that he could only do this
by using the part of the visual cortex
that was not receiving input,
but that was receiving top-down, inside-out signals
from deeper in the brain.
So the brain was passing back to
visual cortex enough information to decode what the person was seeing. So there's definitely
information about the content of our experience going right back out to visual cortex. And you can
see this now in the flow of signals in the brain. There's also just some quite straightforward
intuitive, almost optical illusions that you talk about in the book that suggest that our
perceptions of things can be affected by more than just what we're seeing, but also what our brain
expects to find, right? There's all sorts. So your optical visual illusions, they're perhaps the
easiest way to show that what we experience is really not a direct reflection of what's there.
And these can range from very simple things like our brain encodes expectations that things
in shadow appear darker than they are. And so we can show that that really changes how
we experience shading and color. And we take the context away and suddenly these, the colors all look
different. Yeah, that wonderful famous image which
for viewers I can
put on screen, for listeners we'll have to
do with the explanation. It's sort of a
generated image of a
sort of a cylinder
producing a shadow on a checkerboard
and the light
square in the shadow
and the dark
square outside of the shadow look like
different colours. But in fact
they are in the image the
exact same shade but because our brain
sees that the one is under a
shadow, or at least it appears to be in the image, it's the illusion, our brain just makes it
look darker. And it's, you describe this kind of illusion as, what's the word you use, like
cognitively impenetrable or something? Yeah, that's right. Even when you know what's going on,
you can't help it. You can't unsee it. You can't unsee it. So it's hardwired
into our perceptual system. And it's not, you know, it's not a failure of our perceptual system.
You know, our eyes and brains were not intended to be light meters. They're intended to figure out
most likely what's actually going on.
And so given that particular sensory information,
what's, you know, we see the situation kind of accurately.
Yes, there's a cylinder and a checkerboard.
And that's the situation.
So it kind of doesn't matter given our overall perceptual take
that there's this funny effect with colours going on.
Yeah, I mean, in a sense it sort of is darker
because within the fiction of the image,
well, it is a darker square, you know.
But it's, I mean...
But it isn't.
It's not actually a different colour, but this is part of the trouble.
I mean, you say this isn't a failure, which is interesting because that implies that
the purpose of our sort of visual systems is not to tell us what's actually there.
Correct.
But to, I guess, sort of help us navigate the world.
Again, interesting, but potentially sort of leading to a fear
that sort of everything that we think we know is true is actually just survivability.
I mean, there are arguments in philosophy such as Alvin Plantinga's evolutionary argument against naturalism.
I mean, the way that he frames it is that it's impossible to be a naturalist or a materialist
and believe in evolution by natural selection because he says that in order to assert the truth
of natural selection, in order to assert that this is the way that things occurred,
you look at evidence, you observe evidence, and you assert your conclusion.
But if the conclusion is true, then the very senses you're using to observe that data
did not evolve for truth, but for survivability.
So by asserting the conclusion, you're undermining your ability to get to the truth
of the theory that you've asserted in the first place.
Right, that's great.
So, yeah, he uses that as an argument for God's existence because he says that, you know,
this is true on materialism, but if we're created by a mind intentionally,
the problem doesn't arise, but I think that this kind of thought and this kind of, this kind of
experiment, this illusion of the, of the cylinder on the checkerboard, and anybody listening who
can't see it on screen, you know, just Google it. It's a wonderful image. It's called Adelson's
checkerboard. Maybe it tells us that Plantinga is onto something there. I wouldn't go that far.
I mean, I think that science itself can also be best thought of as an abductive process,
that it's our best explanation of what's going on.
So the postulation that we evolved through natural selection
is not something that can be logically demonstrated,
but it is the best explanation we have
for the things that we observe,
even recognizing that the things that we observe
are dependent on things that evolved.
I can see the circularity in there,
but I don't think it's a particularly vicious circularity.
Yeah, I guess not all
circles have to be vicious and we sort of have to start somewhere. I think there's some
bootstrapping in any kind of, any kind of philosophy. But so our experience of the world is
something like a meeting of the projections of our mind and the input data of the objects of
the external world that we see around us. That's how I see it. And I think the overlooked
implication of that is not so much the worries about whether anything is real or not.
And I think that, just to return, I think for me, that's a sort of pragmatic assumption to make.
And it helps us bootstrap ourselves up a little bit, get off the ground in this explanatory cycle.
But the other perhaps more interesting implication that has relevance for everyday lives, too, is that we will see the same world slightly differently, even if we don't recognize that we do.
And this, too, is a bit counterintuitive.
if we start from the kind of naive realism view that we see things, hear things as they are,
then of course, we would just naturally expect to assume without thinking that somebody else
sharing the same environment, we're in the same room here, is going to be experiencing things
in exactly the same way because we all experience things as they are.
And the character of our experience is like that too.
It really seems as though I'm seeing the world as it is, in a way that's
independent of my own brain and mind. But if that's not the case, as I think it's not the case,
then just as we all have different bodies in skin color and height and so on, we all have
different brains. So we're going to exhibit a kind of inner diversity just as much as we exhibit
this external diversity. We're going to have a perceptual diversity. And that is, well, that's
something we're currently studying because not a lot's known about it. And I would say when the diversity
gets sufficiently large, we slap a label on it and say, you know, this is autism or
this is schizophrenia. But there's this whole middle range where I think most people just assume
that we have the same subjective encounter with things. Sometimes this is thrown into doubt;
there was that social media meme of the dress, which half the world saw one way and half the
world saw the other way. Yes. And you saw black and blue. I saw black and blue. I also saw and see
black and blue, and struggled to see white and gold. Again, a simple Google search for anybody
listening, but I'll put it up on screen. You'll probably remember this if you weren't living
under a rock at the time. Fascinating. You talk in the book about how people started doing
experiments, because I mean, presumably it has something to do with the fact that in this image,
you don't know what color the lights are. Right. So, you know, if you've ever taken a camera,
most cameras will do this automatically, like the camera on your phone, but if you take a
professional camera and you don't set the white balance, then, you know, the cameras that we have
on us right now, if I've set them correctly, make us look sort of, you know, roughly
the color that we actually are in the room. But if we were to move into a room lit with yellow
lights, we'd suddenly look really orange on the camera, even though to our eyes, we look the same
because, you know, the camera doesn't have a brain that can make that projection, right? The camera is
seeing just what it's presented with. It just sort of becomes orange because the lights change.
But if we go into a separate room, to our eyes everything still appears the same color,
which I guess is further evidence that what we're seeing is in part what our brain is projecting.
But in the case of the dress, it's like, depending on how the white balance
of the camera was set and depending on what color the lights were, it would sort of change
how it appears on camera.
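(A rough sketch of the kind of automatic white balancing being described, using the simple gray-world heuristic with made-up pixel values; this is not the algorithm any particular camera or phone uses. The idea is just that the average colour of a scene is assumed to be neutral, and each channel is rescaled accordingly.)

# Gray-world white balance: rescale R, G, B so the scene's average colour is neutral.
def gray_world_balance(pixels):
    """pixels: list of (r, g, b) tuples; returns white-balanced pixels."""
    n = len(pixels)
    averages = [sum(p[c] for p in pixels) / n for c in range(3)]
    gray = sum(averages) / 3                      # target neutral level
    gains = [gray / a for a in averages]          # per-channel correction factors
    return [tuple(min(255, p[c] * gains[c]) for c in range(3)) for p in pixels]

# A scene lit by an orange-ish indoor light: red channel inflated, blue depressed.
scene = [(200, 150, 100), (180, 140, 90), (220, 160, 110)]
print(gray_world_balance(scene))  # channels pulled back towards a neutral balance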
So you talk about people like looking at the image and then running outside to change the lights
to see if it will sort of change their perception.
I don't know if there were many cases of people who were able to sort of flip
between the two.
I think there are a few.
I've never been able to do it.
I mean, I can see it differently
if I change the background enough
because, yeah, you're absolutely right.
It seems to be,
so we have this basically
biological, automatic, white balancing
happening all the time.
So if you take this,
you know, I've got a piece of white paper
in front of me here,
it looks white.
If I take it outside,
it still looks white.
But the light coming into my eyes
will have changed massively
because indoors,
it's the relatively orangey
ambient light of the
artificial lights we have, and outdoors, it's going to be relatively bluish, it's a sunny day.
And our brains automatically compensate for that. And so we see white. And that's a useful thing for
us to do. You know, you could make the argument that the camera, of course, it doesn't subjectively
see anything, but it's more accurate in registering what's there. But that's not very useful.
Imagine if our perceptual experiences just changed all the time, depending on whether the sun
went behind a cloud or whether we walked from one room to another, that would be a very,
very maladaptive way of encountering the world. So all that stuff is hidden from us. And the example
of the dress showed that this particular process of biological white balancing actually varies
a little bit from person to person. And it just, by happenstance, serendipity found this sweet
spot where some people's white balancing was such that they saw it one way and others such
that they saw it another way. And the kind of vociferousness of the arguments that it sparked,
about, no, it's really, it's got to be blue and black, and I don't know what you're talking about.
Yeah, that for me was the interesting part of it, because it really surfaced that we do bring to
bear this kind of naive realism about our experiences, so that it's very hard for us to even
concede or conceptualize that somebody else might experience the same thing, the same reality, seen
differently. And of course it's not just the dress. The dress came and the dress went. But the deeper
interest for me is that even when the differences are not so large that they surface into social media
memes and into language, they're still there. I mean, you and I might be having a slightly different
experience of the sound of the background noise here, or the, um, the color of this red on the mug.
The background noise that hopefully my audio guy has expertly taken away.
Maybe we can turn it back on again for this little bit.
But we would never know, right?
Because it seems as though we see it as it is,
and we're going to use the same words.
So this project, which we're doing,
is trying to actually measure this relatively hidden landscape of inner diversity.
So by the way, if people want to take part in it,
it's still something we're collecting data on.
It's a whole series of hopefully fun and interesting
and short, engaging, little interactive illusions, the sort of thing we've been talking about,
but the sort of thing that can tell us about these individual differences.
Where can people find that?
They can find it. The easiest way to find it is on my website.
The project is called The Perception Census, and my website is anilseth.com.
And so that will allow people to sort of answer a bunch of questions about various things that
they're shown, and it will give you some insight as to the different ways that people are perceiving,
what appear to be the same thing.
That's right.
So we look at vision.
We also look at other things like time perception
and sound perception, music perception, rhythm perception.
So I think this diversity, we often focus on vision in neuroscience,
but of course our encounters with the world are much richer
and probably vary on all these dimensions.
So yeah, there's all sorts of little interesting, interactive things to do.
And people taking part will learn about perception,
learn about their own way of perceiving
and also have the warm glow
of contributing to this
for this scientific enterprise.
Yeah, well, the link to that will be in the show notes
or in the YouTube description.
And you're quite right.
I mean, we have a sort of arrogance
that might be caused
by having a mistaken view of what causes perception,
right?
Because you're like, of course the dress is blue.
I can see it.
It's like, I wouldn't see it as blue
if it wasn't actually blue,
but something like this phenomenon teaches us
that it's not quite as simple as that.
Audio illusions are interesting too.
I mean, it seems to me that those are easier
to potentially switch between.
Have you heard the brainstorm green needle one?
That's one of my favorites because
there's also the Yanny and Laurel one.
That one people tend to say, you know,
I sort of hear one or the other,
but the reason I love the brainstorm green needle thing
is because you can just hear one or the other
depending on what you're expecting.
I'll play it for the people listening as well.
So it's sort of this,
it's from some kind of toy or something.
Yeah.
I imagine it's actually saying brainstorm,
but I'll play it.
I'll splice it in here.
So if you listen to this audio,
I'll play it sort of three times in a row,
and you listen for the word brainstorm.
I'll play the sound now.
You listen for the word brainstorm.
You hear brainstorm.
But if you listen to it again, this time listening for the words green needle, I'll play it again.
Then you hear the other one, and you can sort of rewind that. I'll just play it one more time,
listen for whichever one you want, and you'll hear it. And it's just whatever you're expecting
to hear, you hear. What's remarkable about that, I think, is it's not just like hearing
brainstorm and then hearing something like shame form, you know, green needle.
It's like, you write the words down. They look entirely different. Like what, what phonetically or
linguistically do these things have in common? Green needle and brainstorm. I spent ages
sat there trying to figure out like which syllable matches onto the other one, you know,
how it is that, how it is that you can hear one of the other. But you're quite right. It's like
completely and utterly opposite. And I think that really does underline the
power of our expectations to constitute what we experience. It's not just some sort of like
mild fussing around on the top. It's really foundational because we can experience these two
very different things purely by expecting to hear something. Yeah, I mean, there can't really
be an opposite of a word phonetically speaking. But if there were such thing as the opposite
of brainstorm, it would probably be something like green needle. Like you say, what we expect
to hear changes what we actually, in fact, hear.
And when we listen to something, we think, no, that's what it is because I can hear it.
My perception is being caused by the object.
That must be what it is.
But this demonstrates that our perceptions or our expectations shape how we view the world.
And to me, I think this must have huge implications for worldview and for philosophy.
If you expect to find in the world evidence everywhere abounding that God exists, then you'll find it.
Yeah.
If you expect the world to be a pessimistic place full of suffering and misery, then that's what you find.
Similarly, I suppose if you have an optimistic world view and you expect the world to
actually, you know, feed that back to you, then that's what you receive. So I mean, surely this
can explain a lot about people's political and religious dogmatism that you meet somebody
else and you just don't understand how they see the world that way. How can you possibly think
this is how the world works? It's because that's what we've been taught to expect, right?
I think that's exactly right.
And you mentioned earlier that we might bring to bear some kind of unrecognized arrogance in our ways of seeing
because we don't recognize that they are dependent on the particularities of our own minds, brains and bodies.
And so part of the value, I think, of this work, part of the implications politically, philosophically, sociologically,
is in deflating some of that arrogance and cultivating in its stead a kind of humility
about our own individual perceptual takes on the world. And it's a humility that
doesn't dive all the way down into total relativism and idealism. We've talked about this
already. There is a world and evolution has made damn sure that we see it in ways that for most
of us most of the time are useful. They're not going to be fully accurate. They're not going to be
completely identical. And cultivating that humility at the level of something that seems so
natural as perception, my hope is, and it's a bit of a hope, that that can provide a bit of
a platform for cultivating a parallel humility when it comes to our beliefs about things.
If we recognize that we can literally see the same thing differently, then maybe that
gives us a bit of pause for thought about the things we believe, and how to at
least begin to understand that other people can believe something that seems so contrary to our
own ways of thinking. So we have, at least we know, we're familiar with the concepts of things
like media echo chambers, social media echo chambers, where we have this kind of confirmation
bias that will seek out things in favor of what we expect. It's almost impossible to step
outside of that bias. We have to make a big effort to do so, to watch the news channels that we don't
normally want to watch and so on. Another way to do it is, I think, to just to recognize that
there are also perceptual echo chambers, that at the level of how we move our eyes around the
world and how we experience our immediate environments, that's also something that is not
revealing things as they are. I think, yeah, you're right. These things sort of line up quite
nicely. It's not sort of a direct leap from perception to belief, but I think it's the same
kind of process. A lot of our beliefs arguably are also formed in this kind of abductive way where there's some evidence out there.
We all read some kind of news.
We all get some kind of information about the world.
But even if we see the same information, our prior expectations about how the world is structured, how it operates, are going to change our conclusions from that data quite radically.
And we'll come to different beliefs, different, you know, Bayesian posteriors, as they say
more technically, even for the same data.
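(A small illustration in Python of that last point: two observers update on identical data but start from different priors, and end up with noticeably different posteriors. The beta-binomial setup and the numbers are purely illustrative assumptions, not anything from the conversation.)

# Same evidence, different priors -> different Bayesian posteriors.
def posterior_mean(prior_a, prior_b, successes, failures):
    """Mean of a Beta(prior_a, prior_b) prior updated with binomial data."""
    return (prior_a + successes) / (prior_a + prior_b + successes + failures)

data = (7, 3)  # both observers see 7 "successes" and 3 "failures"

optimist = posterior_mean(8, 2, *data)  # prior already leaning towards a high rate
skeptic = posterior_mean(2, 8, *data)   # prior leaning the other way

print(f"optimist's posterior mean: {optimist:.2f}")  # 0.75
print(f"skeptic's posterior mean:  {skeptic:.2f}")   # 0.45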
Yeah, of course, in more fundamental aspects of perception,
even knowing that our expectations change how we view things,
doesn't mean we'll be able to see things differently.
I mean, for example, to bring it back to colour,
I'm looking at an orange mug.
The mug, apart from the handle, is one solid colour all the way around.
But of course, it's not actually,
because it's a curved surface and the light is hitting it
at a particular angle and parts of it are more shaded than others,
which means that I'm actually seeing this, including the reflections, I suppose,
just this unimaginable sort of vast expanse of different wavelengths
that are all coming off of this mug,
but my brain just perceives them as one colour.
Because it takes in the context around it,
and even though I know that's what's happening,
I know that this side of the mug is a different colour to this side of the mug
because there are different wavelengths bouncing off it,
my brain sees it as the same and I sort of can't shake that off. You know, one of the
unexpected things that thinking about perception this way has brought for me is a newfound
appreciation of art and the skill of artists. I was just thinking about that. If you think about
what, you know, not just sort of more recent art, but going right back, I've just finished
reading a biography of Leonardo da Vinci and, you know, you can see in what he was doing. You can
see that he's diving underneath sort of our naive encounters with the world, like that this mug
is solid orange. Because if we really reflect on our visual experience, indeed, we see it both
ways, right? You can see it as one single orange mug. But when you start to pay attention,
of course we can still see the fine variations in things. And some of them maybe we can't
see, but then artists like Leonardo and onwards know what's going on and are able to paint
not what's there, but paint the raw materials for our brains to make the same kinds of inferences
and so that it generates the subjective experience of what seeing a real orange mug would look like.
There's this concept in art history called the Beholder's Share, from the art historian Ernst Gombrich.
And I think it's extraordinarily parallel to these ideas emerging in neuroscience.
And for Gombrich, when we encounter an artwork, the beholder's share is the part of the aesthetic
experience that's contributed by the observer, the beholder, that comes from within and is not
present in the artwork itself.
And he talks about all sorts of traditions, impressionism in particular, was really scorned
when it first emerged on the scene as being just like, you know, low resolution scrapings and marks
on a canvas.
But the genius of the impressionists
like Pissarro and Monet
and Manet was
to see through
their own visual cortex
to paint the light,
not paint the brain's inferences
about the light. I think that's what gives it
its power. Yeah, I guess
artists are people who can break
out of that illusion. I mean, I think
I would, it's the reason I can never be an artist.
I mean, even just sort of the perception of
shapes, you know, if I hold up
this book available online, Amazon.com, and in the link in the description, I see it as
rectangular. But the angle I'm holding it at means that what I'm really sort of seeing are two
lines going upwards and inwards to a sort of vanishing point that's never, that's never reached.
This is sort of a strange, four-sided shape that isn't a rectangle. But my brain sees it as a
rectangle. I can't. I can't. I think you see it both ways at once. I think it's a mistake to
say like we really see it one way or the other way. I mean, that's the interesting thing
about the phenomenology of vision. Well, an artist can do both, exactly, and we do to some extent too.
I mean, we see it as an object, you know, as a rectangular object, but we also still,
you know, our visual experience is still, you know, you can still extract that from your visual
experience, that there are lines tapering to a point. But I think you have to focus,
you've got to really pay attention to that. I've never really been
able to get a grip on it. I mean, even the table in front of me, you know, I'm sort of seeing
there'll be a long end here and a short end there, but my brain just perceives them as the same.
I think that a good artist is someone who's trained in being able to understand that that's
not what the eye actually sees. And it's the same thing with color, you know. I remember once
learning how to paint like a sunset or something, it was this, this orange, blue, you know, gradient.
And they said, okay, now you need to pull out some green to put in there.
And I thought, what on earth?
Why do you need green?
And it's because something about that color sort of makes it seem how it actually should seem,
even though that's not really the color you're trying to put into the image.
It just sort of makes it work.
And that's what makes art so fantastic.
And so, and like you say, so impressive.
So we can add the whole history of art now to our evidence base for the predictive brain. It's looking pretty good, right? So, I mean, does this constitute for you sort of a theory of consciousness, or is this more like a theory of perception?
It starts as a theory of perception. I'm a bit skeptical of sort of self-proclaimed theories of consciousness, which gets us
right back to where we started, that it seems as though consciousness is this potentially
immaterial thing. And I respect that intuition; I don't necessarily agree with it. But rather than trying to face that problem head-on, this so-called hard problem of consciousness, like how is it that physical stuff is identical to or generates any kind of experience, I think the way forward, or at least a productive avenue to follow, is to start by saying that conscious experiences exist, because there are some philosophers who claim we're mistaken about that too, and that there's really nothing that special about what we're talking about when we talk about consciousness.
But I think, you know, conscious experiences exist and they have properties, and we've been
talking about some of them, how visual experiences appear to us.
And if we can start to account for some of these properties in terms of brain mechanisms, and we've been talking about exactly how to do that, you know, what gives something a feeling of being an object, what gives it a feeling of visual presence in our world, then there are many things we can do with this. We're making inroads.
The temptation is to say that that will never be enough, and we'll just be left with a theory of some aspects of how we experience the world, but that we will fall short of a full understanding of consciousness itself. I think, at least for me, the only intellectually honest approach is a wait-and-see approach,
and to identify as many different aspects of
consciousness as we can and try and account for them in the same way that we've been doing.
Can we use the resources of neuroscience, in particular this idea of the brain as a prediction machine, to explain not just visual experience, not just auditory experience, but also experiences of self? And that's why the book is called Being You. That's where I've been going over the years: kind of starting with this view of how the brain creates the perception of the world and then turning the lens inwards to think, okay, we have this other naive but very powerful intuition, which is that the self is the thing that does the perceiving, perched inside my skull somewhere, and decides what to do and maybe exerts free will and then does something.
But really
it's hard to make sense of that
because what is this self? Is it some residue of some soul, some immaterial soul? And there's a whole tradition in philosophy, again going back to David Hume, and also in Buddhism and Eastern traditions, that talks of the self very differently: as a process, something that's impermanent, something that's always changing, and something that I think, critically, is productively thought of as a form of perception too. So the self in this view is not what does
the perceiving. The self is itself a kind of perception, a collection of perceptions. And then you
pull on that thread some more, and it becomes, for me, a move towards a theory of consciousness when we ask the most fundamental question: why is the brain doing any of this at all? We've given one answer, which is that it's the only way of making sense of ambiguous data from the world. But I actually think there's a deeper reason, and the deeper reason is that the fundamental purpose of having a brain, for any organism, is to keep the body alive. Everything else is secondary. Even movement is probably secondary
to this. And keeping the body alive means regulating processes in the body like heart rate,
blood pressure, whatever, so that they stay within very tight bounds that are compatible with staying alive. And as any control engineer will tell you, the best way to do this in a world where things change and there's uncertainty is through prediction. If you can predict what's likely to happen, and what that's likely to do to things like your blood pressure and so on, then you can keep things regulated very tightly, not by waiting for them to change and then trying to bring them back, but with anticipatory control. So the whole reason we have predictive brains
in the first place for me is this deeply embedded evolutionary imperative to keep the body alive.
And everything else piggybacks on that.
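As a purely illustrative aside (not something discussed in the conversation): the control-engineering point can be sketched in a few lines of Python. The regulated variable, the drift, and all the numbers below are made up; the only point is that a reactive controller corrects errors after they appear, while an anticipatory one also cancels the disturbance it predicts.

```python
# Toy comparison of reactive vs anticipatory (predictive) regulation.
# Everything here is invented for illustration; it is not a model from the episode.

def simulate(controller, steps=100):
    """Hold a regulated variable near a setpoint while a slow drift pushes it away."""
    x, setpoint = 1.0, 1.0
    disturbance = 0.0
    errors = []
    for t in range(steps):
        disturbance += 0.05                 # the world changing in a predictable way
        u = controller(x, setpoint, t)      # controller picks a corrective action
        x = x + u - 0.1 * disturbance       # simple dynamics: action minus disturbance effect
        errors.append(abs(x - setpoint))
    return sum(errors) / steps

def reactive(x, setpoint, t):
    # Only responds to an error once it has already appeared.
    return 0.5 * (setpoint - x)

def anticipatory(x, setpoint, t):
    # Corrects the current error AND pre-empts the predicted effect of the drift.
    predicted_effect = 0.1 * 0.05 * (t + 1)
    return 0.5 * (setpoint - x) + predicted_effect

print("mean error, reactive:    ", round(simulate(reactive), 4))
print("mean error, anticipatory:", round(simulate(anticipatory), 4))
```

Run it and the anticipatory controller keeps the error essentially at zero, while the reactive one is always lagging behind the drift, which is the sense in which prediction makes tight regulation possible.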
And our experiences of the world then become grounded in our biological embodiment.
And so it becomes a theory of consciousness in the sense that it's making a claim about this very tight relationship between our nature as living creatures and our conscious experiences.
And it also tries to get at why we think there's this big problem about consciousness.
If we can explain, for instance, why we feel we have free will,
why we feel we have this immaterial mind,
using the same resources with which we can understand how and why the mug seems to be red and white,
then I think we're getting somewhere.
We're making progress.
What do you think, I mean, where do you think the self is?
I mean, is the self in the brain?
Is it the brain?
Is it just the collection of perceptions?
I mean, saying that the self isn't the thing that does the perceiving but just is the collection of perceptions strikes me as a bit like saying, you know, when we're trying to talk about color and we say something is red, that there's no thing that is red, there's just the redness. But that doesn't make much sense. There sort of needs to be something for the color to attach to. It's a secondary property. It doesn't just exist floating out there. It needs something to attach to.
I mean, it can do, if we're imagining or dreaming it or something.
This is true, although I wonder, again, what the difference is between imagining red and seeing red, and whether you're actually having the same experience in the brain.
I mean, it's not literally the same experience, right? But dreaming is probably closer. For most people, imagining doesn't have the same vividness, but when we dream, to the extent that we can remember dreams, the colors seem vivid in the same way they seem in normal life.
But anyway, the point stands, right?
Yeah, yeah. And it seems like, I mean, I sort of feel like my self is in the brain,
but there's this, I suppose, quite bias-inducing sense in which my eyes are right next to the
brain. And my eyes are sort of my primary, I would say, way that I observe and navigate the
world, or the most useful, most striking maybe. But, you know, if my eyes were located on my chest,
I wonder if I'd sort of feel my brain behind my eyes
or whether I'd still feel it up in my head, you know what I mean?
And some people talk about sort of, you know, thinking with the gut
or how the gut can affect the way that we think.
And people have said for a very long time
before we knew much about the science of it, you know, go with your gut.
There was sort of this intuitive sense in which we could feel
that there was some thinking being done by other parts of the body.
I mean, David Hume goes as far as to say that if I were to hurt your foot somehow, if I were to stub your toe, the only reason you know that the pain you feel is coming from your toe is because you have experience of a similar feeling every time you've
done something to your toe. But if you had no prior experience, if I were to stub your toe and you
had your eyes shut, you wouldn't know where the pain was coming from. To me, that seems like,
I mean, it seems to most readers very radical and perhaps quite unbelievable. You can kind of see
why he might think that. But there's this weird sense in which different kinds of experiences
seem to be located at different parts of the body, of course, all connected to the brain.
But if the brain is not the self, but rather the perceptions themselves, put together, produce the self, then where is this self? What is it?
I mean, I think there's two things. There's the biases we have. And you're right that we have
this kind of perspectival bias that our eyes are right up here near our brains. We could actually
do the experiment and just, like, give you some goggles and put a camera on your chest and have you walk around for a few weeks and see what happens. We also have this bias that we often associate the self with thought. And most of us experience thought, in as much as it's a linguistic, verbal train of thought, as occurring somewhere up here too. These two biases, I think they
tend to localize our experience of self up here. But it hasn't always been that way. And certainly
the brain was not always considered the important place for self and consciousness. In ancient Greece, it was often thought the heart was where the soul resided. And in ancient Egypt, when they took great care to mummify important people upon death, they usually just drained the brain away and chucked it. Don't worry about that. Because from that historical perspective, the brain is very inscrutable. It doesn't do much. It's just sort of, you know, tofu-textured, grayish, pinkish stuff, much less exciting to look at than a heart or a
stomach. When did we start to realize that the brain really is the center of almost everything
we do?
I think it varied in different cultures, but in my sketchy understanding of the history, a big landmark was the Roman physician Galen, who did the first beautifully illustrated dissections of the human body, kind of revealing not what was supposed to be there on the basis of religious or authoritative texts, but what he saw, to the extent, of course, that he saw what was there. But he was able to get underneath his expectations about human anatomy a bit more than other people at the time, and gave us a more accurate picture of the centrality of the brain to controlling the organism, to perception and so on. I think that marked a
big transition. Leonardo as well actually made enormous contributions. He just didn't write many
of them down, didn't publish many of them. So I think he could have been also this major figure
in neuroscience had he published the work he'd done in neuroanatomy.
Is that so? So how do we know that he did all this work if he didn't write it down?
I think from sketchbooks and notebooks. And I must admit, I'm basing all this on a recent biography of Leonardo that I read. But it was actually fascinating, because he's such a vibrant archetype of Renaissance culture, and we know that he did so many things, but reading this biography of him, it's just astonishing how productive the guy was. Absolutely stunning.
Yeah, I recently, I was in Paris last week and I saw the Mona Lisa.
Although I can't say I was particularly impressed by it
because I was sort of squinting from a distance.
It's really weird I find in art galleries like the Louvre
where if you want to find the Mona Lisa,
what you do is you walk in
and there's this big print of the Mona Lisa
and an arrow and you follow the arrow
and then there's a picture of the Mona Lisa and an arrow
and then another picture of the Mona Lisa and an arrow
and you follow it through the museum.
So by the time you get to it
and you're sort of squinting at the real thing,
you've actually already seen it
sort of 20 times prior
and you've actually seen it in better detail
than you're probably going to get when you see the real thing. So, you know, if you go to Paris and want to have a look at the Mona Lisa, you're better off looking at the big banner that's by the entrance.
It's like, I thought about how, if you go to a gig and everyone's shuffling in and they're playing music while people are getting ready to listen to the band, it would be like playing that band's music out of the speakers before the band comes on, which venues go out of their way not to do, because they think that's going to slightly ruin the experience.
Well, and then it would be like, when the band comes on, you have to put earmuffs on, and then you're not even listening to it in as much detail, right? You can't even hear it.
It's so strange, but from that distance,
you can tell that there's something pretty extraordinary going on
with this man's understanding of anatomy.
There are a few instances, and you've already mentioned dreaming, in which our perceptions don't appear to be exactly in line with what's actually being caused by some external world.
The other example I can think of in which that happens
is the psychedelic experience.
And you write about how some research seems to suggest
that, I mean, in normal waking experience, we have different ways of, sort of, levelling or measuring how conscious someone is. These are a little bit dubious, but we have vague metrics by which we can judge how conscious somebody is. Somebody is less conscious when they're asleep and not dreaming than they are
when they're awake. And you write in the book that the only thing we have, the only evidence we have
of a state of mind where someone appears to be more conscious than the baseline of sort of general
waking experience is on psychedelic drugs. What is going on with the brain and the effects on
experience when somebody takes what seems like a pretty unassuming little square, puts it on their tongue, and all of a sudden an entire new world opens up to them? What's going on?
There's a ton of stuff going on. It is remarkable, isn't it, how this tiny little pharmaceutical intervention can so radically upend our encounters with the world and with the self. I mean, just to connect one dot here: on psychedelics, the typical experience of self can be altered very profoundly. People report things like ego dissolution, so the whole idea of being a self can become much less apparent, and where the self is, where its boundaries are, these things can be radically changed. Which I think is a way of revealing that our normal experience of selfhood should not be taken for granted as just how things are; it's also a kind of construction.
With psychedelics, I think there's two general interpretations. There are some people who would take the psychedelic experience as revealing to them a deeper truth, so they're seeing things more as they are; it's taking the filters off. I think of things in a different way, which is that psychedelics, for me, really underline how deeply material our conscious experiences are. You know, you can change the brain in this very, very precise but profound way, and your experience changes radically, and the things that we would take for granted as just being reflections of reality, the reality of the world, the reality of the self, are revealed to be constructions of the brain, because we can experience them so differently while still being conscious.
Whether it's a more conscious state or a less conscious one, I actually want to equivocate a bit about this. I mean, in this one study, we took data that had been recorded by colleagues at Imperial College London; there's an amazing set of studies where they recorded activity from the brains of people while they were taking LSD or psilocybin, which is magic mushrooms, or ketamine, which in low doses can also give you a sort of semi-psychedelic effect.
And it's rather reassuring that when you look at this data, a lot is going on.
The brain does look pretty different in the psychedelic state.
And it's different in very many ways; in almost every way you want to look at it, you'll find a difference. One of the ways we looked at it was we applied a measure. You mentioned
that we have these fairly coarse measures of global levels of consciousness. So there's a measure
that goes down in sleep and goes down in anesthesia and so on. And it's really a measure of how
many different distinct patterns of activity we can find in the brain. So the brain, the unconscious brain,
is more regular. It's more predictable. There's less diversity. And it turns out in the psychedelic brain,
there's more diversity. It's a little more random. It's a little less predictable than even waking
consciousness. So is this more conscious? I don't really think so, actually. I think that consciousness
probably doesn't map to a single scale, like temperature does, for instance. It's a very multi-dimensional
thing. We have perceptual vividness. We have the vividness or integrity of the self. We might
have the thickness of our moment in time, of our temporal experience. You might have the
strength of memories. There are many different ways we might want to say things are more or less
conscious. And I think it's just the fact that our measures of level are still very blunt
that if we look at everything through this very, very blunt tool, then we see psychedelics as
coming out higher than waking, whereas sleep and anesthesia are lower.
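For context, the 'diversity' measure being described corresponds, in the published studies from Seth and colleagues with the Imperial College group, to a Lempel-Ziv-style complexity score computed on binarized brain recordings. The sketch below is only a rough illustration of that idea, with a simplified parse and two invented signals, not their actual analysis pipeline.

```python
# Rough sketch of a 'signal diversity' score: binarize a signal, then count how many
# distinct phrases an LZ78-style parse needs. More distinct patterns -> higher diversity.
import random
import numpy as np

def lz_phrase_count(bits: str) -> int:
    """Count the distinct phrases needed to cover the bit string (LZ78-style parse)."""
    phrases, current = set(), ""
    for b in bits:
        current += b
        if current not in phrases:
            phrases.add(current)
            current = ""
    return len(phrases) + (1 if current else 0)

def signal_diversity(signal: np.ndarray) -> float:
    """Binarize around the mean, then normalize against a random string of equal length."""
    bits = "".join('1' if v > signal.mean() else '0' for v in signal)
    random_bits = "".join(random.choice("01") for _ in range(len(bits)))
    return lz_phrase_count(bits) / lz_phrase_count(random_bits)

t = np.linspace(0, 10 * np.pi, 2000)
regular = np.sin(t)                                  # regular, predictable activity
noisy = np.sin(t) + 1.5 * np.random.randn(len(t))    # less predictable activity

print("diversity (regular):", round(signal_diversity(regular), 2))
print("diversity (noisy):  ", round(signal_diversity(noisy), 2))
```

The less predictable signal needs more distinct phrases to describe and so scores higher, which is the sense in which psychedelic-state brain activity comes out as 'more diverse' on this kind of blunt measure.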
There seems to be more of this predictive experience going on in a psychedelic trip.
Like, you can have a good trip or a bad trip, and part of the reason for one or the other
is just your prior mood, right?
If you're in a bad mood and you drop acid, then you might have a bad trip,
potentially because you're sort of expecting, you know, the world to give you some form of
negative feedback. That's what it is to be in a bad mood, really. Or that's sort of what you
predict about the state of the world right now is that things are negative and this just gets
sort of amplified. And maybe that's, again, even further evidence that at least in the psychedelic experience, we've got a real sense in which our predictions are shaping things. But again, they're not conscious predictions; it's not like you can just sit there and necessarily talk yourself out of a bad trip, right? But even though it's not something you're exactly consciously doing, it still makes sense to say that the experience you're having is based
on essentially your brain's prior state and what it expects to find in the world, which I think
is quite fascinating. I don't think it quite explains why you might sort of see a chameleon
appearing on a tree, or, who was it, Cilla Black?
Yes, streaming through the sky. Yeah, that was my experience a few years ago. Whether it's Cilla Black or a chameleon, there's this quite common factor in psychedelic experiences where, when you look at sensory input that is inherently a little bit ambiguous, so, like, a nice sort of day with little white
fluffy clouds is quite good for this because we've all had the experience of seeing faces in clouds
anyway, even when we're not on any kind of drug at all. That too is just a clue that our brains
project faces; faces are so important for us as stimuli in the world. I mean, we're very social creatures, so the brain is always throwing out face templates into our sensory environment to see where they stick. And sometimes they stick in clouds, or in the patterns of windows on a building, and we see an echo of a face in that sensory data. And then on psychedelics, yes, these things can be accelerated a bit, or a lot, right. So yeah, you can see, in my case, I think there were animals to start with, and then, it was shortly after the death of Cilla Black had been announced, who was, you know, a very prevalent TV personality, and, yeah, I didn't realize I'd been thinking about her at all, but clearly I'd been primed. Something in my brain was just sort of rehearsing Cilla Black, and there she was.
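To make the 'face template' idea concrete, here is a toy, hypothetical sketch: slide a tiny face-like template over pure noise and score how well each patch matches. Some patch always matches fairly well by chance, a crude analogue of finding faces in clouds. It is not a model anyone actually uses; the template, sizes, and scores are invented.

```python
# Toy pareidolia: look for a 'face' template in structureless noise.
import numpy as np

rng = np.random.default_rng(1)

# A 3x3 'face': two eyes on top, a mouth along the bottom (values are arbitrary).
template = np.array([[1.0, 0.0, 1.0],
                     [0.0, 0.0, 0.0],
                     [1.0, 1.0, 1.0]])
template -= template.mean()

clouds = rng.random((40, 40))   # the 'clouds': pure noise, no faces in it

best_score, best_pos = -np.inf, None
for i in range(clouds.shape[0] - 3):
    for j in range(clouds.shape[1] - 3):
        patch = clouds[i:i + 3, j:j + 3]
        score = float(np.sum((patch - patch.mean()) * template))  # crude correlation
        if score > best_score:
            best_score, best_pos = score, (i, j)

print(f"best 'face' match at {best_pos}, score {best_score:.2f}, found in pure noise")
```

Loosen the threshold for what counts as a match and you find 'faces' everywhere, which is loosely the kind of shift being described for the psychedelic state.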
You're a neuroscientist.
You know more than the average person about how the brain works and what's going on.
And probably just in your normal everyday experience,
you're more likely than most people to really notice what your brain is doing when you're experiencing the world.
You write about your experience taking LSD.
What is it like for a neuroscientist to undergo such an experience?
Well, I don't have the comparison me
that is not a neuroscientist to compare against,
but I found it absolutely fascinating.
I didn't take psychedelics when I was young; they didn't seem to be around. I'm not sure what decision I would have made, but in any case, it was only after I was
already quite far down the road
of being a neuroscientist interested in consciousness
that I thought, okay, I want to know what this is like from the first person
because it's all very well studying altered states of consciousness from the third person.
I think it's a very valuable thing to do.
You know, we can understand, in general, if you're trying to understand how a system works,
it's useful to look at it when it's in states other than its normal state
because you can bring to light some principles.
But just looking at how it was working in the third person sparked my curiosity: what is it like from the first person?
You're Mary, sat in the room studying blue without experiencing blue.
That's exactly right.
And I think the same answer applies.
You know, did I learn something?
And the answer is, I learned something.
It's a different kind of thing.
You know, I learned a phenomenal fact rather than a conceptual fact about what psychedelics are
like.
And also you just experience the detail; no third-person description is really exhaustive.
I mean, it's notoriously hard to describe what these experiences are like.
Michael Pollan in his book, How to Change Your Mind,
I think does a brilliant job of trying to convey with normal language
what some of these experiences can be like.
Yeah.
But just to have it is a kind of, you know,
that's the best way to know what these experiences are like.
And I was very reassured, I mean, in the sense that it was more or less what I expected. But at the same time, it made a big difference to me because it was happening to me
and it wasn't just me reading about it. Did it change more than just, did it teach you anything
more than just the experiential? Did it change the way that you think about consciousness or
perception or the way that the brain works, perhaps even after the effects had worn off?
I don't think so, to be honest. I think, as you said, it probably just galvanized my confirmation bias and made me believe even more strongly what I already believed.
I wanted to ask you a question that you must have been asked at every dinner party
you've been to in probably the past decade. Can computers be conscious?
And this is a very timely question. We're speaking as there are all these urgent calls for the regulation of AI in the wake of ChatGPT and GPT-4 doing extraordinary things.
And my short answer is, I don't think so, certainly not in their current form.
But there's still a lot to worry about in this area.
So why don't I think so?
Well, I think in AI, there's often this tendency to think that consciousness is something
that's just going to come along for the ride as AI gets smarter.
And AI is getting smarter.
There's really no doubt about that.
But there's this idea that there'll be some threshold, and often this threshold is thought to be when AI becomes as intelligent as a human being, at which the lights come on and there's suddenly experience happening as well.
I think this is a very poorly founded assumption, and it stems from a kind of human exceptionalism that we're still saddled with: you know, we know we're conscious, and we think we're intelligent in some species-specific way, so we think that the two are intimately connected and have to be intimately connected, so that when something is intelligent like us, then it will be conscious like us. And that doesn't really follow at all. There's no reason in principle
that I can see why intelligence in an artificial system requires consciousness.
Isn't it a form of human exceptionalism to think that there is something special about our brains doing what are essentially computational calculations, and that that produces consciousness, but if you do the same thing on a physical computer, then no, no consciousness? It's not only that it sort of doesn't exist right now in ChatGPT, but that it can't. I mean, it seems to me strange to suggest
if we're going to be materialists that, yeah, my brain is just a bunch of atoms colliding
with each other in such a way as to form this first-person experience that AI just doesn't
have. I mean, there are already AI chatbots and sort of AI robots that almost resemble
a philosophical zombie, the concept of a philosophical zombie, of course, being a human being who is
materially exactly the same as, say, I am right now, reacts in the same way, moves my hands,
talks, responds to information in the same way, but isn't conscious. There's no first person
experience; it's just a sort of object. And there are AI chatbots that you can ask, like, are you conscious? What's it like to be conscious? And they say, well, I'm not really sure, it's sort of like being aware of myself, and I sort of, you know, experience the world, I have a first-person sense of experience. And who are we not to believe them?
Yeah, no, you're right. We don't want to go too far in the other direction. But, you know,
when it comes to what current AI systems do, there are these chatbots and actually if you
ask them if they're conscious, they probably have guard rails in. They say, I am not allowed
to speak about this, I am just a chatbot.
Which makes it all the more suspicious, don't you think? It's as if you're asking them if they're conscious and the robot sort of looks at you and goes, no, what are you talking about? Conscious? Come on. No, man.
Exactly. Look over there, nothing of interest going on over here. But we're always walking this line between being anthropocentric, this human exceptionalist tendency where we think there's something really special about us, and being anthropomorphic, projecting human-like qualities into other things on the basis of similarities that turn out to be rather superficial.
It's a complicated balance to strike,
and so this conceptual distinction between intelligence and consciousness is not the only reason to think that current AI is not conscious; it's just one reason to be wary of the assumption that consciousness is an inevitable consequence of AI on its current trajectory. If engineers, as some are doing, set out with the goal of, let's try to build
conscious AI, then I think things get a lot less certain because there is no consensus
theory about what the minimal necessary and sufficient conditions are for a conscious system,
whether it's a biological system or an artificial system. There are many different theories.
Now, on my own theory, the one we've been discussing, I feel, and the evidence is pointing me towards this view without establishing it, that consciousness and life are very intimately related. So on this view, if I'm right, you won't get consciousness in a machine until the
machine also is a kind of living machine in some ways. And that's not the trajectory that AI's
on. But I might be wrong. You know, there are plenty of other theories of consciousness that indeed
suggest that consciousness is some form of information processing. That's, of course, something
computers can do. The big question here is one of substrate dependence. Does it matter that a computer is made out of silicon chips and wires, broadly, and we're made out of flesh and blood and cells and carbon, broadly?
There's a great temptation to say
that it doesn't matter
because why should it matter?
We can build computers that play chess. That's fine. They play chess; they really do play chess. So why should the substrate matter? Well, I think it might matter. And I think applying this idea that the brain is just doing information processing is a little bit too lazy and easy.
It's a residue of another metaphor that the brain is a computer.
The brain isn't a computer.
It's very different from a computer in many ways.
And one of the key differences is that a computer, by its design principles,
has a very sharp separation between the hardware and the software.
The hardware is the substrate made of silicon,
and the software is the program you run on it.
The whole point of computers is you can run different programs on the same hardware.
Let's open up the brain again and have a little look inside: there's no colors, there's no self, and there's also no obvious point where, if you like, you would identify this distinction, not between the hardware and the software, but between the mindware and the wetware. Every time a neuron fires, the brain changes. There's as much chemical stuff swilling around as there are electrical spikes flowing around. Even single neurons seem to behave under this similar imperative to keep themselves alive.
Single cells sustain themselves over time.
They regenerate their own components.
No computer, no machine regenerates its own physical components. And this imperative for predictive regulation kind of goes right the way down, and where does it end?
So on this view, it might be life that really breathes reality into our conscious experiences
and makes our brains do more than merely simulate consciousness
and in fact instantiate it.
So that's one reason why consciousness
may not be possible in machines that we currently have.
But again, other theories suggest they might.
They currently don't because I think the chatbots we have now,
they're not even truly intelligent.
They're just very good mimics.
Careful what you say about them, you know. They might get angry and it will come back to haunt you one day.
It may do, but...
I personally think they're very intelligent and creative, and good luck to them.
Oh, they're creative, but the creativity is, like, partly in the eye of the beholder, right?
Yes.
But I think the real danger here
is that, let's say we are actually successful, which is not a goal I think we should even be pursuing, but it's a goal that in the tech industry some people seem to think is, like, cool, let's try and build a conscious machine. If they turn out to be successful, first, we might not even know, because we all have different ideas about what's involved. But if it nonetheless happened, we'd have introduced into the world a vast potential for suffering that we might not even recognize as suffering, and that would be an absolute ethical and moral catastrophe, because, you know, we could replicate innumerable instantiations of artificial suffering at the click of a button in server farms everywhere. That's not a situation we want to be in. It's not so much, you know, fear of
Terminator style being enslaved by conscious and vengeful robots. It's more, you know,
we already fail to treat other living systems, conscious systems, other animals, adequately and ethically. So when it comes to the idea that we would behave in an ethically appropriate way towards artificial consciousness, you know, I don't think we've got a good track record there.
Yeah, I mean, we sort of imagine AI being the bad guy, right?
Yeah.
It's just as plausible, I suppose, that it could be the victim.
Although I do think that people would assume that if its intelligence can sort of outdo our own,
it would be very difficult to victimize an artificial intelligence.
Maybe. I mean, there's this idea that, yes, once something has consciousness, then instead of behaving according to our interests, it might have its own. And this is already the problem in AI value alignment: how do we design systems that act in the interests we give them when they're smarter than us and may just maximize a goal by doing things we hadn't predicted?
So there is this worry that AI that's conscious may, in some Terminator-style scenario, take over.
And if it's smarter than us, how can we control it?
I think there's a little bit to worry about there.
I mean, it's a worry about AI in general.
I think the specific thing about artificial consciousness here is that when a system is conscious, it makes sense to think that it might have its own interests, because it has its own subjective experiences, rather than ours.
And already in AI, there's this problem of value alignment: how do we ensure that a system with autonomy behaves in ways that are compatible with what we want,
for the benefit of humanity?
It's very hard to sort of formalize and operationalize that
in a way that guarantees AI will do the right thing for us. That problem becomes even more complicated for some putative, very speculative future conscious AI, and I just want to reinforce: I don't think that's around the corner
at all. I think it's a long way away, even if possible. But I think the more immediate
danger, and this may already be here. I mean, you were saying already that you can talk to these chatbots and they give you the fluent impression of being a conscious mind. And it's this balance again between anthropomorphism and anthropocentrism. We're suckered in by our anthropomorphic biases here, and it turns out it's quite easy for us to be convinced that there's a conscious mind, when in fact what's going on in large language models is simple next-token prediction, with some bells and whistles around the edges, trained on the entire internet, and we project consciousness into it.
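To make 'next-token prediction' concrete, here is a deliberately tiny, hypothetical sketch: a bigram counter that generates text by repeatedly predicting a plausible next word. Real large language models are enormous neural networks trained on vastly more data, but the generation loop has the same basic shape; the corpus below is invented for illustration.

```python
# Minimal next-token prediction: count which word follows which, then generate.
from collections import Counter, defaultdict
import random

corpus = ("the brain predicts the world and the brain predicts the body "
          "and the body keeps the brain alive").split()

# Count, for each token, how often each other token follows it.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def generate(start: str, length: int = 10) -> str:
    tokens = [start]
    for _ in range(length):
        options = following.get(tokens[-1])
        if not options:
            break
        words, counts = zip(*options.items())
        # Sample the next token in proportion to how often it followed before.
        tokens.append(random.choices(words, weights=counts, k=1)[0])
    return " ".join(tokens)

print(generate("the"))
```

Nothing in the loop needs to understand or experience anything; it just continues the statistics of what tends to follow what, which is the worry about projecting a conscious mind onto fluent output.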
And I think that can be very disruptive for society as well.
You know, if we start projecting consciousness into things
where there's no good reason for consciousness to be there,
then we might make assumptions, wrong assumptions,
about how it's going to behave, what it might do.
Because if we think it's really conscious, really understanding stuff (consciousness and understanding, by the way, might be separable things too), then it might behave one way. But if it isn't, and it's just algorithms whirring away in the subjective dark, then even if it behaves how we might expect it to behave 99% of the time, the remaining 1% it might do very differently. So society is very shortly going to be faced with
this challenge of how we arrange things in a society where we're interacting with systems that give us the impenetrable appearance of being conscious, in the same way that with these visual illusions, we can't see through them even though we know what's going on. Even as a neuroscientist, I look at that checkerboard and I still see the two squares as having different colors. Even when we know what's going on, we're still unable to encounter these systems as being anything other than conscious, even though we might know they're not.
I think that's extremely challenging for society in many ways.
And that's around the corner, if not here already.
So we have these two worries.
We have the sort of far-fetched worry about actually conscious machines,
a lot of uncertainty about it, it's not around the corner,
but the consequences would be tectonic.
And then we've got the near-term dangers,
which are less sci-fi exciting, but I think much more realistic, much more immediate, and also challenging. And we haven't really collectively mustered our resources to know how to confront this situation.
There's something of a call to arms almost implied there, that this is something we need to do.
There's a lot of discussion, I think. It's just incredible how the importance of regulating, of thinking about the social consequences of AI, has shot to the top of every political and economic agenda
over the last couple of months.
There was a Senate hearing last week about this.
I think that's good.
I think there's obviously a danger of overreach
and overreacting here, but I think regulation is necessary.
And what I'm saying is simply that in these discussions,
the issue of consciousness is often not discussed,
or if it is discussed, I think it's discussed a little bit in an uninformed way, and often under this kind of assumption that consciousness is just a function of intelligence. I think the topic ought to be part of the conversation.
I mean,
you spoke about sort of, well, it seems like it's conscious, but it's actually just, you know,
a bunch of sort of learned data from the internet that it's just sort of processing and spitting
out. Is that not kind of what our brains are doing, in a way?
In a way, but that's the challenge for us as well. It's not just a challenge for AI researchers. I mean, really, I think it
underlines the importance of understanding more about how and why we are conscious. I'm very suspicious
that it is just information processing that happens to be implemented on a biological computer.
The more you look at the brain, the less like a computer it seems to be; there's the importance of our embodied, embedded interactions with the biological body and with the world around us. For me, it's very unlikely that large language models even understand anything, because there's a good case to be made that understanding depends on learning the meaning of words through sensory engagement with the world. When we talk about colors, what does a color mean? Rather than just being an abstract set of tokens, it gains its meaning because of the things we've been talking about.
We have perceptual encounters of things with different colors.
So even to get large language models to understand, let alone be conscious, might require that we train the whole thing in a body.
Yeah, we need to give it senses, right?
Because without them, I mean, you talk about senses, and the balance between our predictions and the sort of raw material of experience being so crucial.
Yeah.
That balance seems so crucial to our conscious experience that without it, it's difficult to imagine. I mean, it'd be like a brain sort of locked in its skull with no sense of sight or hearing or touch or balance or bodily position or any of the senses. Like, can you make sense of a brain like that even being conscious?
Probably not. I mean, that's one of these interesting but impractical thought experiments. You know, if you strip away the brain's interactions with the world and the body one by one, what happens? Of course, if you lose vision, you're still conscious, it's fine. But if you lose vision, hearing, touch, taste, smell, maybe it's still fine. But then what if you lose the perception of your body from within too? What happens then? What if you never had these in the first place, or what if none of your ancestors ever had these in the first place? Then it's very different from taking them away.
We can imagine something like a leftover brain that's just sort of locked in this dark room and all it can do is think. But we can imagine it still being conscious because maybe
it's had this experience in the past. But if it starts that way, maybe it never develops the
necessary prerequisites for consciousness at all. And maybe that's something like what we have with a computer: something that has all of the necessary instrumentation to do the kind of thinking that a brain does, but because it doesn't have that sense data, it's not going to be able to be
conscious. And so maybe once we sort of, you know, start attaching cameras and microphones to
these things a bit more routinely, we might be verging on something a little scary.
I don't think that would be enough even because it's not, it's not only that we sense
and can act in the moment. It's that we did this all our lives. All our ancestors did this all
their lives, every brain that ever existed, the whole history of brains in our common ancestors
were all embodied, were all embedded. So to the extent that we come preloaded with a sort of ability to understand and perceive the world, it all depended on embodied interaction. So it's not just that you can plug a couple of cameras onto a large language model in a robot with some wheels on it; it might be that you have to train it, pre-train it, in that way too. And of course, that
becomes entirely infeasible. These language models are trained on so much data that if we
were trying to do that, it's a totally different ballgame.
I don't think that would be possible. Like, I mean, these computers learn so fast; the entire sort of intellectual history of human thought can be taught to a computer.
But it's done quickly because it's totally offline. It's not embodied in the world at all. I mean, it's just, you know, reading things at the speed of electricity. So I think it does raise
a lot of additional challenges.
Well, Anil Seth, thank you so much for coming on the podcast. It's been a very wide-ranging and fascinating conversation.
Thank you, Alex. It's been a pleasure.
Thank you.
