Within Reason - #152 Why AI Will Never Be Conscious
Episode Date: April 20, 2026
Get Huel today with this exclusive offer for New Customers of 15% OFF with code alexoconnor at https://huel.com/alexoconnor (Minimum $50 purchase). For early, ad-free access to videos, and to support the channel, subscribe to my Substack.
Anil Seth is a British neuroscientist and professor of Cognitive and Computational Neuroscience at the University of Sussex. A proponent of materialist explanations of consciousness, he is currently amongst the most cited scholars on the topics of neuroscience and cognitive science globally. Read Anil's essay, "The Mythology of Conscious AI".
- TIMESTAMPS
00:00 - The Difference Between Intelligence and Consciousness
03:55 - What's Stopping the Replication of Consciousness in AI?
17:01 - Can You Separate What the Brain Is From What It Does?
22:20 - Is Conscious Experience Just Predictions From the Brain?
26:48 - Why Do We Project Consciousness Onto LLMs?
37:27 - Can Consciousness Exist Without a Body?
42:25 - Why We Liken the Brain to a Computer
52:11 - Is There An Evolutionary Reason For Consciousness?
56:29 - Studying Unconscious Perception
1:01:21 - Is Consciousness Unified? Split-Brain Patients
1:15:10 - Attention and Consciousness
1:19:04 - What Would a Conscious Chatbot Even Look Like?
1:25:13 - Consciousness as a Controlled Hallucination
1:34:19 - Do Scientists Actually Study "Consciousness" At All?
- CONNECT
My Website: https://www.alexoconnor.com
Twitter: http://www.twitter.com/cosmicskeptic
Facebook: http://www.facebook.com/cosmicskeptic
Instagram: http://www.instagram.com/cosmicskeptic
TikTok: @CosmicSkeptic
- CONTACT
Business email: contact@alexoconnor.com
Brand enquiries: David@modernstoa.co
Transcript
Anil Seth, welcome back to the show.
Thanks, Alex.
What is the difference between intelligence and consciousness?
Both resist precise consensus definitions.
But I think intuitively, they are fundamentally different kinds of things.
If we think about intelligence, it's about doing something.
It could be about solving a complex problem or solving any kind of problem.
But in general, it's about the capability to do things,
functionally, behaviorally. Consciousness is primarily about feeling and being rather than doing.
It's, in the words of Thomas Nagel more than 50 years ago now (1974, I think), that for a conscious organism, or a conscious anything, there is something it feels like to be that organism. And he doesn't mean feeling in the sense that it has to be necessarily emotional or anything like that, simply the idea that there's some sort of interiority, there's some kind of experiencing going on. It feels like something to be me, to be you, but probably not to be a chair. And I think
this distinction between feeling and doing is intuitively right. It doesn't mean they're unrelated.
I think they're certainly related in things that exist, in living creatures, for instance, but they may not have to go together in general.
You just wrote an essay, a prize-winning essay.
Congratulations, by the way.
The mythology of conscious AI.
And the reason I opened with that question is because you felt it important to tease apart those two concepts.
And that's got a lot to do with the fact that in our search to understand consciousness,
to figure out what's going on and why there is anything it's like to be anything,
we look to what we know, which is that I know I'm conscious, you know you're conscious, and we know that there's intelligence, to some degree, in both of our brains, I like to think.
And so we kind of make this assumption that maybe they always come together in that kind of way.
But you start talking about this to try and pull those concepts apart a bit, right?
It's one aspect of the discussion.
So I think the overall framing for this is to try to reason through what has become quite a confusing topic, which is whether artificial intelligence could be conscious. It seems to be intelligent, at least in some ways, but there are various views and a lot of
kind of, I think, somewhat unexamined views about whether AI can be not only smart, but also
conscious. And one part of the discussion is to examine our own psychological biases. Why might we
even think that an AI system could feel as well as do intelligent looking things? And part of that is
indeed to recognize that just because things go together in us, like intelligence and consciousness,
doesn't mean they have to go together in general.
Across all living creatures, I think there is this strong relationship.
So it's certainly the case that for certain kinds of conscious experience, you need to have
certain kinds of cognitive capability.
So, for instance, if you're going to have the experience of regret, then you need to have
the cognitive competence to envisage alternative courses of action and their possible outcomes.
If you don't have the functional capability to do that, then you can only really experience
something like disappointment or maybe even sadness. So the relationship goes that way,
but it's not clear that in virtue of being cognitive in that way or intelligent in some way,
that you have to do that in a way that is accompanied by consciousness.
So you said this has become quite a confusing topic.
It's a bit of a sort of buzzy issue.
I'm almost sort of hesitant to talk about it.
In fact, when I've done episodes about consciousness,
I've kind of avoided the topic of AI consciousness because it's like asking whether we
live in a simulation or something.
It's like, people are talking about it, and there's kind of nothing super interesting to say about it.
People kind of go, oh, like, you know, maybe who knows?
we wouldn't be able to tell.
But you've written a really interesting essay,
not just sort of going like,
well, look, you know,
how are we going to be able to tell if it's conscious,
but specifically talking about the nature of consciousness
and its relation to living organisms.
And I think that,
weirdly, you kind of want to start by pulling apart the two concepts.
You want to say, let's not assume that all consciousness
and all intelligence works the same.
But the place you end up is kind of where our intuitions start,
which is that, yeah, like, to get this consciousness,
you need to be essentially a biological organism.
I want to maybe just talk through how we sort of track that trajectory.
So where to start?
I mean, the idea of AI consciousness is the idea of mechanistic consciousness,
that whatever it is our brains are doing to produce consciousness can be replicated with a machine.
And there are thought experiments to this effect, of taking every single neuron in your brain and just replacing it with an electrode or a transistor or whatever the relevant analogous part would be. And if I did that for one of your neurons,
just swap them over. Probably everything would function. You wouldn't really notice any difference.
And slowly over time, it feels like we should be able to, in principle,
swap the biological electrical signals in your brain for mechanical electrical signals.
But presumably at some point you think this would sort of break down or it wouldn't work
in the first place, or there's some kind of barrier between those two ways of trying to produce
experience. And I just want to know why you think that's the case. I think that's a good
place to start. But let me just add one nuance to how you set this up, which is that the idea of AI being conscious is not just that it's the result of the operation of some mechanism. It's usually the more specific claim that the kind of mechanism in question is implementing computations, and it's in virtue of those computations that consciousness happens. This is the philosophical assumption of computational functionalism, rather than functionalism in general. Functionalism in general is the idea, it's quite a liberal idea really, that consciousness is a property of the functional organization of a system like a brain, an embodied, embedded brain. But that can encompass things like its internal structure, its causal architecture; it can be pretty specific.
Computational functionalism is a subset of that, and it says that it's the computations that matter.
And typically, when people think about computations, they think about computation in a sense inspired by Alan Turing, which is that computations in the Turing sense are independent of the material, independent of the substrate. There's a sort of sharp division, if you like, between the software and the hardware.
The algorithm is what matters, and the algorithm is a mapping from one sequence of symbols
to another sequence of symbols, and it's that mapping that matters.
So it's quite a strong claim, really.
And that's the claim that, I think, looks like it's on quite shaky ground, the more you examine it.
And one way to examine it is indeed this kind of neural replacement scenario that you mentioned.
So it's often put on the table when people want to justify the idea that consciousness could be implemented in silicon just as readily as it can be implemented in carbon. And as you put it, the thought experiment (I think David Chalmers popularized it, but it's got ancestry before that) is indeed: I take one neuron, I replace it with a perfectly functionally equivalent silicon alternative. And it has to be
kind of perfect for the thought experiment to carry through because if it isn't, then you can
immediately say, oh, well, those differences, they may matter. And I can do this for one neuron.
You know, neurons fire spikes. Why not? And almost certainly nothing will happen to your consciousness.
If I do it for 10, is anything going to happen? Probably not. Then I can do it for a million,
86 billion. And so this is pushing the intuition that if it doesn't matter for 10,
why should it matter for any number, therefore substrate independence. What matters is the
sort of functional organization. Firstly, I think it just begs the question. And Ned Block, another philosopher, pointed this out. He said, well, you know, maybe in fact it would be the case that if consciousness depended in some way on the actual stuff, then as you replace more and more neurons,
you would experience consciousness fading
or consciousness may fade,
you may not even notice as the subject of that.
So it does kind of beg the question.
But I think there's a more fundamental problem,
which is that like many thought experiments of this kind,
the more you look at it,
the less conceivable it is.
It's really, really unlikely that you can perfectly replace a biological neuron with a silicon alternative.
In fact, you can probably only replace it with another neuron and probably the same one.
So there's one example which I really like, and there are many examples you could choose, which is that some neurons fire spikes to clear the waste products of metabolism.
So how do you replicate that in silicon?
You've got to give the silicon a kind of metabolism too, and that's not the kind of thing that silicon can have. So that thought experiment, to me, underscores a fundamental difference,
I think, between brains and Turing-based digital computers, which is that in brains, you just can't
separate what they are from what they do in a way that's sort of enshrined into how we think
about the kind of computers that we're familiar with and the kind that are running all the
server farms that underpin all the language models we talk to. Yeah, that's so interesting, because you've got to think about how far you want to push this, because somebody will say, no, no, no, but you don't understand. Like, we're going to just perfectly replicate the neuron. And then, yeah, we'll perfectly replicate the body as well if we need to come up with metabolism, sure. And it's like, okay, but to what degree? So, like, on a macro level, you look at a neuron and it's got inputs and outputs. So you replicate that with silicon, but it's not quite complex enough; it doesn't have all of the inner workings. So you go down to the level of the parts of a neuron, and you replicate each one and you stick them together, but it's still not quite there. Well, let's just keep going deeper until what? You get to the molecular level. And it's like, oh, I'm just going to create a perfect replica of a neuron. At a certain level of resolution, in order to do that, you would just be creating a neuron. Exactly. It wouldn't even be, you'd be below the level of recognizable silicon. It would just be electrons and protons that you're sort of ordering in some way. And so it's like, yeah, maybe you could create a silicon creature, but you kind of have to be a bit more specific and be a bit more detailed.
And what you're actually doing is not building a computer and calling it conscious,
but just building this Frankensteinian biological being and calling it artificial
just because you've created it with your bare hands.
Which I think is pretty reasonable.
I mean, there's a whole other branch of sort of artificial things, which is synthetic biology.
And we have people creating brain organoids in synthetic biology labs, which are clusters of cells, neurons, usually derived from human stem cells, so they're in fact human neurons.
And they wire up together and they start to show activity.
They don't really do anything very interesting, at least not yet.
But indeed, they're made out of the same stuff.
Now, we can't say for sure that that is a necessary condition,
but if you're creating something that is literally made out of the same stuff,
then that whole uncertainty goes away.
You no longer face the question of whether you could do it in silicon
because you're not doing it in silicon anymore.
So the position that you land on,
and mythology is a weak term and a strong term,
because it's not saying, you know, this is complete nonsense,
but it's definitely saying that there's a bit of weird stuff going on
that we need to iron out.
Do you land on the position of like, based on my understanding of consciousness and the way that brains work, AI just cannot be conscious?
Or is it a more reserved like, let's slow down a bit here.
Like, maybe it's possible, but the general conversation around it is a bit confused.
Like, how strong is the condemnation of this view?
I'm very skeptical, but I think in discussions about what kinds of things are or could be conscious, we need a certain residual humility.
Sure.
I think simply because we just, we don't know.
You know, I'm a phenomenal realist as well. I think conscious experiences exist. So, from that starting point, I think there's something about the embodied, embedded biological brain that is sufficient for consciousness.
The game is to figure out what that is.
Now, computational functionalism of the sort that would license claims that language models
or other kinds of digital silicon AI could be conscious,
is making one specific bet
that below the level of the algorithmic description,
things don't matter.
But there are many other things that might matter
that are much more tied to a biological substrate
than an algorithm,
which by definition is divorceable from it.
So I think that anyone who says that AI is conscious, or that it's impossible for it to be conscious, is overstating what can reasonably be said. But having said that, I think conscious digital silicon AI is so unlikely.
And I think the reason it's so unlikely is partly because there are many reasons why we are likely to over-attribute consciousness to these things, reasons that say more about our own cognitive biases: these conflations of intelligence and consciousness, and the specific seductions of language as something that draws us in. So I think that says more about us than about the system. But more fundamentally, because of this deep way in which you can't disentangle what brains are from what they do (it's a property some people call generative entrenchment), this makes it very unlikely that the computational level, even if there is one in the brain, is what matters, that it can carry the load of everything that brains do. Yeah. If you put those together, then I think there's good reason to be very,
very skeptical that AI, as we know it now, is capable of consciousness. Yeah. We'll get back to the
show in just a moment, but first, on days like today, when I'm working and recording multiple episodes,
I sometimes just like forget to eat, which would be bad enough if I wasn't already really bad at keeping on top of my nutrition.
But also, I kind of don't want to spend any time trying to make some food.
And luckily, in situations like this, Huel, today's sponsor, comes to the rescue.
That's H-U-E-L.
This is the black edition.
And this bottle contains a complete meal.
400 calories, 35 grams of vegan protein, 27 vitamins and minerals, and coming in at under $5.
This is also the chocolate flavor, but Huel comes in all kinds of flavors, and different types as well. There's the classic edition, this black edition, there's a light edition, and of course the powdered form too, whatever you prefer. I drink Huel almost every day. It's extremely convenient, it's gluten-free, there are no artificial flavorings or colourings or sweeteners, it's high protein, low sugar, low cost, and if you go to Huel.com forward slash alexoconnor and use the code alexoconnor at checkout, then as a new customer, you'll also get 15% off: complete nutrition while saving your time and your money. And with that said, back to the show.
I want to talk about those biases that you talk about in the piece, the three sort of facts about
human psychology that make us sort of want to think of AI as conscious. But I'm also really
interested in what you said and what you wrote about what something does versus what it is.
I get a lot of stick for wanting to know what things actually are more than just what they
actually do and it causes a bit of a fuss sometimes when I really become a stickler on the
point. But when you say that in a brain, you can't separate what it is from what it does,
what precisely do you mean by that? I don't think I mean it in the metaphysically loaded sense of what is the essence of the universe, of reality. We can get to that. But I don't think
I'm making any kind of claim that's that strong or that controversial.
I'm completely happy in this conversation to think of what something does as related to what it is. And if you ask what it is, you can pull it apart a bit more, from at least a methodological reductionism, and you can explain what it is in terms of what it does at lower levels. It's just that the more you do that, the less flexible you become with respect to realizing those same kinds of functions in different kinds of materials. So, you know, just take something very, very uncontroversial, like, I don't know, building a bridge across a river.
Only certain kinds of materials are up to the job.
Yeah.
Yeah, you just can't build a bridge out of string. You know, it's not going to... well, maybe you can, actually, if it's wire. Anyway. It's not the ideal substrate.
You can't build it out of cream cheese or water.
Yeah.
So there are some things whose properties depend on the material that they're made of.
Yeah.
And it may well be that biological systems are subject to many of these kinds of constraints.
Yeah.
So it's not a radically controversial claim at all. It's not saying there's something magical about the substrate, magical about carbon. I think there are kind of weak and strong versions of saying the stuff matters.
One is saying that what matters is what it does,
but what it does is only realizable in certain kinds of material.
The computational view tends to push back against that by saying,
well, look, the whole point about computation is that you don't worry about what it is.
You know, many things can do it.
But that has to be justified.
I think the problem is you can't take the computational view as a starting point, as an assumption taken for granted. You've got to give good reasons for why computations should be sufficient. So that's a relatively uncontroversial view, just saying that, okay, consciousness may
still depend on the functional organization of a system, but that that functional organization
cannot be realized, implemented, instantiated in non-biological systems, or at least certainly
not in arbitrary materials like silicon. Maybe other things can do it, but maybe
many other things can't.
There's a stronger version which I am kind of attracted to,
but I'm not at all sure is right,
which is the stuff matters in a more fundamental way,
in a similar way to, let's say, how the molecular structure of H2O matters
for properties of water.
And here, you know, I think it's not an unreasonable position that there is something that sort of explains why there's phenomenality at all in our brains, our embedded, embodied brains.
And that factor may be something like the energetics of metabolism,
the self-producing, autopoietic characteristic of living systems.
I don't know if that's true, but I think it's really worth considering.
It's at least as plausible in my mind as the idea that it's a matter of computation.
Yeah, because a bridge can only be made out of certain materials, but it still does rely on a
particular functional complexity and a particular organization of that material.
But to say that the way in which this material is organized gives rise to something like a bridge
is not to say that that's the only thing you need.
You also need the material that allows those properties to actualize a bridge, right?
You can't make it out of cream cheese.
And in a similar way, like, you know, the functionalist or someone who thinks that a computer can create consciousness is a bit like someone who says, well, let's just take, you know, every like bit of wood or concrete and replace it with a bit of cream cheese.
And instantly, it's just, that's just not going to work unless you really go down to the level of like, well, let's replace every atom in the cream cheese with every atom in the, in the concrete.
But then you've not got cream cheese anymore, right?
Then you're talking about something different.
So I think you're probably going to then need to have some understanding of what it means for something to be biological, what it means for something to be like a living system.
And that's only going to go down to a certain level of complexity.
So once you go deeper, you're just doing like atomic physics.
But something about being alive is crucial to consciousness.
But is that more because of like the materials that living things are made out of, like carbon?
Or is it more to do with the sort of functional reality of what living systems do?
Or is it both together?
I think this is the really exciting open question.
So I think, and it's the third part of the general case against conscious AI as we know it.
The first is the collection of biases that we have.
The second is the skepticism that computation or substrate independent computation is sufficient.
But the third is equally important and more challenging.
in my mind. This view is biological naturalism, the idea that consciousness is a property of living systems.
I think some of the first expressions of this were a little bit blunt, if you like.
It's just, it's almost an identity.
No positive reasons, just consciousness is a property of living systems.
That's that.
And I'm not satisfied with that kind of explanation.
I think you've got to give some positive reasons why. What is it about biological systems that matters for phenomenality?
And, you know, I think there are no knockdown arguments for why this has to be the case.
But one story, which I give the outlines of in this essay that we're talking about, and longer versions elsewhere,
is a story that runs from thinking about the brain as a prediction machine in which the contents of
conscious experiences can be understood as underpinned by brain-based predictions about the causes
of sensory signals.
So I've usually called this the concept of a controlled hallucination, something like that.
And that can be thought of computationally because it's all about Bayesian inference,
but it doesn't have to be implemented computationally in the brain.
I don't think it is implemented computationally.
I think it's done in different ways.
And the story tries to draw a line all the way through from this process, which tells us things about why conscious experiences have the characters they do, have the phenomenal properties they do, all the way down to processes like metabolism and autopoiesis, where living systems, by their very nature, have to not only reproduce but produce themselves: keep on regenerating their own material basis and maintaining themselves out of equilibrium with their environment, out of thermodynamic equilibrium with their environment. That is what it means to be, or one part of what it means to be, alive. If you're in thermodynamic equilibrium, you're no longer alive. And that too can be understood as a kind of process of minimizing the surprise of sensory data, through this quite difficult to unpack but very interesting free energy principle from Karl Friston and so on.
Yeah.
Now, there's a lot in there and a lot of steps, not all of which are very clear. In fact, I think the trickiest step is to go from the idea of free energy in the free energy principle, which, at least when applied to life, is primarily about thermodynamic free energy of the sort that metabolism is involved in (real energetic work being done to transform substrates into energy that can be used by the cell to reproduce its parts), to connecting that to this kind of more information-theoretic free energy that corresponds to prediction error when we think about the brain as a prediction machine. Now, you can line these up, but I'm still personally not satisfied yet with how this works.
Okay, I see.
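To make the "prediction machine" idea above a little more concrete, here is a minimal editorial sketch of precision-weighted prediction-error minimization, the simple one-dimensional Gaussian case of the Bayesian inference Seth describes. The names and numbers are illustrative, not taken from his work; under these Gaussian assumptions, the information-theoretic free energy he mentions comes out as, roughly, squared prediction error weighted by precision.

```python
# A toy "prediction machine": a belief about a hidden cause of sensory
# signals is nudged by precision-weighted prediction errors.
# Purely illustrative; a one-dimensional Gaussian sketch, not Seth's model.

def update_belief(belief, sensory_input, sensory_precision, prior_precision):
    """One step of belief updating, expressed as error correction."""
    prediction_error = sensory_input - belief  # mismatch between prediction and data
    # How far the error moves the belief depends on relative precision:
    # trust in the senses versus trust in the prior prediction.
    gain = sensory_precision / (sensory_precision + prior_precision)
    return belief + gain * prediction_error

belief = 0.0  # prior prediction about the hidden cause
for observation in [1.0, 1.2, 0.9, 1.1]:  # noisy sensory samples
    belief = update_belief(belief, observation,
                           sensory_precision=1.0, prior_precision=4.0)
    print(f"updated belief: {belief:.3f}")
```

The belief drifts toward what the senses report, at a rate set by how reliable each source is taken to be; that trade-off is the "controlled" part of the controlled hallucination.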
Okay, so I want to talk for a moment about,
you mentioned this briefly earlier, and you've written about this too,
about the way in which specifically like LLMs
have just burst open the door to credulity about conscious AI.
Because we've got artificial intelligence doing all kinds of different things.
you know, there's AI in a self-driving car, there are chess computers, whatever,
but very rarely are you playing against a chess computer and think, gosh,
it's so good at, you know, knowing how to checkmate me, it must be conscious.
There's something about language models that make people go,
there's something funny going on.
And, you know, if the brain is making predictions,
if the brain is sort of working like a prediction machine,
people might say that's kind of how LLMs work.
They don't actually speak, exactly.
They don't use language.
They just predict what you would like to see next, what word you would like to see next.
And then they sort of successfully produce it.
There's something a bit eerie, maybe, about the language thing. And you write about how it's a little bit suspicious that something that's so obviously just trying to mimic human behavior has made us go, oh, maybe it's conscious.
Yeah, I think that's part of the mythology aspect.
There's a sense in which some motivated reasoning is going on here.
We're still very human exceptionalist in our outlook, I think.
And we can think of kind of the march of science and philosophy
as repeated attempts to challenge or at least nuance this human exceptionalism
because there are things that are distinctive about human beings.
But, you know, we're not at the center of the universe.
We're not unrelated to all other animals.
and I think cognitively and in terms of consciousness, we're part of nature as well.
Language has been one of those things that we think sets us apart.
This is actually under pressure in many directions, right?
It's under pressure from LLMs because they, I think it would be unfair to say they don't have some form of language.
Unkind.
Unkind.
I don't mind being unkind to the AI, but we shouldn't be for other reasons for our own psychological health.
No, I know what you mean.
And other animals: there are actually all these interesting applications of AI now to decode non-human animal communication.
So I think language is going to be itself a little bit on unstable ground as something that motivates our human exceptionalism, which is part of this mythology of AI.
But let me come back to the point.
So you mentioned chess-playing programs and people not projecting consciousness into a chess-playing program. That's probably true, although when DeepMind had AlphaGo, there was this famous move. I don't remember people thinking that AlphaGo was conscious when it played Move 37, or whatever the number was, but I still think there might have been a temptation to project some kind of mind in there, just a very uncanny one.
The example I think is easier is AlphaFold, which is another AI system developed by DeepMind that helps you predict the structure of proteins.
Very, very useful.
I've never heard anyone describe this or worry that it might be experiencing things
when it's doing its protein folding algorithmic acrobatics.
But under the hood, it's very, very similar to LLMs.
You know, it's a bunch of neural networks and transformers running on silicon hardware,
tuned up in different ways, trained on different data.
But if somebody thinks that GPT is conscious but AlphaFold isn't, you know, I'd be really interested in what justifies that distinction. It's kind of hard to justify. Yeah, and it seems to just be sort of that it passes the Turing test, right? That it's indistinguishable for us from a conscious agent. But again, that's very much about how it looks to us rather than about the thing itself, and it's so easy to be fooled by these kinds of things.
That's right.
That's why, you know, I think if we are too tempted by seeing things through the lens of what a conscious human being is like, and bind consciousness, language, and intelligence too closely together, this can lead to both false positives and false negatives.
You know, we can see consciousness when it's not there in the statistical acrobatics of LLMs,
and we might miss consciousness where it really is, as we've done in the whole history of human treatment of non-human animals, and as we might now do in synthetic biology and organoids.
Yeah, and of course the consciousness that's in the atoms that make up the universe that I know you have a lot to say about.
But I think we do sort of both rely on this intuition in different directions in that like, you know, where I want to go off on my weird tangents about idealism and panpsychism and all that kind of stuff, I often begin by intentionally just trying to isolate consciousness.
Before we can study anything and look at its nature, you've got to isolate the thing that you're talking about.
And, you know, consciousness is not the same thing as the content of conscious experiences.
Like, you know, I'm seeing things right now.
I'm seeing a chair.
But like the visual of the chair is not the same thing as consciousness.
It's one of the things that's happening to my consciousness, as I like to think of it.
And so I like to ask people, you know, if I closed your eyes, if I took away your eyesight, would you still be conscious?
And the answer is yes.
And you can take away hearing and they're still conscious.
you can take away memory and they might still be conscious.
And whatever you're left over with is the thing that consciousness is.
Now, for me, that allows me to say, and that means consciousness is really simple and it could
abound in the universe.
But it also allows us to say, therefore be extremely careful in saying that, well, because
this computer has memory, maybe it's got cameras and it can see things, maybe it's got a sense
of humor.
I think the test to run is this: whatever quality you're looking at in an AI system to say it's conscious, if you took that quality away from a human and they would still be considered conscious, then it's not a good metric to use, you know?
It's a good argument.
I like that.
I think it's, I think you're right that if you can do that, you certainly can't treat the presence of that as a sufficient condition.
Yes, quite.
There might be interesting things about jointly sufficient conditions and so on, but I think it's certainly a good question to ask oneself.
Yeah.
And, you know, this sort of experiment is one I quite like, actually, unlike neural replacement thought experiments: this idea of stripping things away, because it does help us at least try to get out of our human-centered perspective. Yes, a little bit. And you know, there's an old version of this called Avicenna's man. Yeah, yeah, I know. The floating man. Yeah. So the floating man, again, you strip away. I think it's more about self than consciousness. But yeah, it's sort of a prefiguring of the Cartesian, you know, "I think therefore I am" type thing, of like taking
away all sense data essentially. And the reason he's floating or flying is because he's not
supposed to feel anything. There's no touch. There's no sight. There's no nothing. And yet you would still
just have this thing called like awareness. And yeah, it's trying to isolate. And I think in a way,
Descartes was kind of trying to do the same thing, but in a slightly less sophisticated or
helpful way for this particular application, which is like, what is the thing?
Descartes's like, well, maybe my sight is deceiving me, maybe this chair isn't really here,
but what's left over as like the nub of experience itself?
And yeah, Avicenna does the same thing.
And I think whatever we're left with, it's not going to be like recognizably human.
It's not going to be a necessarily human or...
I think almost, yeah.
Certainly not. That would be, I think, very strange. That would basically be to deny that consciousness could be expressed outside the human case, which I think would be a very wrong claim. I was thinking of Avicenna's man yesterday because I was in a flotation tank. Oh, cool. Yeah.
And I do this now and again, and sometimes it's relaxing. Sometimes it doesn't go so well. And yesterday, for various reasons. I don't know why. It didn't go so well. And of course, your eyes are closed.
It's quiet in there.
Not perfectly quiet.
It's pretty quiet.
You're floating.
So you do kind of lose the sense of your, at least your body configuration,
kind of not so sure where my hands are.
But other senses come to the fore.
And yesterday in this flotation tank, you know, the sound of my heart beating just became very, very dominant.
So we have all these senses of the body.
Yep.
And I think this is relevant firstly because it puts pressure on the idea that we can just strip away these senses,
because other things will bubble up.
Yeah.
But what I think it points to,
so where I land,
when I try to imagine what would be left,
where I land is something like the feeling of being alive.
Right, I see.
What does that mean?
It's a bit unclear, but it's what I imagine might be there when you strip away indeed all exteroceptive perception, all visual and auditory content, and cognitive content too, so not the Cartesian rational thinking; your mind quietens. So this has some overlap with states of pure awareness in meditation and some psychedelics. So you strip that away, you strip away the experience of emotion, of specific emotions. You strip away the
experience of the body as an object in the world. You strip away the first person perspective.
Memory is gone. Intention is gone. Is there anything left, and could you strip that away and still be conscious? My intuition is that a good way to think about it is as this basic feeling of being a living organism, without shape, boundary, or specific content, but kind of persisting in time in some way.
I mean, Descartes famously, to establish his mind-body dualism, keeping them separate: the way he gets there is just by saying, like, well, I can conceive of myself existing without
a body. And I can certainly conceive of my body existing without my mind, just like a dead
corpse. And although they might be tied up in various ways, the fact that they are conceptually
different means that they are different things. They might coincide, but they're literally
conceptually, you know, separable. And that's basically his sort of principal grounds for saying
that they are different types of things. I want to ask you if you can imagine yourself without a
body. And most people I think sort of go like, yeah, sure, I can sort of imagine floating around in
the void. But I mean, given everything that you know about consciousness and your understanding
of how it works, can you actually make sense of some kind of conscious being that has
no embodiment whatsoever? Or is that actually something that you, now, given your views, sort of can't imagine? Embodiment in the sense of having a physical substrate? Let's say that. Or having a body interacting with an environment? No, just having a physical substrate.
Okay, because it's a different question. I mean, yeah, I just mention that because one can think of, let's say, an organoid.
Yeah.
It has a physical substrate, but it doesn't have senses or motor effectors or legs; it doesn't have limbs.
So it's kind of disembodied in the sense that we might use embodiment.
It has no sort of contact with the outside world or anything.
But no, I mean not even having that.
Okay.
Can you imagine, I mean, even just conceptually, can you imagine just sort of existing in the void with no physical...
I think conceptual imagination of this sort is too low a bar. I don't set much store by it. Yes, you know, I mean, they're concepts. I can create a concept in which there is a disembodied consciousness. Yeah. I can write stories about it.
But I think this is, you can't really use that to carry much weight.
Yeah, sure.
It's a problem with these kinds of conceivability arguments in general, I think.
So it's a bit like, can I imagine a Boeing 747 (well, they're out of service now), can I imagine a Boeing 787 flying backwards through the air?
Well, yes, of course I can imagine it.
I can picture it.
Yeah, right.
But the more I know about aerodynamics and engineering, you know, nomologically, so given the laws of physics as they are, you can't have planes that fly backwards.
Yeah.
So that's where I land.
Yes, conceptually, I think it's possible to imagine these things, but if I want to understand how things are in the world as it is, with the laws of nature
as they are, then the more I know about the brain, the body, how intimate these connections are
and how explanatory they are as well, then it becomes less, it doesn't really mean anything
to say that I can conceive it.
Yeah.
And I think, also, the reason I asked you that was because there's a level at which you can conceive something, and then upon learning new information, you suddenly can't.
Like a child can imagine a plane flying backwards,
because you just picture it in your head.
But once you know how planes work,
once you know how lift is generated,
it's like you actually kind of now can't imagine the plane flying backwards.
I mean, you can literally imagine a physical object moving across the sky,
but you can't imagine it flying backwards,
because flight is a physical concept that is answerable to the laws of physics as we know them. And once you understand it on a deeper level, it's just not even something you can properly conceive of unless you're just sort of intentionally fooling yourself.
And if you know or are convinced that consciousness is a product of living systems, then maybe sort of Descartes was just wrong.
You shouldn't be able to conceive of such a thing, at least in the real world, you know.
I think that's right. Yeah, the more you know about it, the less likely it becomes that these things are separable. This goes to the neural replacement thought experiment as well.
If you don't know anything about neurons,
it's very easy to imagine that you could replace one
with something made of silicon.
Or if you've kind of reified the metaphor
of the brain being a computer
to think that it really is a computer,
then you might also think that you can replace a neuron
with a silicon thing
because computationally you might be able to say
they could be equivalent.
But the more you know about how real neurons work, what they do, how their collective behavior self-organizes and assembles and changes over time, then, just like a plane flying backwards, it becomes less and
less conceivable for the world in which we live, for the kinds of things in the physical
world which we have. Yeah, we were just talking about Matthew Cobb before we started recording. I just had him on my show, and he's got this book, The Idea of the Brain, which is a history of the way people have
until he realized he could do it by what was the leading analogy. Like what did, how did people
think about what the brain was? And like, it's astonishing that basically humans have always
just picked the most complicated thing that exists and just been like, brains are like that.
You know, for some ancient writers, a brain is like a complex sort of musical instrument.
Or for some people, when hydraulics (what's the word for pumping air through tubes and stuff?) became a thing, the brain began to be seen as this hydraulic system, with air being pumped through the veins and stuff.
And then we discover electricity and we're like, oh, the brain is electrical.
and we invent computers and we're like, oh, no, no, it's not like air going through the veins.
It's kind of like zeros and ones, it's bits, and we start confusing the map for the place, because we start forgetting that this is just an analogy to help us understand.
And we'll never know what the next big technology is going to be.
But, you know, I like to reflect on whoever it was in like the 1800s or 1900s who said,
oh, there's nothing more to invent anymore.
Like, we're done.
I can't possibly conceive anything.
I think that whatever comes next, we're probably just going to say,
No, no, no, a brain is a bit like that, but it's...
I mean, it's already happening a little bit, isn't it?
I mean, even though computers are still involved with AI.
We have AI, but we also have things like the internet,
these very distributed self-organizing systems.
So the metaphor is beginning to alter,
even if it hasn't completely given way to a new technology altogether.
But yeah, Matthew's book is great.
I think it really makes the point that we always have to be careful
not to confuse the metaphor with the thing itself.
This is like Whitehead's fallacy of misplaced concreteness as well.
We just will get ourselves into trouble.
It doesn't mean that the metaphors aren't valuable.
And I think the metaphor of the brain as a computer is a particularly tricky one
because it's not just that we're taking a complicated technological system
and trying to find the ways in which it might be analogous.
The idea of the brain as a computer also draws in all these theorems from computer science and physics
about the generality of computation as well.
And there's an ongoing debate about what physical processes count as computation.
Is everything computational?
We have people like John Wheeler suggesting that everything is computation, that it's fundamental.
So it's a bit of a trickier one, but I think it's based on two specific events that, when put together, give rise to a strong temptation to think that computation is all there is. And the first of these was from Turing, who basically came up with the idea of an algorithm, of computation according to Turing: this idea of mapping sequences of symbols to other sequences of symbols. And he showed this was very, very powerful with the idea of the universal Turing machine, an abstract machine that could implement any algorithm.
But what does that mean?
When you say algorithm, what are you talking about?
Exactly this.
An algorithm is something that maps one sequence of symbols
through a series of steps to another sequence of symbols,
like a mathematical recipe.
That's an algorithm.
People might use it more loosely,
but that's technically, in my understanding, what it is,
mappings from symbols to other symbols.
You can do a lot with that,
but it's not clear that everything is a matter of mapping symbols to other symbols.
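To pin down the narrow sense of "algorithm" being used here, a finite rule mapping one sequence of symbols to another, with the physical substrate appearing nowhere in the rule, here is a deliberately trivial editorial sketch; the particular rule (binary increment) is just an arbitrary example, not anything from the conversation.

```python
# An algorithm in the Turing sense: a finite, step-by-step rule mapping one
# sequence of symbols to another. Note that nothing about hardware appears
# anywhere in the rule, which is exactly the substrate independence at issue.

def increment_binary(symbols: str) -> str:
    """Map a string of '0'/'1' symbols to its binary successor."""
    digits = list(symbols)
    i = len(digits) - 1
    while i >= 0 and digits[i] == "1":  # propagate the carry leftwards
        digits[i] = "0"
        i -= 1
    if i >= 0:
        digits[i] = "1"
    else:
        digits.insert(0, "1")  # all ones: the sequence grows by one symbol
    return "".join(digits)

print(increment_binary("1011"))  # symbols in -> "1100" out
```

Whether the symbols are voltages, beads, or ink makes no difference to the mapping; the computationalist bet is that consciousness lives entirely at this level.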
And indeed, Turing's classic paper on this, which is now 90 years old, was motivated by the halting problem, by showing that there are some functions, even within computer science, which cannot be solved algorithmically.
So there are limits even there.
But then many things in the real world are not algorithmic,
things that are continuous, things that are inherently random being some examples.
So there's more to the universe than Turing computation.
And then the second event was Walter Pitts and Warren McCulloch, right there at the beginning of AI and of cybernetics, who showed that simple neural networks, if you just had these computational abstractions, if you like, of artificial neurons that summed up their inputs and generated an output, could implement logical operations, and, given other things like storage, they could be Turing complete. They could implement any algorithm.
And you put these two things together, and they're both really important observations, and you understand that if you strip away everything about the brain, apart from idealizing it as this network of connected neurons that just sum up their inputs and pass on outputs, you can get everything that Turing computation can give you. And that's very, very powerful. And I think that's why, for decades now, people have sort of thought that we don't have to worry about anything else in the brain. We can throw it all away, because at that level of abstraction, we get everything that Turing computation can offer.
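For reference, the McCulloch-Pitts abstraction really is that sparse: a unit sums weighted binary inputs and fires if the sum reaches a threshold, and with the right settings such units implement logic gates, the building blocks that, with storage added, give Turing completeness. A minimal sketch (the weights and thresholds are chosen for illustration):

```python
# A McCulloch-Pitts unit: sum weighted binary inputs, fire if the sum
# reaches a threshold. Everything else about real neurons (spikes,
# chemistry, metabolism) has been thrown away; only the mapping remains.

def mp_unit(inputs, weights, threshold):
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

def AND(a, b): return mp_unit([a, b], [1, 1], threshold=2)
def OR(a, b):  return mp_unit([a, b], [1, 1], threshold=1)
def NOT(a):    return mp_unit([a], [-1], threshold=0)

# NAND built from the above, and NAND alone suffices for all Boolean logic.
def NAND(a, b): return NOT(AND(a, b))

for a in (0, 1):
    for b in (0, 1):
        print(f"{a} {b} -> AND:{AND(a, b)} OR:{OR(a, b)} NAND:{NAND(a, b)}")
```

That everything biological disappears at this level of abstraction is precisely the idealization Seth goes on to question.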
That's useful to know, but Turing computation can't do everything, and it's very unclear that, you know, it can realize all the properties of the brain. And when you look at a brain, you realize that, in fact, you can't separate what it is from what it does. That really puts pressure on the idea that Turing computation is all that matters. We talked about this earlier. I mean, you were sort
of talking about separating what the thing is from what it does, and we talked about bridges and stuff,
But you say in the piece that with a computer, you can separate what it is from what it does,
but with the brain you can't.
And I'm still a little bit unclear on like what the difference is and what you mean specifically.
Like, maybe you can explain why you can with a computer but can't with a brain.
It's partly a kind of theoretical thing and it's partly a practical engineering thing.
So the practical engineering thing is computers are useful because they're built this way.
You know, we can run the same program on different computers.
They do the same thing.
We can run different programs on the same computer,
and they'll do different things.
They have this kind of perfect interoperability,
which makes them useful.
And that emerges from the fundamental principle of Turing computation,
which is, you know, people argue about it,
but there's a general consensus that computation, in this sense,
is supposed to be, at any rate, substrate independent.
Yeah.
That once you define the algorithm, the substrate doesn't matter. That's all you need to know what the system is going to do. So long as the stuff that's implementing it is capable of implementing it, that's enough.
And brains: firstly, evolution didn't design brains for that kind of interoperability. If I took the functional dynamics of my brain, they don't have to work in your brain. They won't work in your brain.
But there was never any evolutionary pressure for that in the first place.
In fact, there's evolutionary pressure in the opposite direction,
because maintaining this sharp separation of scales,
insulating one level of dynamics from another,
is energetically very expensive.
If you just think about a digital computer,
it takes energy to make sure that ones remain ones
and zeros remain zeros.
That's one reason that AI systems are so much less energy-efficient than real brains.
If you want energy efficiency, which evolution cares about, and you don't need this sharp
separation of scales, you're not going to get it.
You know, you're going to get systems where what it does is much more entangled with what
it is because evolution doesn't require there to be a separation.
Even, I think even more interestingly, and this is something which I think is a very exciting
line of work that's opening up at the moment,
there may be functional benefits for systems
that are entangled in this way,
that is scale integrated in this way.
They may be able to do things
that are much harder to do
in systems where you try and enforce
a sharp separation between different levels
of dynamics.
Yeah.
If you look at brains,
they seem to be like this.
In their normal conscious states,
they sort of sit in what we call this subcritical regime
where small differences can make differences
at larger scales and vice versa.
So there's a lot of kind of dependencies across levels of description,
which I think is going to turn out to be pretty important
in understanding how brains do what they do,
whether it's consciousness or cognition.
Do you think, I mean, you were talking about
like evolutionary pressures to make the brain do particular things, just more broadly,
it might bring to mind the question of evolutionary explanations for consciousness.
Like, when I've spoken about consciousness, and people sort of just say, well, you know,
there's got to be some evolutionary benefit.
Like, that's why consciousness exists, because by being conscious, you know, you avoid pain
and you're sort of doing things that help you to survive.
and two thoughts come to my mind when I hear this. Firstly, if something is not possible, it doesn't matter whether it's evolutionarily beneficial. You know, if matter cannot produce consciousness, well, it would also be very evolutionarily beneficial if I could make two and two equal six when I wanted to. But that's not going to happen, even if it's beneficial, right? But more to the point, it seems to me that any kind of evolutionary service that consciousness can provide
could also be provided by unconscious reactivity, by the sort of philosophical zombie type figure
or something that doesn't have a complex brain such that it produces experiences,
but just that naturally moves away from heat or follows the sun like a flower does in the garden
without requiring this thing called consciousness.
Do you think that there is a good evolutionary reason for consciousness?
I ask that in the vein of saying that I think it's kind of evolutionarily redundant.
You think it is evolutionarily redundant? Or you think it...
I think it is redundant.
I can't think of a service that consciousness would perform evolutionarily
that couldn't be performed without inner experience being present.
You know what I mean?
Yeah, I get that.
I'm sympathetic to that kind of view, though I don't know it to be the case.
So I still want to reserve in part of my mind room for the possibility that there may be certain functions,
certain things that could be evolutionarily beneficial,
which do indeed require conscious experience
that cannot be done via unconscious mechanisms.
I'm not sure if that's true,
but I don't think it's excluded.
That doesn't mean it's redundant in those cases where it exists. It may be the case, and I think this is the more likely perspective from my point of view, that conscious experiences have functions for biological organisms.
I'm not really on board with epiphenomenalism.
It just seems like a very strange perspective.
If we think about what conscious experiences are like in general,
they're easy to interpret in terms of having useful functions for us.
They bring together, in the most general sense, a large amount of organism-relevant information across many different modalities in a sort of format (you can call conscious experience a format here) that is centered on the body, that involves aspects of the body over time, and that sort of immediately suggests opportunities for action and response in terms of intentions and urges and so on. It's also very counterfactual.
And empirically, a lot of things that we do do seem to require consciousness.
I mean, this is the active empirical study of the difference between conscious and unconscious perception.
Yes.
So even if you can come up with a system which could implement these functions without being conscious,
then I think for the kinds of things that we are,
for the living creatures that we are,
evolution has sort of used the available resource
of the potential for experience that living systems have
and drawn on it to create expressions of consciousness
which are functional for us.
You just said unconscious perceptions.
I'm just intrigued as to what those are and how we study them if they're unconscious.
So this has really been one of the workhorses of empirical consciousness research in neuroscience and psychology.
Now, the idea is that you try to match, as closely as possible, the signals coming into the brain, but you try and create conditions where in one condition there's no corresponding conscious experience, and in the other condition there is.
A classic example is something like masking in perception.
So, you know, you can show an image, but if you surround it with other images in the right way, then people don't consciously perceive the image in the middle.
But it's still present, it's still presented to their visual systems.
So in that case, you might be tempted to say the perception was unconscious.
If you show an image really, really, really briefly, that might be the case as well.
A colleague of mine, Axel Cleeremans, in Belgium has developed a tachistoscope.
So a device very simple.
It's just able to show images for, I think, certainly millisecond, possibly even microsecond resolution.
Right.
An example we use in our lab is something called continuous flash suppression.
So we'd show an image to one eye.
And then in the other eye, we'd show this kind of continually changing Mondrian of colored squares.
And the person in that situation will only consciously see the changing colored squares.
Does it make a difference which eye you put into?
Sometimes, yes. If people have a dominant eye, it can do.
Yeah, okay.
But the point is that you can create a situation where some sensory information is present
to that person's brain and you can show that it, you know, makes some kind of difference
yet they don't report consciously perceiving it.
Yeah, I see.
That's what I'm getting at.
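The logic of these paradigms can be summarized schematically: keep the stimulus matched, vary only whether it reaches awareness, then compare an objective measure (a forced-choice response) against a subjective one (a visibility report). Here is a hypothetical sketch; the trial structure, field names, and thresholds are all invented for illustration, not an actual lab protocol:

```python
# Schematic logic of an unconscious-perception experiment (e.g. masking or
# continuous flash suppression). Hypothetical data and criteria only.

def analyze(trials):
    """Each trial records an objective forced-choice response ('correct')
    and a subjective visibility report ('reported_seen')."""
    accuracy = sum(t["correct"] for t in trials) / len(trials)
    visibility = sum(t["reported_seen"] for t in trials) / len(trials)
    # The classic (and contested) criterion for unconscious perception:
    # objective performance above chance while subjective report is at floor.
    if accuracy > 0.5 and visibility < 0.05:
        return f"above-chance accuracy ({accuracy:.2f}) with no reported awareness"
    return f"accuracy {accuracy:.2f}, visibility {visibility:.2f}: no dissociation"

# Toy data: the participant identifies the masked stimulus better than
# chance while reporting that they saw nothing.
trials = ([{"correct": True, "reported_seen": False}] * 7
          + [{"correct": False, "reported_seen": False}] * 3)
print(analyze(trials))
```

As the conversation immediately goes on to note, the weak link is the subjective measure: "didn't report seeing it" is not the same as "didn't see it".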
Yeah, I just, I don't know: to find out if someone has consciously perceived something, you kind of have to ask them, right? And I suppose I'm just still a little bit... I've got a bit of whiplash from split-brain patients and their ability to draw an image that they've seen while telling you, no, I didn't see anything. But they obviously did see it, because they'll draw it with their hand. And it almost feels to me like when you ask the language part of their brain, their left hemisphere, did you see it, they'll say no. And if you ask the right hemisphere, it says yes, by drawing the picture. And so I'm just, I don't know. It makes me suspicious about the extent to which, as long as someone says, no, I didn't see it, we can count that as, so to speak, unconscious.
Yeah, no, I mean, you diagnose the problem beautifully.
It's a really tricky area.
Yeah.
And there's still very lively debate about whether anything counts as unconscious perception.
Yeah, right.
But one of the issues, the thing you mentioned about verbal report, is absolutely key. And, you know, it could be that, as in a split-brain patient, if you verbally ask someone whether they consciously saw something, they might say no, but if you ask them to draw it or to push a button, something like that, you'll get a different response.
So there's a complication there.
Typically these experiments rely on pushing buttons or something like that, rather than people verbally saying what they saw, in ways you can counterbalance across hemispheres and so on.
But yeah,
it gets really tricky.
There's a conceptual issue here too,
which is that it could be that a conscious experience is still happening, let's say in your visual cortex, or supported by your visual cortex, but that you don't have cognitive access to it at all. The philosopher Ned Block argues for this possibility with his distinction between phenomenal consciousness and access consciousness: the idea that cognitive access, whether it's through verbal report or pushing a button or whatever it might be, is optional, and you could still have conscious perception even without access to it.
That's really, really tricky to get an experimental handle on because if you don't have cognitive access to it, then how would you ever know that it was going on?
You've got to have some other reason, other justification.
Yeah, I'm increasingly suspicious of the concept of the subconscious.
I'm glad to hear you say that that's, like, a normal thing that people are discussing, because, I don't know, I think typically we think that if there's something which what I call my self, my sort of reflective, linguistic, communicative self, can't access, we call it sort of subconscious.
But then I don't know.
I think it depends on the degree to which you believe that consciousness must be centralized and smooth. You know, some people interpret this split-brain-related stuff, alien hand syndrome,
all of that good stuff, as essentially showing that there are kind of two centers of consciousness
that are in communication with each other in such a way that I guess you kind of just experience both
at one time. And similarly to how like my eyesight really sort of appears to me like it's one
thing. Like I'm not sort of aware that I've got two eyes right now. It's only by sort of taking one
away that I can kind of get a grip on it. It just sort of mixes together, but it's uncontroversial
that I've got two eyes. There are two things seeing right now, but it's just sort of mixed
together. Some people think that the brains are doing the same thing. What is your view on the locus
of consciousness? Is there one? Is there two? Is there many more? Is it sort of a silly question
to ask? I don't think it's a silly question to ask, but it is a very, very hard question to answer.
And there's a lot of debate about the unity of consciousness, which I think gets at the heart
of this and the split brain case is one of the key pressure points.
So some people tend to think of the unity of consciousness as being something axiomatic, that there can be only one, like Highlander, because it's very hard to imagine consciousness being disunified. What might that even mean? It's not so much to say there are two conscious agents in one body.
I mean,
that could happen in a split brain.
That seems possible.
I mean,
the evidence is really mixed,
by the way,
with split brain patients.
It gets very interesting
when you look at it in detail.
It's like, actually, there seem to be problems with integrating things across the midline rather than with detecting things at all. And there's all these subtleties about cross-cueing.
Like one half of the brain
can see what the other half of the body does
and might make inferences based on that.
I would love to talk about that.
I want to allow you to finish what you were saying, but I would love to talk about that.
But yeah, to the unity point: I always tend to try not to be, I mean,
I'm frustratingly non-committal about these things.
Like consciousness seems unified to most of us most of the time.
I would not want to build a theory that required that to always be so because I think
it might again be right back to our discussion of human exceptionalism.
Now, there might be cases where conscious experience becomes interestingly distributed, non-unified in certain ways.
Octopuses give me a good provocation here where maybe their consciousness is decentralized in a way that just isn't the case for animals like us who are highly cephalized.
Cephalized.
Well, all our neurons are in the head.
Our neural architecture sort of, if consciousness evolved, if we think that's a useful perspective,
for us. It's evolved in such a way that we experience consciousness from a particular perspective, which is somewhere in our heads.
So what's that word? Cephalized, I think it's a word. I think it means that most of our nervous system is inside, is in one place in our head.
So I wonder where 'cephal' comes from. Is that with, like, a C?
Yeah, C-E-P-H. I mean, some people say 'kephal' rather than 'cephal'. So.
Yeah, no, I'd just not heard that word. Cephalopod. It's like cephalopod, which literally means brain-foot.
Oh.
And they have more neurons in their limbs than in their central brain.
Oh, so it's like brain foot.
Brain.
Right.
Yeah.
I mean, it's basically, it's a good description.
I was like, we're, whatever the word is, cephala, cephalo, or something.
Well, we have a lot of cephalization.
So our nervous systems are concentrated in the headspace.
That's why I was confused, because I thought of cephalopods, but cephalopods don't have it concentrated.
No, there's also the pod.
Yeah, etymology.
I'm having Adam Aleksic, the Etymology Nerd, on soon.
I'll ask him about it, but it's hardly, you know, either of our fields.
But, yeah, I mean, the first time we spoke on this show years and years ago,
we talked about the extent to which we experience ourselves as up in the brain as well.
And I'm always sort of, I sort of wonder, is that because that's where the brain is?
Is it because it's where the senses are?
You know, we talked about strapping a camera onto, like, your belt,
and then, like, putting on a VR headset and living, experiencing the world from your belt buckle, and whether the sense of where you are would move down to your stomach.
And I kind of suspect that maybe it would, I'm not sure,
but it's weird to think about,
like where I am,
I feel unified,
but at the same time,
I'm,
as long as I pay attention,
I can separate out the feeling of the chair
from the visual experience that I'm having.
I can sort of,
I don't know,
I can,
I can pay attention
to the fact that maybe, you know, maybe I feel a bit hungry and maybe I feel a little bit like,
you know, I'm a bit uncomfortable, I'm going to shift position. And I can kind of separate
out those experiences. So I've never quite known exactly what it means to say that consciousness
is unified. Because like true unity of conscious experience would seem to imply like no delineation
whatsoever. There's just experience. But I'm, I'm already being able to separate out various
aspects of my consciousness. Yeah, I don't think it, I don't think unity implies
lack of differentiation
of what's happening within consciousness.
I think most uses of the term unity
in this context
suggest that all of these things are happening
within a unified conscious field.
You can pay attention to different parts of it
and they can have different characteristics.
But there's no way in which the consciousness of redness in the environment is happening completely separately from consciousness of other parts of your visual field.
It's all bound into one.
But yes, it has structure too.
I think that's fine.
So talk to me about this mixed evidence about split brain patients and consciousness
because like the pop science and pop philosophy version of this is like,
whoa, how cool it is that if you take a split brain patient who's had the connection severed
between their two hemispheres, their brains seem to act independently.
You can show them a word to only one side, and they'll tell you they didn't see it,
but they'll be able to draw it.
And some other really weird stuff. Like, I'm trying to think of an example here, and it's not a great one, but if you show somebody the words market and super, one to each side, they'll sort of separate out the concepts. So they'll draw a market and then they'll draw some representation of super, whereas healthy brains would put them together into supermarket. Stuff like that's going on.
Stuff like that.
I mean, a classic example would be, you know, you show the, it's not each eye.
Yeah.
You have two visual hemifields rather than eyes, but anyway.
Yes, the left visual field goes to the right.
Exactly, exactly.
But you can still talk.
If you set it up right, you can still talk in terms of left brain, right brain.
And so one thing you might do is recognizing that in most cases, language is lateralized to the left brain, the left hemisphere.
You can show, let's say, an image to the left hemisphere and then something else to the right hemisphere and then see what happens.
And I think a classic example, right back to Mike Gazzaniga and so on, is that you show something relatively prosaic to the left, verbal hemisphere, and then you show something kind of funny to the right hemisphere, and the person starts laughing. And then you ask them why they're laughing, and what they'll do is confabulate a reason why what was shown to the left hemisphere might be funny. But it's got nothing to do with that; it's because there was something funny shown to the other part of the brain.
So they have these amazing examples of dissociations. Of course, the most amazing thing about split-brain patients is that under almost any circumstances, it's impossible to tell that anything's happened at all.
But it does get a bit complex. And firstly, a lot of these things tend to resolve over time.
Very few split brain operations completely segregate the hemispheres. These days, they're not done
that much because medication, thankfully, for epilepsy, has improved.
A former postdoc of mine, now a professor in Amsterdam, did a series of experiments on vision showing that, in a split-brain patient, each hemisphere was actually able to detect visual stimuli anywhere in the visual field.
But what they couldn't do was integrate across it.
So this is a bit like your supermarket argument.
It was the integration across the midline that was not possible.
Yeah.
So I still find it fascinating.
One possibility, which Tim Bayne, one of my colleagues, talks about sometimes, is that it may not be a stable thing.
Maybe under some circumstances there's one agent and under other circumstances there are two.
Again, putting pressure on an assumption we might make, which is that the number of conscious agents hosted by a skull has to stay the same.
Well, the octopus is great for that too, because, like, their limbs will kind of do their own thing.
It almost seems like sometimes an octopus's arm will go over here and then it will be like, oh, you know, what's that?
As if the arm is informing the brain.
But if there's a predator that comes by, suddenly the legs all just, like, shoot into place, and they sort of swim away real fast, as if, when needed, they can act as one mind.
But when not, they can sort of separate out a little bit.
Maybe something similar goes on in the brain.
You said that some of these cases resolve over time.
Well, now I'm reaching very, very deep into the recesses of my memory at the moment.
But I think you see the strongest examples like the one of the person laughing when you show something funny.
I think these are most prominent quite soon after surgery.
A lot of the things that are most fascinating about cases of brain surgery or brain injury tend to sort of normalize a little bit.
So another famous one is neglect where people seem to be unaware of what's happening
on one half of their visual field after brain damage.
That too tends to kind of ameliorate a little bit over time.
I see.
It doesn't mean it's not real.
It just means that it's one of the reasons these things are quite hard to study.
Yeah, yeah.
The brain is quite incredible.
Do you know about the craniopagus twins?
I don't think so.
This is kind of the opposite of split brain, which I think is almost more fascinating.
Sure.
So this is a case, you know, you can get conjoined twins, twins born such that they cannot be separated, usually joined somewhere at the abdomen or the hips, so they may share parts of a digestive system or something like that, or even a heart. There's at least one case of conjoined twins who share a brain,
and they cannot be surgically separated
and they've been around for,
I don't know how old they are,
but I think they're in the teens now.
Wow.
And what is going on there?
Well, what is it like?
Are they individuals?
Do they?
They seem to be.
Yeah.
Yeah.
So I've never met them.
I've only read about them secondhand.
Yeah.
But they seem to have somewhat different personalities.
Uh-huh.
But they also seem to share things. So one will be able to taste what the other one eats, for instance. Yeah, that's
weird. Are they like aware of each other's thoughts? Is there even a meaningful distinction
between their thoughts? They seem to be, as you might imagine, extraordinarily synchronized
in their behaviors. So I don't know what their thoughts are, but, and I've just seen these videos, and I feel bad because this is just me watching videos of these things, but they seem to complete each other's sentences. So that seems to indicate that maybe they do sort of share at least the, you know, the contours of a single thought. But then again, basically they're getting pretty much the same sensory input all the time, and they've had the same sensory experiences going back throughout their whole lives. That's incredible, isn't it? I mean, I never know how to make heads or tails of these cases.
And in fact, there's an interesting analogous question to be asked about AI, which I'll
ask in a minute, but first, just because it came to mind when you were talking about
showing one thing to one eye and not the other, I really like Ian McGilchrist.
And one of Ian McGilchrist's most important concepts is the concept of focus.
And he says that focus isn't really, it doesn't seem to be like a brain activity as such.
I don't quite know how he characterizes it.
But he talks about how, like, if I played two sequences of, like, random numbers or words into each of your ears, so different sequence in each ear, and I just told you, just pay attention to the right ear, then afterwards you could recite, you know, the series of numbers, but you wouldn't be able to tell me what the left one was.
Whereas if I asked you to focus on the left hand side, you could do it vice versa.
And that just, it's extremely simple.
Like, yeah, obviously, yeah, I'm focusing on one and not the other.
But that just baffles me because I'm like, well, what's the difference?
What is the thing that's changing when I just decide to do that? I'm not moving.
There's no new information, really.
I'm just making this decision to sort of, I'm just pushing my attention over here and pushing my attention over there.
Well, that's it.
That's the difference.
But like, attention.
That just seems such a strange.
Like, what do you think attention is?
Like, do you think attention is just another kind of brain activity?
Do you think it's a sort of pre-brain activity, like a qualifier as to how the brain activity occurs?
You know what I mean?
Like, it seems weird for me to say that the attention or focus is just a brain activity
because it seems to be the means through which I determine what brain activity happens.
You know what I mean?
Yeah.
I think both. I think it would be strange if it weren't brain activity; I don't think it would have the functional role that it has otherwise.
I think attention is one of those words that's only slightly less mysterious than consciousness.
But I think it is less mysterious.
I think it's less sort of metaphysically loaded.
But it's one of those things that seems fairly natural.
Like I pay attention to something.
But when you ask what that really means,
it gets very difficult, very, very quickly.
But there are lots of ways to study it.
I mean, it's one of the classic paradigms.
Indeed, as you said,
you can pay attention to different streams of sensory information.
And that will make a difference to what you can remember,
to what you do and so on, many things.
It is, I think, something that affects conscious experience.
We're more likely to be conscious of things we pay attention to.
I don't think it's a complete gate.
You know, some people might take an extreme view and say,
we're only conscious of that which we're paying attention to.
I think this is probably overstating it.
But I think conscious contents do tend to become dominated by attention.
And in the brain, you can think of many ways that might play out.
You know, the more extreme views might take it as a sort of really strict gating mechanism,
that the only things you're paying attention to get through the gates
and can influence other parts of the brain.
I prefer to think of it using the overall framing of this brain as a prediction machine idea.
That attention is, firstly, it's not just one thing, it's many things.
It's kind of many ways of adjusting the sort of the gain, the signal to noise ratio.
How much does the brain update its inferences about what's going on based on some sensory data?
Paying attention to something is equivalent to saying this sensory prediction error, this piece of sensory information, is worth, well, I was going to say worth paying attention to, but that's a bit circular, is going to have more of an impact on the perceptual inference,
on this sort of continuing unfolding process of prediction error and prediction error minimization.
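As a toy illustration of that gain idea, a sketch of my own under this framing rather than anything from Seth's lab: treat perception as a running estimate nudged by prediction errors, with attention setting how much weight, or precision, a given sensory stream gets.

```python
# Toy sketch: attention as precision-weighting of prediction errors.
# Each step, the estimate moves toward the sensory input by a fraction of
# the prediction error; "attending" means that fraction (the gain) is high.

def update(estimate, sensory_input, gain):
    prediction_error = sensory_input - estimate
    return estimate + gain * prediction_error

signal = 1.0                      # the true state of the world
attended, ignored = 0.0, 0.0      # two streams, same input, different gain

for step in range(6):
    attended = update(attended, signal, gain=0.8)   # high precision
    ignored = update(ignored, signal, gain=0.1)     # low precision
    print(step, round(attended, 3), round(ignored, 3))

# The attended stream converges on the signal within a few steps; the
# ignored one barely moves -- same input, different impact on inference.
```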
Yeah.
So on this point of the unity of experience, I mean, let's go back to the topic at hand, which you have written about, and which has obviously caught a lot of attention and impressed a lot of people and won a prize for its eloquence. Well, I say for its eloquence, but also for its scientific and philosophical rigor.
And that's AI consciousness.
And we've just been talking about how sort of weird it is to think about where the locus of focus is, where unified conscious experiences might exist and how they interact.
And we were on a panel hosted by Brian Cox, A Question of Science, which was good fun. I remember sort of walking in and Brian being like, oh, you know, let's not talk about this panpsychism stuff. And I was like, okay, because I'd never met him before, so I was just sort of like, hey, um, so, gonna maybe do that anyway, if that's all right with you. It was good fun. And one of the things that I said, and it was kind of tongue in cheek, I didn't mean for people to laugh, but they did, but it's got a serious point in it, is: when we talk about AI being conscious, what do we really mean? Like, I know you kind of think it's probably not going to happen, but the thing you're imagining is not happening. I'm like,
what are we really talking about? Okay, so ChatGPT becomes conscious, cool. But as I said on this panel, you've got ChatGPT on your phone, maybe, I've got it on my phone, we've all got our own sort of accounts, I'm having a conversation here, and even within my account there are multiple streams of conversation. And yet at the same time, you know, OpenAI's software and computing power is not stored in my phone. It's stored somewhere else, and my phone is connecting to it. And I'm like, what would it even mean? Would people imagine each individual conversation as, like, a new conscious being waking up, unaware of the other ones? Or is it like conjoined twins, where the conversations are their own things, but they're kind of connected and can interact with each other? Or is there, like, one ChatGPT consciousness that exists wherever OpenAI, San Francisco, whatever, is based? And we're kind of
interacting with avatars or like as if the conscious agent is like sending messages back and forth,
you know, like, how can we even think about what it would look like?
I think this is a great observation.
David Chalmers has written a nice piece about this.
I think it's called 'What We Talk to When We Talk to a Large Language Model.'
Nice.
And on the one hand, I think all the things you say just highlight the trouble we get into if we get captured by the superficial fact that these things are linguistically competent.
Under the hood, they're massively different.
And it's not only now a question of silicon and algorithms, but also, yeah, is there an identity?
What is the identity over time?
What's actually talking to you?
Is it the foundation model, because that's what's stable?
Is it an instance?
Is it a server farm?
The server farms?
Yeah, right.
You can put a query in, and it could be processed half in Arizona and half in New York, and maybe the other way around the next time.
So it's what's stable.
Also, and I think this is key, you can leave a conversation for a day or, you know, a year.
Right.
And then just come back to it.
Maybe not a year, because they'll have changed it.
But you come back to it.
Yeah.
And from the seeming perspective of that conversation, that doesn't make a difference.
Yet time is fundamental both to how biological brains work and to our experience, to the nature of conscious experience for human beings. Seen that way, I think
noting these differences should highlight the danger of reaching assumptions about AI consciousness on the basis of their fluency with language, their ability to pass the Turing test. But on the other hand, I think it's really valuable to set aside, at least temporarily, not forget about, but temporarily set aside, the question of whether they are in fact conscious and ask exactly what you put: for the sake of argument, let's just ponder what it would be like if they were. And I think in doing this we can probe all these issues, like the unity of consciousness, that are so hard to get out from underneath in the human case.
Yeah.
And then maybe it sort of gives an insight into a wider space of possible minds.
These minds need not be conscious minds, but there is very likely a very large space of
possible kinds of minds.
And we are in one region of this space.
Maybe biological minds in general are in a larger space, but they are still somewhat
constrained by the fact that biological minds evolved within bodies, probably almost all of the
time. Maybe think about fungi or something like that, but most of the time. But there may be
other kinds of minds which don't have those constraints, which could be very different. And the
comparative approach is always useful. And I think we can do that without making any imputations
that these strange new minds need to experience anything.
Yeah.
I mean,
I think that the extent to which it's difficult to make sense of what we would even mean
by conscious AI could be an indication that the thing we're talking about is like a little bit senseless, maybe.
But it's also going to depend on what your theory of consciousness is
and what your sort of hunch about the nature of consciousness is.
And you said earlier that you believe in this idea of the controlled hallucination, which you've talked about a bunch, and which you talk about in Being You, your book,
and I suppose it might just be helpful for people who are hearing you for the first time on this show
to just briefly sort of overview where you're coming from
when it comes to consciousness before you start thinking about AI.
Yeah, so the overall perspective I have on consciousness
is to be very pragmatic really.
Like, when asked, I describe myself as a pragmatic materialist.
Right.
I don't know that materialism is true.
And I don't want to be a person who's sort of out there in the front lines
defending materialism against all comers.
But I think it's a useful,
I've always found it a useful perspective to adopt
because it gives us the resources to at least chip away
at the problem of consciousness.
And by chipping away, I take something like what I call the real problem approach. So, you know, consciousness exists; I'm a phenomenal realist. It has many different properties. Vision is different from emotion, which is different from audition, which is different from memory, and so on.
What I'm interested in is a kind of unified way of thinking about different kinds of conscious experiences
that explains their relation to each other,
that explains their character,
and that progressively might explain something like
their dependence on a particular kind of substrate,
whether it's biological or not.
The controlled hallucination view is an expression of this.
It's really a gloss on a pretty old idea,
which is perception as inference,
which goes back to,
well, in philosophy back to Kant at least,
and the idea that there's a noumenon and a phenomenon. And then in psychology to Hermann von Helmholtz, who was the first person to talk about perception as a process of unconscious inference.
So the brain is engaged in trying to infer the causes of sensory signals.
But all of that goes on under the hood.
It's not even unconscious perception.
Those are the unconscious processes that underpin perception.
And what we're aware of is the brain's best guess, what the brain comes to as the most likely interpretation of the causes of the sensory signals that it gets.
Yes, that's why I call it a controlled hallucination.
And it kind of flips what used to be the textbook view of perception on its head.
You know, it's classical to think about perception as a process of reading out the world.
Yeah.
In this outside in direction.
Information comes into the eyes and is read out by the brain, and you go deeper and deeper and it reads out more and more complex things.
This view goes the other way and it says the content of our experience,
even though it seems to be out there and we sort of passively register it,
no, we actively generate it and the sensory signals are used to calibrate these predictions
so that they're useful for the organism.
Yeah, and some of the classic examples of this are things like the checkerboard illusion,
you know, which Alex will put up on screen right now.
And it's, you know, the two squares which look like they're different colors.
But square A and square B on your screen are the same color.
And you can prove it in various ways.
And if you don't believe us, you know, take it into Photoshop and check it, whatever.
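For a Photoshop-free version of that check, a few lines of Python with the Pillow library will do it; the filename and pixel coordinates below are hypothetical stand-ins for wherever squares A and B sit in your copy of the image.

```python
# Sample one pixel from inside each labelled square and compare the raw RGB
# values; in Adelson's checker-shadow image they come out identical.
from PIL import Image

img = Image.open("checkerboard_illusion.png").convert("RGB")  # hypothetical file
square_a = img.getpixel((110, 150))  # hypothetical point inside square A
square_b = img.getpixel((180, 250))  # hypothetical point inside square B
print(square_a, square_b, "same!" if square_a == square_b else "different")
```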
And even when you know that, you see them as different colors because your brain is essentially
constructing a narrative.
Like, why would you want to perceive them as, you know, the actual shade of the pixel that's lighting up on your screen?
It makes sense for your brain to interpret the world in a way that makes sense narratively.
And so there's a sense in which, yeah, we are constantly generating the world.
The first time we spoke on the show, we talked about the audio illusions, the Yanny/Laurel thing, the brainstorm/green needle thing.
And some of these, like, whichever one you're like looking out for or listening for is the
one that you hear.
Like you're literally able to construct your perceptual reality.
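That prior-dependence can be caricatured with one application of Bayes' rule, the same inferential machinery the controlled-hallucination view leans on. The numbers here are made up: the clip is close to ambiguous, and whichever word you're primed for tips the brain's best guess.

```python
# Toy Bayes: same ambiguous evidence, different priors, different percepts.

def posterior_a(prior_a, likelihood_a, likelihood_b):
    """P(hypothesis A | evidence) for two competing hypotheses A and B."""
    prior_b = 1.0 - prior_a
    evidence = prior_a * likelihood_a + prior_b * likelihood_b
    return prior_a * likelihood_a / evidence

# Made-up likelihoods: the audio fits "Laurel" only slightly better.
like_yanny, like_laurel = 0.50, 0.55

print(posterior_a(0.8, like_yanny, like_laurel))  # primed for Yanny -> ~0.78
print(posterior_a(0.2, like_yanny, like_laurel))  # primed for Laurel -> ~0.19
```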
I feel like every time I've heard you talk about the controlled hallucination, I've come a bit closer to kind of understanding where you're coming from.
But at the same time, underlying all of this for me is this presupposition that there must be experience.
I think if you say there is this thing called consciousness, now let's talk about what it's doing and what its nature is.
Yeah, there's lots of evidence to suggest that perception is as much inside-out as it is outside-in.
That's cool.
But do you see this controlled hallucination view as also like an explanation of what consciousness sort of is in the first place?
Or just why, granted the mystery of it existing at all, it presents itself in a particular way?
Much more the latter.
I don't think it really tells us what consciousness is.
I have this ongoing discussion with panpsychist philosophers like Philip Goff, who I know you know.
And he will always say to me, in fact, I saw him last week, he will always say to me,
things like everything that you, you know, you materialist-oriented scientists come up with
about the brain and its relationship to consciousness is completely compatible with something
like panpsychism, probably compatible with idealism and other things as well. And I think that's
probably true. I have no reason to doubt it. But the question for me, with a pragmatically materialist view, is, well, I don't know what things really are. I mean, as you've said many times on podcasts, you can take a chair apart, you find molecules; you take molecules apart, you find atoms; you take those apart, protons, quarks, quantum fields. What that stuff actually is, is something I'm certainly not qualified to answer. Is it mental states? Is it stuff that has, in addition to fundamental physical properties, qualitative conscious properties as well?
I don't know.
But the thing is, I think, a pragmatic materialism says that at the level of analysis that we currently are interested in, or I am interested in, which is brains, they're made of stuff. And I tend to be more interested than most people in what they're made of, too, not treating them again as just computers that happen to be made of meat.
The question is: am I moving the needle on explaining properties of the phenomenon, or am I barking up the wrong tree entirely and making matters worse? So imagine a kind of controlled experiment. You take a bunch of scientists who have a sort of panpsychist outlook, and you take another bunch of scientists who have a materialist outlook, both sort of pragmatically held; yeah, they don't really know, but that's their predilection.
And then you let them loose to construct their theories and do their experiments.
And then you come back in 50 years.
Yep.
And you see, from some view from nowhere, a metaphysical view from nowhere, which group has come up with the better, again, criterion not defined, the better account of consciousness, made more progress.
I don't know the answer to that.
but my suspicion is that at least from where we are now,
we get further with a materialist outlook.
And, I mean, why do I think that? Well, this whole view, for instance, of perception as controlled hallucination goes some way to articulating that. You know, we start with a view that our experience is some sort of outside-in readout, and then we realize that maybe it's something to do with internal generative models and prediction. And then that principle, which is based on a sort of materialist account of neurons and their dynamics, we realize, oh, we can understand emotion through those same principles.
We can understand the experience of free will within that same framework.
We begin to draw together apparently disparate phenomena in ways which have a kind of inner coherence and which have predictive and explanatory power.
We may still come up short.
In fact, we do come up short
when addressing the question of,
well, why is there consciousness at all?
But we understand more about the phenomenon.
And for me,
I wonder whether the more we do that,
the less perplexed we'll become
about the fundamental mystery
that the different metaphysical frameworks
are trying to grapple with.
Yeah, maybe.
I think the reason why I've been sort of unconvinced, unimpressed by this view of consciousness that you have is maybe that I've been mistaking what your project is.
It's a little bit like, you know, if we were talking about music, you know, there's a piano over there and you could start playing major chords and minor chords and you could start talking about like how music works.
You could start saying that there's like a harmonic series and like there are kind of, you know, you could do it in terms of strings like Pythagoras and there are sort of vibrations and, you know, the ones which are the right sort of frequency of vibration compared to others, create the happy sound and the sad sound.
And you can talk about all this stuff.
You can talk about artistic input.
You can talk about how playing various ways produces different sounds.
And you could do all of that and have this whole music theory.
You could construct the whole of the musical theoretical canon and never once have needed to know or address the question of, like, well, what is music?
Oh, it's vibrations in the air.
That's what's actually happening.
It's like, you know, the air is being vibrated and that is hitting your eardrum.
You wouldn't need to even address that to have what some people would say is like a theory of music, you know, a complete, robust theory of music.
But I want to know, you know, what's causing that musical note.
I want to know that it's vibrations in the air, right?
And I feel like with a lot of so-called theories of consciousness,
they're like these theories of music that don't address what the music actually is.
And I wonder the extent to which you see your view of consciousness
as being one or the other or both,
or whether you would say that you're sort of leaving that question
of what it actually is to the side.
I think, if I was going to be unkind to myself, it's sort of kicking the can down the road on that question. But I don't think it is quite like that, because I think the road changes and the can changes as you do this. And this is because what we think of as the mystery of consciousness is a bunch of related mysteries. There are things like: does free will exist, what is the nature of free will, things like that.
I think the more we can dissolve those mysteries, or wrap them into a single framework, a single approach, the more progress we make.
I also want to know how do we fit consciousness into our fundamental view
of the universe?
I don't know what actually exists.
You know, I think that stuff exists, but what stuff is, I don't know.
It may well be mental states.
I don't know.
But I think the kinds of theories and experiments and explanations that we can construct come about mainly through following the scientific method, which in general has been about explaining phenomena in terms of the physical attributes of things.
even if those physical attributes turn out to be non-fundamental in some way,
there is a suspicion that this will not work when it comes to consciousness.
And I just, I'm suspicious of that suspicion.
Yeah.
You know, I want to wait and see what's left. Just as we might strip away all the contingent aspects of conscious experience, let's strip away all the answerable questions about consciousness that follow from thinking about its relationship to what we think of as physical processes, and see what's left.
Maybe there will still be a very deep residual mystery, but maybe there won't.
I mean, the historical analogy which I use with caution here, because I know it's imperfect, is life.
Yeah, right.
Life was considered, well, I mean, beyond physics and chemistry; there may even have to be something non-naturalistic, which seems strange talking about life, but something supernatural. Élan vital, something not accounted for by the laws of nature as they were.
And of course, we don't understand everything about life,
but that sense of mystery has receded.
And we can now see that many things we think of as definitional of life can be accounted for through biochemistry, physics, and so on.
Yeah.
Quantum mechanics to some extent as well.
So the sense that there was something incredibly resistant to this kind of explanation didn't persist over time.
Now, life is not the same thing as consciousness.
Consciousness seems to have this different mode of existence.
But aspects of it are kind of describable.
We can describe different kinds of experience.
We want to answer questions about which kinds of systems have consciousness at all. And I think this perspective of understanding how consciousness in humans is related to the
physical systems that we are and then generalizing bit by bit outwards is the right way to
answer these kinds of questions. And back to AI, you know, we need to come up with a way of talking, a way of thinking, about whether AI is conscious, because that underpins a lot of moral and ethical decision-making. It's already affecting people's behavior.
And from a materialist, a lightly held materialist, view, you know, we can start from the assumption that there's something about the materiality of an embodied, embedded human brain that matters.
Maybe it's the computation, I doubt it.
Maybe it's something else.
We now have a research agenda about, okay, what aspect of our materiality matters and why.
And I think coming up with justified, testable examinations of that question, and hopefully answers to it, will help us say the things that we really need to say about AI, but also about non-human animals, about all kinds of situations.
So maybe this does speak to the fact that there are slightly different agendas here.
I think a science of consciousness really needs to address itself to these immediate questions as well as the ultimate questions of what consciousness really is.
Yeah.
Well, the essay is 'The Mythology of Conscious AI.' It'll be linked in the description.
Congratulations again.
It's a great read.
And it's well needed,
I think,
because there isn't much in the way of, like, level-headed, let's say, discussion about AI, sort of trying to calm people down on this front, and not from the perspective of, like, you know, avoiding doomerism and AI takeover and Terminator robots. Just sort of put that on the shelf for a moment. Let's just think about computation, consciousness, how it all works. It's fresh air. So I'll make sure that's linked in the description. As well as Being You, your book, which has been around for years now. We talked about it the first time we met, but there'll be more in there on the sort of controlled-hallucination-related stuff. Anil Seth, thanks for coming back. It's been fun. Thank you, Alex. It's been a pleasure.
