Big Technology Podcast - What Is Consciousness And Can Machines Achieve It — With Anil Seth
Episode Date: October 19, 2022

Anil Seth is a professor of cognitive and computational neuroscience at the University of Sussex and author of Being You. He joins Big Technology Podcast for a discussion of AI sentience grounded in ...science and research. In this conversation, we discuss the definition of consciousness, what it would take for AI to achieve it, and whether researchers should keep trying to get there. Stay tuned for the second half, where we cover AI avatars in the metaverse and, yes, simulation theory.
Transcript
LinkedIn Presents.
Welcome to the Big Technology Podcast,
a show for cool-headed, nuanced conversation of the tech world and beyond.
Now, if you've been listening to this show over the past couple months,
you know, we've talked a lot about the power of machines to, you know, potentially think for themselves.
We had Blake Lemoine, the Google engineer who said that its LaMDA chatbot was sentient, on, you know, a few weeks back.
And then, you know, soon after that, we had Gary Marcus, who came on and talked about how some of Blake's assertions, in his opinion, were somewhat ridiculous.
All throughout these conversations, we've talked about consciousness as sort of a given, which was obviously an oversight.
We really needed to define what consciousness is before we can ask whether machines are capable of it.
And that's what we're going to do today.
We're going to have the leading researcher on cognition and computational neuroscience in the world.
Anil Seth is here with us.
He's a professor of cognitive and computational neuroscience at the University of Sussex,
the author of Being You. He and I have spoken about this recently.
And I'm so excited to bring you this conversation.
Anil, welcome to the show.
Hi, Alex.
It's good to speak to you again.
Yeah, it's just been a few days since we were speaking in Amsterdam.
That's right.
And I knew after that conversation that we had to do this long form.
So, you know, let's not start slow.
There's a temptation, I think, to build up and then start to ask the big question in these
type of conversations.
But let's start with the big question, then we unpack it as we go.
So can machines be conscious and how do we know if they are?
That's two big questions.
The first big question, of course, it depends on what you mean by machine.
We are machines.
We are biological machines.
very, very complicated ones, and we are conscious. I'm taking that as read. We have conscious
experiences. If by machine we mean something maybe made of silicon, maybe a bit like computers
as we know them now or might be in a few years, then I'm very skeptical. I'm very skeptical. I can't
rule it out. And I think having a strong opinion that it's either impossible or somehow inevitable
are both unsupportable assumptions.
I think we have to be carefully agnostic.
The only systems that we know of in nature
that have conscious experiences are living systems.
And computers and AI as we have it now
and as we will have it in the near future
is not living.
It's very different from a biological organism.
There are many things, I think,
standing in the way of making the claim
that machines like computers, like robots, can be conscious.
The question of how we would know is even trickier, perhaps, if that's possible.
And the problem is, this is tricky not just for machines.
It's tricky in general.
We can only be 100% confident of our own consciousness.
I can only be 100% confident that I am conscious.
Even believing that you are conscious is a little bit of an inference. Now, I'm very, very confident in that, too, pretty much 100%. But the further you go
away, we only ever have direct knowledge of our own experience. And the further we go away from
that basis, the less stable these inferences become. And many of us have quite differing
intuitions when it comes to other non-human animals. Cats, dogs, other mammals, primates: many people would agree they have conscious experiences of some sort or another. But what about insects? What about spiders? What about fish? What about octopuses? What about bacteria? Things get
very, very tricky there. So this is not a simple question because we cannot observe consciousness
directly. We can only infer it indirectly from how a system behaves, what it says, what it does.
Okay. And now this is probably a good moment to ask you what consciousness is, in your opinion.
I think when you entered, when you entered the field, it was pretty ill-defined. It was sort of
like people saying, well, it's, you just know it when you see it, but there are some definitions
now. So, you know, as we get into, and I'm going to ask you about, you say it's unlikely but
not impossible. So I'm going to ask you where the not impossible side of that comes in, but I think
it's first important to sort of talk about what consciousness actually is.
It is good to start with a definition. And actually, definitions that serve us have been around for a long time. It's more that consciousness was considered a bit disreputable to
study from within psychology and neuroscience until, you know, around the early 90s.
Yeah, why was that?
Well, that's for a number of reasons.
In psychology, for much of the 20th century, it wasn't just consciousness that was kind of outlawed from research.
It was pretty much any kind of internal mental state at all.
You had this whole paradigm of behaviorism, which was especially dominant
in the US, actually, which said that it's only reasonable to study behavior because only
behavior can be observed. And we shouldn't think about inner cognitive processes at all.
That got dealt a death blow primarily by Noam Chomsky when he argued that it's impossible
to understand language purely on the basis of behavior. Yeah, we all speak. We all have language,
and language is a very important thing to understand when we want to understand human beings.
Once people are able to start talking about internal cognitive processes, then consciousness
was perhaps next on the list. There's a whole history of why in the 90s the tide turned.
Good work on consciousness was going on before then, but it was a mixture of the fall of
behaviorism, of a couple of Nobel Prize winners, Gerald Edelman and Francis Crick, giving
the area some legitimacy, and also the beginning of brain imaging, or really the widespread
availability of brain imaging. So you could look inside the living human brain while people
were having different kinds of experiences. All of these things contributed to consciousness coming
back on the menu. And I feel very lucky because I was beginning my career, my studies at around
that time. But just to return to this crucial question of definition, of course, there's still
disagreement. Philosophers and scientists will argue about definitions forever. But there's some basic
sense, I think, in which most people agree. And the definition that I think illustrates this the best
comes from Thomas Nagel, a philosopher, and he said, for a conscious organism, there is something
it is like to be that organism. It feels like something to be me. It feels like something to be
you. But it doesn't feel like anything to be a table or a chair or headphones. For these kinds of
things, I don't know why I picked headphones, but I'm just looking at them on the table.
But for these kinds of things, the intuition is there is no experience going on at all. So consciousness in this sense is very, very basic. We all do know
what it is, in fact. It's what goes away when we fall into a dreamless sleep or go under general
anesthesia and returns when we come back. It's any kind of subjective experience whatsoever.
And I think the generality of that is important because it helps us resist the temptation
of equating consciousness with something else. On this general definition, consciousness
is not the same thing as having a richly elaborated sense of self with a name and a set of
memories. We humans have that. We adult humans do, but consciousness doesn't require that in
general. And also, and probably critically for our conversation, consciousness is not the same
thing as intelligence. Saying that a system is smart is not saying the same thing as saying that that system has conscious experience. The two concepts are distinct.
Whether they inevitably go together, despite being distinct, is something we can talk about.
I don't think they do.
So that's, I think that's a good starting point.
Most people would agree that just tying consciousness to experience is important.
Some people then would use it synonymously with awareness.
I do.
I think consciousness and awareness are the same thing.
And also there's this other word sentience, which gets bandied around quite a lot too.
I tend to avoid this because, well, for some people, sentience also means consciousness. For other people, sentience just means
that a system responds to input in some way, in which case a robot could be sentient,
or my central heating system could be sentient for that matter, with no connotation that
there's any experiencing going on. So I would leave that aside and think about consciousness
as this central concept that emphasizes that experiencing is happening.
Yeah. And it's interesting how we take this one, you know, really tough to define term consciousness. And then we get into the answer and the answer has another really tough to define term, which is feeling. What is feeling? Well, yeah, why don't you answer that?
Well, yeah, you're right. There's something a little bit circular about defining consciousness this way because it just keeps pointing back to this central fact that experience is happening. When I open my eyes, it's not just that my brain is registering visual information and guiding my behavior. I open my eyes and colors happen in my mind.
There are experiences of colors and shapes and I can perceive consciously objects. It's not just
that I respond to them. That's what feeling means in this context. It feels like something to
have a visual experience. It also feels like something to have an emotional experience too.
So yes, the word is a little confusing because feelings can be used to mean emotions. But I'm
using it here, just to emphasize that when I am conscious, there is something that it is like
to be me, back to Nagel's definition. Right. That is some experiencing happening. So why do you
find it so difficult for machines to have that experience? I mean, and then talk a little bit about
how you think that it's not impossible. So you do leave open a possibility that that can be something
that a machine or software program can actually have at some point.
Yeah, I think you have to leave it open because we just don't, and we have to leave it open mainly for the reason that although we now understand quite a bit more than we used to in neuroscience, psychology, philosophy, about the brain basis of human and animal consciousness, we don't know everything.
And in particular, we don't really know, although some people will claim otherwise, we don't really know what the necessary and sufficient conditions are. We know in a human brain that if you knock out parts of it, consciousness goes away
forever. And there are experiments going on that argue about which parts of the brain are more
important for consciousness as it happens in humans. But we can't really generalize beyond humans
to arbitrarily different systems. So it's difficult to know what criteria we would use
to say that a machine has consciousness.
This is different from intelligence, because intelligence can be defined from the outside.
We can define intelligence in terms of capabilities. General artificial intelligence, as it's often called, would be an AI that has the cognitive and behavioral capabilities of a human being.
And that can be assessed from the outside.
It's still very tricky to do, but it's sort of not impossible in principle. But consciousness is very, very difficult. So we don't really know the necessary and sufficient conditions, which means we have to leave the possibility open. Why am I
suspicious about it? Well, I think there are two big assumptions that underpin the eagerness with
which some people attribute consciousness to computers to AI. The first of these is that
intelligence and consciousness go together in some intimate and unavoidable way that intelligence
just implies consciousness. And I don't think this assumption is very well supported. I think
it comes from a human tendency to see ourselves at the center of everything, as the pinnacle of everything, this kind of human exceptionalism, where we know we're conscious and we think we're intelligent. So we assume the two are very, very closely, perhaps necessarily, linked.
And then the argument often goes that, okay, so when AI reaches a certain level of intelligence,
which is often but not always human level intelligence, then that's the point where the lights come on.
And the system is not only intelligent, it's also conscious.
But I don't really see any justification for that at all.
In the animal kingdom, there may be many organisms, many species that have conscious experiences,
but don't stack up against our human, sort of human-oriented view of what intelligence is.
You don't have to be particularly smart to suffer, for instance, to have experiences of pain
or disgust, which might be very important for many species in staying alive.
So that's one.
But the other really big assumption that people make is, in philosophy of mind, it's called
functionalism.
And this is the idea that the stuff that something is made out of doesn't really matter.
It sort of couples two ideas together.
One is substrate independence.
So applied to consciousness, the idea is it doesn't really matter that we're made out
of carbon and that we have these kind of flesh and blood bodies and biological neurons
with neurotransmitters washing around.
That's just the substrate.
What really matters is what functions the substrate supports, what functions the mechanism of the brain implements.
And that if you could implement the same functions in a different mechanism,
then you would also get the same not only behavior,
but the same potential for consciousness.
So this assumption of functionalism is really quite common
that consciousness is just a consequence of the function of a mechanism.
But I think it's a really strong assumption, and it may be true. I can't prove that it isn't. But it may well not be true, because it's only true for some things. So computers, we all know by now, can do things like play Go very well, very, very well. And they actually play Go. Now that's fine. Playing Go is substrate independent
in that sense. But then there are other things which aren't. So a computer simulation of a weather
system, however accurate, however detailed, however complicated it is, it never gets wet or windy
inside the computer. Rain is substrate dependent. So simulation is not the same. The substrate in
this case would be the stuff the computer is made out of. Right. So, you know, you make a computer out of silicon, it can play Go against a human being. It's fine. Go is being played. But a computer that is simulating a storm, it doesn't actually get wet and windy.
Yeah, I mean, it could get wet.
It would just break, so.
It could, but it's not getting wet in virtue of simulating rain.
That's just stuff that's going on inside, transistors switching and so on.
So the question is, what's consciousness like?
Is it more like Go or is it more like the weather?
If it's more like Go, then indeed it might be substrate independent.
And then there's no obstacle to things that are made of silicon, made of tin cans, strung together the right way, being conscious.
But if consciousness is more like the weather or more like digestion, for example, in real bodies, digestion is a biochemical process that goes all the way down to the molecules, then the idea that a system, you know, a computer system could be conscious just doesn't work.
Yes, sure, you could simulate it, but it wouldn't give rise to the phenomenon. There's this
distinction between simulation and instantiation. And it's important to just keep that in mind because
the fact that you can simulate something does not mean for all things that you also give rise
to the thing that you are simulating. Right. And can I suggest that, okay, these are physical
experiences, right? The experience of weather, for instance. But it seems to me that when it comes to consciousness, what you're really sort of pointing to, and you can correct me if I'm wrong here, is that it's somewhat metaphysical, right? That it's not simply a chemical reaction to experience, that there's something beyond maybe what we can explain with science in terms of what that actually is.
I'm not sure I agree with that. Of course, there's a debate.
Let me just, you know, sort of give a secondary to that, and I want you to answer. But, like, if it's not metaphysical, then why can't you, you know, eventually code that into machines?
Okay, because there are two different things going on here.
There is a legitimate debate about metaphysics in philosophy.
For instance, whether a scientific, materialist understanding of the brain as a machine of some sort,
of a mechanism of some sort, will ever explain the fact that experience is going on.
The philosopher David Chalmers has called this the hard problem of consciousness.
He asks why it should be that physical processing gives rise to a rich inner life at all.
It seems objectively unreasonable that it should, and yet it does.
So the hard problem is the intuition that a science of anything, whether it's computers or biology, physics,
any level of description of a system is going to fall short of explaining why and how consciousness happens.
That's the intuition behind the hard problem.
That's a metaphysical challenge.
And it's still out there.
I happen to think it's probably not going to turn out that way.
Just we underestimate the resources of materialism.
But that's very different from the claim that, okay, let's assume that we can.
Let's assume that there is a scientific materialist explanation for consciousness.
We haven't found it yet, but let's assume that some future neuroscience will have shown satisfactorily
how certain physical systems give rise to conscious experiences.
That does not mean that consciousness is therefore something
that you can program into a computer.
That's a separate thing.
Only certain things you can do that for.
You can do that for go.
You can't do that for weather.
You can't do that for digestion.
You can simulate these things,
but you can't generate them in a computer.
So the two are actually very different claims.
Okay, interesting. And then going back to your definition of it feels like something to be that thing.
You know, the interesting thing, and this is something Blaise Agüera y Arcas, the Google researcher who's worked a little bit with Blake Lemoine, has brought up, is that maybe there's something in the middle between, you know, being, you know, purely unconscious and then conscious in the way we think about it.
In that, the chatbot LaMDA, you know, not only are we trying to interact with it, but it's, you know, making a model of us, and it remembers us, and it sort of, you know, has an experience, potentially, of speaking with people. And listen, I think that, like, I'm bringing this up for, you know, the nature of the conversation. I'm not, you know, with Blake Lemoine, and I mentioned this on the show, I'm not with him in saying that LaMDA is sentient. But I do think that it's interesting, you know, in terms of, like, when I think basically about the "it feels like something to be something."
You know, maybe there is.
Maybe it does.
Well, I think we can't rule it out, but I really don't think there's any strong,
any convincing reason to believe that's what's going on at all.
It's one of the things that Blake has argued, which I totally agree with,
is that by focusing so much on this question of whether the chatbot or whatever it is next,
has conscious experiences, we just stop paying attention to how remarkable some of this
technology actually is. The chatbot Lambda is so much better than chatbots of a few years
ago that it's really quite uncanny. It really is very, very impressive. And as you just said,
it does interesting things. Like, it will, as I understand, it will make a model of whoever it is
talking to. It tries to have a generative model of the mind it's interacting with. This is
pretty clever. This is cool stuff. But again, there's nothing in there that means that, okay,
we've now crossed the line. So now awareness has to be there too. That's a totally different claim.
It may be that in biological systems that have this capability of, for instance, modeling the minds
of others, having a kind of theory of mind, that we do this in a way that's inextricable from being conscious.
That's the evolutionary path we followed.
Evolution just hits upon consciousness as a very useful trick for us to do many,
many things, and we do it in virtue of being conscious.
But that doesn't mean that's the only way to do it.
And AI generally does things in a very, very different way from brains as we know them.
I know this is a point Gary Marcus makes quite frequently.
Much of AI at the moment is very, very clever pattern recognition.
With added things now too, with generative models and so on and so on, but it's still
quite a way off what real brains are doing.
And even if they become closer, we still have this issue that for consciousness, it's still
unknown whether simulation is going to be the same as instantiation.
And what we're going to end up with, I think, is a situation where we have machines
that become very convincing that there's a conscious mind behind them, much more convincing than even LaMDA. I mean, LaMDA, as has been pointed out many times, is really quite
easy to catch out. I think there was one snippet of conversation where somebody said,
so what makes you happy? And LaMDA said something like friends and family, but it doesn't
have any friends and family. This is a bit of a giveaway, that it's just an algorithm that's
figuring out this is the kind of thing to say in this kind of situation. Now, maybe LaMDA in five years or 10 years will not make that kind of mistake, and we'll be in this very curious
psychological situation where we almost can't help attributing a conscious mind to the thing
we're interacting with. Yet we will have no good reason to believe that that is actually
a conscious mind behind it. And I think this is going to be quite challenging for society.
And so then when does it, that's really interesting that it's going to, I mean,
basically fool everybody into thinking, potentially fool everybody into thinking it's
conscious. So, but it won't be. So when does it cross that line? I mean, is, again, like,
if it makes a model of the world, if it makes a model of the people that it speaks with, if it,
you know, has some, you know, instinct of self-preservation coded into it, where does it cross
the line from being really good at tricking people to actually being that thing that you would call
conscious?
Yeah, nobody knows. I think this is the truth. The reality is nobody really knows where that line is crossed, because nobody really knows, hand on heart, what the sufficient mechanisms for consciousness are in anything. We just don't know,
which is why, by the way, I think that the whole idea that people should just set out
to try to build a conscious machine is a really terrible idea because we might do it by
accident without really knowing when we've crossed that line.
And crossing that line might be, if it is true that consciousness is
substrate independent, i.e. that it can happen in a digital computer, it may be
not so far away. And it may not be a function of intelligence. It may be just, oh, yeah,
we give it the right kind of sense of self. We give it the right kind of self model.
It might be that. Or it may be a very long way away. If consciousness turns out to really depend on the stuff that something is made of, then we'll have to wait for properly biological computers.
And in this case, other technologies like brain organoids, which are lab-grown little mini-brains
made out of neurons and human embryonic stem cells. That's a much more worrying technology
in the sense that there you've got the same substrate. And these organoids might not be very
smart at all. But they're made out of the same stuff.
So the prospect of at least some minimal form of awareness happening in a brain organoid I think is a much closer prospect than consciousness in a computer, even if the computer appears to be quite smart, purely because the computer is made of something different, and we just don't know whether that makes a difference or not.
Anil Seth is with us.
He's a professor of cognitive and computational neuroscience at the University of Sussex, the author of Being You. He's also running this thing called the Perception Census, which we're going to talk about a little bit in the second half, but if you want to give yourself a head start looking at it, you can find it at perceptioncensus.dreamachine.world.
We'll be back in the second half to talk a little bit
more about why forging ahead with this without fully understanding what might make it conscious, what might make AI conscious, is a terrible idea, as Professor Seth puts it. So why don't we pick up the second half talking about why this might be so bad? Maybe we talk a little bit about those brain organoids as well. We'll be back right after this.
Hey, everyone. Let me tell you about The Hustle Daily
Show, a podcast filled with business, tech news, and
original stories to keep you in the loop on what's trending. More than
two million professionals read the Hustle's daily email for its
irreverent and informative takes on business and tech
news. Now, they have a daily podcast called the Hustle Daily
show, where their team of writers break down the biggest
business headlines in 15 minutes or less and explain
why you should care about them. So, search for The Hustle Daily Show in your favorite podcast app, like the one you're using right now.
And we're back here for the second half with Professor Anil Seth.
He's a professor of cognitive and computational neuroscience at the University of Sussex,
author of Being You, which I read over the weekend and quite enjoyed.
And let's go back to this idea of forging ahead, trying to build, because everybody's trying to build general AI, like, and of course, conscious AI to many people is sort of part and parcel of that.
And I think you might take some issue with that.
But clearly they're trying to do both.
You have this quote in your book, Professor.
We should not blithely forge ahead attempting to create artificial consciousness simply
because we think it's interesting, useful, or cool.
The best ethics is preventative ethics.
That's really fascinating.
So we're, you know, it's kind of interesting because we're at this moment where like everyone's
like, you know, that's far away or highly unlikely, including yourself.
And yet there is this concern, like, you know, about continuing to go forward because it might unleash this Pandora's box.
So can you unpack that a little bit?
Sure.
So, yeah, centrally, the problem is we won't really have a good idea about when it becomes
possible to create artificial consciousness.
And our own intuitions about it, as systems like LaMDA have demonstrated, are incredibly
unreliable because we tend to project consciousness into things that appear similar to us.
So the better we get at developing AI that can mirror back to us what we're like, how we talk, how we behave (combine a future LaMDA with deepfakes and so on), at least then on video, we're not that far away from systems that will give us a very compelling impression of being conscious. And that's a very unreliable intuition.
But why should we be worried about it happening by accident?
And there's a very basic reason for that, which is that once something has conscious experiences, then we have a moral and ethical obligation towards that thing.
There is the potential for suffering.
If there's the potential for artificial consciousness, there's the potential for artificial suffering.
And just imagine a world where we accidentally, and we don't even know that we've done it, maybe it's not something that appears particularly human-like, we build some system.
And for that system, its conscious experience is aversive.
It's negative.
It feels bad to be that system or some version of bad for whatever that system is.
And then we replicate this system a million times, a billion times across the planet.
We have just increased the net amount of suffering in the world by a huge amount.
And we won't even know that we've done it.
And this may seem very far-fetched.
It may seem very, given all the clear and present ethical worries about the intelligence part of AI, of which there are very many.
I mean, AI is so disruptive in so many ways.
This may just seem idle philosophical speculation.
But the possibility, however remote, and again underlining, we just don't know how remote that is,
I think it's sufficient to give us pause and give us some motivation for developing a framework
about how we should think about this, what research efforts should we encourage,
what sort of regulations and frameworks we should put in place.
There's a German philosopher, a very good friend of mine, Thomas Metzinger,
who has argued that we should basically stop doing all research that uses computers to try to understand
consciousness for precisely the reason that we might accidentally create consciousness and that
would be an ethical catastrophe. Now, I don't tend to go that far because in my view, what we really
need to do is understand more about the brain basis of consciousness so that we can make more
informed decisions about the likelihood of consciousness beyond the human. Right. And in the first half,
we talked a little bit about, you know, whether this is potentially like totally explained physically
or, you know, potentially explained metaphysically.
I mean, what's your read on it?
It's just like some people will say, you know,
that love is something beyond a pure chemical reaction
and other people will clearly point to like what happens
with the release of oxy, what is it, oxytocin in the brain.
Oxytocin, yeah.
And say, no, there's no such thing as love.
This is just chemical?
So is consciousness the same way?
Do you, you know, I mean, I don't know if this is fully solved yet,
but, you know, in your, in your heart,
do you think that this is, that it is purely physical, something you could explain with physical
sciences, or is there something else there? In my heart and in the depths of my mind, I am not
100% sure. But what I am? Okay, what's your hunch though? Yeah. My hunch is that it's the best
strategy for us to follow to take the view that it is and see how far we get. That it is physically
explainable.
Yeah, yeah, I think so. There's this whole program where, basically, you can follow the path of working on consciousness, trying to understand it, and you don't at the beginning have to make a call either way. We know that consciousness is not independent of the brain.
There are very, very lawful relations. You give some anesthesia, you lose consciousness.
The anesthesia goes away. It comes back. You give people psychedelic drugs. They have all sorts
of weird conscious experiences. You manipulate the brain, then experience changes. And we can start
to build bridges between these two levels of description. You know, you mentioned oxytocin and
love. And of course, love is not the same thing as oxytocin. They're different levels of
description. But ultimately, you know, something like love or any emotion is the sum of all sorts
of other things that are going on. But it's the description of all those sorts of other things
at a higher level of description. It's not just a molecule. I get really frustrated with dopamine as well.
People sort of attribute dopamine with so many different magical properties.
It's just a chemical.
It's the brain, the body, the organism within a network of other organisms that has the
property of social bonding, let's say, that doesn't apply to a chemical itself.
So having different levels of description is perfectly compatible with there being a scientific
explanation underpinning that.
I mean, we look at a flock of birds.
The flock of birds looks like it has an identity of itself. But there's nothing mysterious about that being true while it also being
true that the flock is just made up of a bunch of birds. So all of this stuff is perfectly fine.
Now with consciousness, there may be something else. This is back to David Chalmers' description
of the hard problem. It may be that whatever explanation in terms of chemistry and physics and
biology and neuroscience, there will always be something left over about consciousness, some residue of mystery. Now, I basically tend not to worry about that because we won't know
unless we really try to explain consciousness in terms of underlying mechanisms. This is what I call
the real problem of consciousness rather than the hard problem, to explain, predict, and control
properties of conscious experiences in terms of their underlying mechanisms. Now, this isn't the
same thing as explaining why and how consciousness is part of the universe in the first
place, it's a slightly different thing. It accepts that consciousness exists and tries to
explain why it comes and goes in certain situations, why vision is different from emotion,
is different from smell, what an experience of free will is like and what work that does
for an organism. These are all things that can be tackled by neuroscience with computational
modeling and all these other useful techniques we have. And we will see how far we get. And maybe at
the end, there will be still something that that is not explained. But maybe they won't. Now,
a similar thing happened with the study of life about 150 years ago. Eminent scientists and
philosophers thought that life could not be explained by physics and chemistry. There had to be
something kind of supernatural, some sort of spark of life. But of course, that's not the case. And
life was understood not by people eventually finding a spark of life or proving that life
doesn't exist, but by instead of treating life as one big scary mystery, understanding that
life has many different features, there's metabolism, there's reproduction, there's all sorts of
things, explaining the properties of living systems. And bit by bit, the intuition that there
was something mysterious about life faded away. And the hard problem of life wasn't solved, it was dissolved. And so that's why I really believe that by just doing the hard work of explaining, of understanding how the brain and the body relate to and explain properties of consciousness, we may not solve the hard problem, but we may dissolve it.
And you mentioned free will in your answer, so
I'm curious, having studied the brain as much as you have, do you believe in free will?
It depends what you mean by free will again.
So I don't think there's any sort of immaterial essence of me that swoops in and changes the course of physical events.
You know, pull strings inside my brain or body and makes things happen that wouldn't otherwise have happened.
I think the whole, there's a lot of discussion when people talk about free will about, well, if the world, if the universe is deterministic, then there can be no such thing as free will.
If there's some amount of randomness, some amount of stochasticity, then maybe there's a little elbow room for free will to come in and swerve reality one way compared to another.
I think that's totally misguided. That's such a red herring.
For me, we are complex organisms. Some of the things we do are very immediate, instant responses to our environment.
We put our hand on a hot stove.
We take it off even before we've noticed.
Other things seem more deliberative and volitional.
Before this podcast, I went to the kitchen and made a cup of tea.
And that felt like a voluntary thing to do.
I did that because I wanted a cup of tea and something to drink while we were talking.
This is fine.
This just means that the causes of my organism doing that came more from within rather than
from the immediacies of my environment.
Mike Shadlen has this lovely term for voluntary behavior.
He calls it freedom from immediacy.
And when organisms like us make actions that have this kind of freedom from immediacy,
the causes of which extend way back through the body, maybe back in time. And why did I make tea?
It's because I'm English and therefore I like tea, but I didn't choose to like tea.
And that wasn't something, a choice that I made.
That's just the way I am.
That's the system that I am.
When we as an organism make these kinds of actions,
then I think the sort of subjective flip side of that is the experience of free will.
So we experience free will to basically label in our ongoing flow of experience
those actions that come more from within than from outside.
And why do we do this?
It's not because the experience itself causes the action.
That would be back to this kind of immaterial stuff parachuting in and changing things. I think it's so that the organism can learn.
So when we make a voluntary action, it's useful to label it so that the organism can pay
attention to the consequences and maybe do something different the next time around,
because then the system will be different. So in my view, there's nothing particularly
mysterious about free will. We experience it. These experiences are fundamentally tied to our
behavior in useful ways, just as, let's say, the experience of color is very useful for an
organism to have, even though color doesn't exist as a mind-independent property of the world.
So, yeah, we have many degrees of freedom.
We exert control over these degrees of freedom, or rather our brain-body does.
And in certain situations, that's accompanied by a particular kind of experience we label as
free will.
And there's a good reason for that.
Yeah. Now, look, I lived in Silicon Valley for long enough that I eventually ran into people who were coding these programs and started to come to believe that we were, you know, living in a simulation coded by someone else, which is an interesting idea.
Yes.
And, you know, effectively that, you know, maybe we are the future, you know, instantiations of LaMDA created by some society or person and, you know, that's, you know, a world we can't see. So anyway, I feel like you're a good guy to run that by. What's your reaction to that?
Well, the simulation hypothesis. Yeah, I don't know,
the simulation hypothesis, this really does wind me up a bit, actually. Okay.
There's, it's no coincidence, is it, that it seems very popular in Silicon Valley, where people are building complicated simulations. So there's, I think there's a bit of a God
complex going on there to start with.
Wouldn't it be nice to see yourself at this pivotal moment of technological development?
There's also a bit of an immortality complex happening here, some kind of techno rapture that, you know, if that's true, if we are, if we are sort of simulated minds in a big universe-wide computer program, then maybe we can find a way to upload ourselves to a different simulation and live forever.
All of these things get a bit mixed together, I think, the idea of being this powerful tech god and immortality and, yeah, it's all a bit unseemly to me.
And the basic idea of it comes from a very interesting philosophical paper by Nick Bostrom, where he tries to lay out a statistical argument for why it might be the case that we indeed live in a simulation.
Basically, the argument being that if we don't exterminate ourselves, which seems very questionable,
seems very likely that we will.
Yeah, rough week to bring that one up.
If we don't, yeah, exactly.
If we don't, and we keep on building more powerful computers, then at least some people who build these really powerful computers will be interested in building simulations of their ancestors.
and because these simulations could be run many, many, many times in parallel,
then it might be more likely that we, in fact, are in one of these simulations than in
the base reality.
I don't like this argument because it's basically putting probabilities on things which we really have no good way to estimate at all, and assuming things like what some far-future mind might find interesting,
which is also a bit strange.
But also, and Bostrom does say this in his paper,
it brings us back to a running theme of our conversation,
which is the idea that consciousness is something
that can be generated in simulation,
that it's substrate independent.
So Bostrom explicitly says,
yeah, there's actually another assumption for his simulation argument,
which is that functionalism is true,
that consciousness is something that could exist in a computer,
equally well as it exists as a result of a biological brain.
And he says in this paper, this is kind of widely accepted in philosophy of mind.
And I just don't think it is.
Certainly, I don't accept it.
I think it's a very, it may be true, but as we've discussed at length, there's no good
reason to believe that it is true.
Right.
And, you know, there's been a theme that's sort of underlined this entire second half
here.
And, you know, you just brought it up in terms of, like, people's interest in playing God.
And, you know, it's interesting that, you know, you're on this path to try to explain
consciousness as we can in the physical world.
Yet you begin your chapter about artificial intelligence with a spiritual story, the story
of the Golem in Prague.
And, you know, I just wanted to hear your thoughts on, and, you know, it is interesting
because, you know, maybe it is entirely physical, but it's really hard to stay away from
the metaphysical and the spiritual in this conversation.
And I'm curious, A, if you think there's, you know, aside from, well, maybe it's part of the same thing, the causing of suffering, whether there's spiritual risk to this type of work.
And then I'm curious why you think people are so interested in playing the role of God, you know, creating life from nothing, right, as opposed to through nature.
And what do you think that says about us?
Yeah, I mean, let's take that second bit first.
I'm not sure that everybody is interested in doing that.
I agree that there is this prominent theme throughout history
where at least some people are.
Well, there's a large portion of AI researchers that do so.
Those are the ones that I'm focusing on.
Maybe everyone was a little too broad.
Well, I think that could be,
so there's one very good reason for having that kind of motivation,
which is the idea that we can't fully understand something
until we build it.
This is this sort of idea of understanding through synthesis.
And that's very reasonable. For complex mechanisms in particular, we often need to build them, to simulate them, to understand what's going on. Not everything will come down to a magic equation or two that suddenly explains everything. So that's a good reason. But I think what you're hinting at is that there's something more going on than just this sort of scientific motivation,
which is this really wanting to create life,
really wanting to create artificial consciousness.
And it's that part that I am a bit suspicious of.
I think there have to be other reasons.
There have to be good reasons for trying to build something that is alive
and especially build something that has conscious experiences
beyond the fact that it might give you a sense of extraordinary self-importance
to be the person that does that.
That's not a good reason at all.
Why do we, why do some people have this desire?
I mean, I'm not one of them, so I find it a bit difficult to understand.
Now, I would be very, very concerned if that was in the future of my research.
I'm fairly confident that's what I'm not doing.
I'm sufficiently confident that the kind of computational models I'm working with
are not on a path to becoming conscious that I'm happy to continue to do that work.
If it was the other way around, then I'd be, I'd be questioning myself a lot more.
Yeah, and I definitely, you know, we did this conversation at the World Summit AI, which I'm actually hoping to put on the feed sometime soon, and, you know, asked these spiritual questions.
And I feel like, you know, they're important to address and can be unpopular.
I definitely got a few angry tweets about it.
But look, it's sort of like this is part of this discussion.
And, you know, it is interesting that this research is taking place in the context of, you know, religious participation being on the decline
in a big way, at least, I don't know, in the U.S., and I'm sure elsewhere.
Yeah, there's a lot going on, and I think that there is a spirit, there's certainly a spiritual
dimension to trying to understand consciousness. There's this constant interrogation of
what it means to be a person, what it means to be an individual. And this is the question that
I try to explore in the book with the title Being You.
You know, what does it mean to be the individual that you are?
Is it the set of memories that you have?
Is it this experience of free will that you have?
Is it the experience of the body that you have?
Is it the moods, the emotions?
What is it?
And does it change?
We tend to have this idea that there's an essence of me or you that is fairly persistent,
fairly stable, maybe even completely immutable and transportable from one body to another in some ideas, some frameworks.
But as the spiritual traditions like Buddhism have long taught us, this is a very, A, it's very
unhelpful, and B, it's really not a very accurate description of what experiences of self
are like. They're continually changing. And our relationship, both to the experience of self
and to the experience of the changing self, also changes. Sometimes we can be
quite comfortable with the changing self.
At other times, we feel we struggle against it and we want to hold on to this notion
that there's a deep continuity.
And so there's something inevitable when you try to understand the brain basis of conscious
experience, both of the world around us and of the self within that world, that you'll interact, I think in a productive, complementary way, with many ideas from spiritual traditions, because the questions are the same. Why? What's going on? Why am I here? What's the meaning of it all? These things do come up, and I think it's wrong to just reject them and say, well, that's not, you know, that's not a hard-nosed scientific question. They're questions which a deeper understanding of the biology of consciousness and self speaks to, and I think it speaks to them in a useful way.
Yeah, we've covered a lot of ground so far. We've talked about machine consciousness, human consciousness,
spirituality, free will, and simulation theory.
So there's just one more thing I want to bring up, you know, as long as we're here covering
the full broad base, which is that, you know, there's another thing happening as all
this research is going on, as these questions are being surfaced, which is that some companies, like Meta, are trying to push us into a metaverse where, you know, maybe one day you won't be
able to tell the difference between whether you're speaking with a human or a simulated being
somewhere inside there.
And I'm curious what you think that that's all about.
I mean, in terms of like your research and when it comes to, you know, our interactions with
others, if we start moving to the Metaverse and we're living in this hybrid society,
you know, are there things we should be concerned about and thinking about?
Absolutely.
Yeah, so here's where I think the real clear and present dangers of AI reside, or at least some of them.
Again, setting aside the possibility of real artificial consciousness, it is true that technology
is going to do a much better job, whether in the metaverse or somewhere else, of convincingly
imitating it, especially if interactions are remote and virtual.
Hardware robotics is still a bit lagging.
and we will indeed live in a strange world where we might not be able to tell the difference
between a real mind, a conscious mind, and a simulated avatar mind.
And when I say we might not be able to tell the difference,
I don't mean this in the sense that we just wouldn't know enough and if we knew a bit more
we would be able to.
I mean it in the sense that, for instance, we see in some visual illusions. There are visual illusions, for instance, where I can show you two lines and they look different lengths, and then I can prove to you they're the same length. Even when you know that,
they still look different lengths to you. We open our eyes, we see colors. We can do the research
and realize that colors as we experience them don't exist out there in the world, but that
knowledge doesn't prevent us from seeing colors. So there are things that we would call
cognitively impenetrable. And that's what I worry might happen in the reasonably near future
in AI, in the metaverse or wherever, that we will get cognitively impenetrable virtual
avatars that we are fundamentally incapable of not perceiving as being conscious.
And here's where I do think we need proactive regulation and ethics, because there are
many dystopian situations that can arise in this. We could be much more easily manipulated. We could be in a situation where we're encouraged to, you know, sort of Westworld
type scenario, for those of you who've seen that science fiction film and series, where, you know, we're encouraged to take out our most depraved instincts on systems that we know are machines, but nonetheless, we can't help perceiving as being aware. And what will that do to our minds? It's kind of sociopathic. So I think that, and then there's the other thing, which is that,
And it links to one of my projects, which you mentioned right at the top, the Perception Census, which is trying to measure how different our inner experiences of a shared world
really are.
We don't know very much about this.
And so when we're interacting with artificial systems, these systems, even LaMDA, as you said
a minute ago, are making assumptions about the structure of the mind they're interacting
with.
So now we have a whole sort of new set of potential biases to worry about, the biases that come about by not recognizing the diversity of our inner lives, just as we've had plenty of problems with biases in AI that arise from not recognizing our externally visible diversity in things like skin color and so on.
So all these issues, I think, are much higher up the priority list, for me, than worrying about a machine suddenly becoming aware. I think we should worry about that a little. I like to think of a worry budget.
I've got 100 pounds. How do I spend it on worrying? It's important to spend maybe five or
10 pounds on worrying about artificial sentience, artificial consciousness, because the consequences
would be so world-changing and catastrophic, potentially catastrophic. But the rest, the 95 quid,
I think we should spend on worrying about those things which are very likely to happen, if not
already happening. And exactly your scenario of us being in an environment where we may be
constitutively unable to distinguish whether we're interacting with a real human or an avatar
raises so many tricky ethical issues that we need to get our regulations, what we want and
what we don't want, straight. And we need to get that straight now.
Yeah, after speaking with Blake Lemoine, I went to do some reporting and tried to find some people working on the original LaMDA system, and actually did get someone on the phone, and I wrote about this in Big Technology, who said that this is not going to stay as, you know, chatting in a messaging system, but it will, you know, end up having an avatar, a physical, you know, some, you know, a digital form, and a voice potentially, and you might talk to it. And that doesn't seem like it's too far away.
No, it doesn't. And again, it's like it's not cool to just do that because it seems cool. That has massive psychological consequences
when you start doing stuff like that. Yeah. All right, before we go, I definitely want to get
a shout out to the perception census in. So do you want to share a little bit more about what that is
and how people can participate? Oh, yeah, I'd be very glad to. So it comes out of a project
that I've been doing for the last couple of years working with philosophers and composers and
architects, actually, called the Dreamachine. And the Dreamachine, it's just finished a run in the
UK. It's based on an old art idea, art device, in fact, which was a very simple, very bright,
flickering light that if you sat in front of with your eyes closed, you would have extraordinary
visual experiences. The Beat Generation artist Brion Gysin, with William Burroughs and others, developed this in the late 1950s, and he called it the Dreamachine. It remained on the fringes of art and culture for a while. But there's also a neuroscience background to this too, and it's something
we study in the lab. We use bright stroboscopic light to induce interesting, very interesting, very powerful visual experiences in people with their eyes closed. And it's a window into the brain basis of visual consciousness, because we're doing stuff to the brain, people have their eyes closed, yet they have these interesting experiences.
And anyway, in the last two years, working with this larger group, which is called Collective Act, we developed the Dreamachine as a public event again, reinventing Brion Gysin's idea for
the 21st century.
And we've had a collective dream machine where 30 people at a time would go in, have a visual
journey with their eyes closed, and then emerge and reflect on it.
And we've had lots of people come through over the summer in the UK.
And one of the things that became very apparent
was that everyone has a different experience.
Everybody, even in the same situation,
comes out with a very different inner journey
that they've just had.
And this is something that I find fascinating
and also an impetus to study this larger question,
which is that this is true for all of us
in everyday life as well,
not just in the dream machine.
We might, we're talking to each other
over a computer now, but imagine that you were here and we both look out of the window and look
at the gray sky because of course it's a gray sky in Brighton. Are we having the same experience
of color or of the background sound? Probably not. You know, we all have, just as we all differ in
our body shapes and sizes and skin colors, we all have slightly different brains. So we all differ
on the inside too. And sometimes these differences are quite large. People have florid
hallucinations. They see things that other people don't. Some people have autism, and their inner lives are sufficiently different that it surfaces in their language and behavior. This is all true
and important. But I think what's often overlooked is that there's a wide range of diversity among
everybody. We all differ. And maybe we just differ in ways that are not sufficiently different to notice.
We use the same words. We'll use the same language. We behave the same way.
And it seems as though for each of us that we see the world as it is. That's the other thing. We open our eyes, it seems as though we're seeing the world as it is.
And so it can be very hard for us to realize that that's not what's going on. We're constructing a subjective world in a way that's very dependent on the particularities of our own minds and brains.
So this is a very long answer to introduce the perception census, which is our attempt to actually
measure and map out this hidden world of inner diversity.
Not a lot is known about it.
And so we designed a series of short, hopefully fun, hopefully engaging and informative little
illusions and interactive experiments and things and surveys that anyone can do, anyone
over 18, anywhere in the world, can do. All you need is a computer.
It has to be a laptop or a desktop.
You can't do it on a tablet or a phone because we're trying to do proper experiments.
And through this perception census, we'll begin to get a sense of how rich, how varied,
and how wonderfully diverse our inner world can be.
And it's a bit of an exploratory study.
So we're just hoping for tens of thousands of people to join us, spend half an hour, an hour with us. I mean, if you do the whole thing, it takes longer, but you can do it in chunks.
You can do a bit, leave it, come back. And at each point, we try to explain why we're asking
the things we're asking and help you learn something about your own powers of perception
and how they relate to others. So if you would like to help us with this research, I'd be
enormously grateful for people to give it a go and look for the perception census online. It's easy to
find. It's perception census. You can just Google that or it's hanging off my website in a very
prominent place too. And yeah, please consider taking part. It would be amazing. Yeah, I'm definitely
going to do it. Perceptioncensus.dreamachine.world is the URL, I believe. And this has been
such a great conversation. And I really appreciate you being here. Alex, thank you very much.
It's been a real pleasure. I'm so glad we had the chance to dive a bit deeper into the issues that
came up in Amsterdam.
Yeah, definitely. Me too. Okay, everyone, that will do it for us. Thank you, Anil Seth, for being here with us. What a great conversation. We're going to keep going on these conversations about machine cognition because, you know, I personally can't get enough of them. And I think they're really important to talk about. And I think you need many of them to really get an understanding of the full picture. So, thanks for staying with us. Thank you, Nate Gwattany, for doing the audio. Thank you, LinkedIn, for having me as part of your podcast network. Thanks to all of you, the listeners.
Thank you for being here.
We will have a new episode with a tech insider or outside agitator coming next Wednesday, as we always do.
So we hope to see you then on Big Technology Podcast.