Decoding the Gurus - Interview with Kevin Mitchell on Agency and Evolution
Episode Date: March 6, 2024

In this episode, Matt and Chris converse with Kevin Mitchell, an Associate Professor of Genetics and Neuroscience at Trinity College Dublin, and author of 'Free Agents: How Evolution Gave Us Free Will'. We regret to inform you that the discussion does involve in-depth discussions of philosophy-adjacent topics such as free will, determinism, consciousness, the nature of self, and agency. But do not let that put you off! Kevin is a scientist and approaches them all through a sensible scientific perspective. You do not have to agree but you do have to pay attention! If you ever wanted to see Matt geek out and Chris remain chill and be fully vindicated, this is the episode for you.

Links:
Kevin's website
Robert Sapolsky vs Kevin Mitchell: The Biology of Free Will | Philosophical Trials
Kevin's TEDx Talk: Who's in charge? You or your brain? | Kevin Mitchell | TEDxTrinityCollegeDublin
Transcript
Hello and welcome to Decoding the Gurus, the podcast where an anthropologist and a psychologist listen to the greatest minds the world has to offer, and we try to understand what they're talking about. I'm Matt Browne; my co-host is Chris Kavanagh. How you doing today, Chris? You ready for the interview we have scheduled?

Yeah, I'm all geared up. I made the decision freely. I activated my agentic processes, and the molecules in the universe helped push me along that path, but I was the biological entity that offered that decision. But in another sense, aren't we all just protons and electrons and whatnot, vibrating to the cosmic energy.
So who can say, Matt?
Well, like a single-celled organism waving its little flagella,
drawn towards the light,
your arc draws you inevitably towards interviews and podcasts.
Correct.
I'm irreducibly complex.
I've often said that about you. You can't take me apart. There's nothing there. You take me apart and you're like, God must have made this. It's just so freaking astounding.

That's what many people have said. Well, this is the thing that podcasters say. But we really do have a great interview coming up. It was really fun to geek out with Kevin. But there was something I think you wanted to get off your chest before we got into it, Chris?

Yeah, I like to think of it more as the collective we, Matt. We, the podcast Decoding the Gurus, want to clarify one point. Because, you know, after doing this gig for a while, you can anticipate what's going to happen. And I'm just going to say that because this conversation,
one, it talks about things like determinism and agency and so on.
And we have made some joking, lighthearted comments before
about avoiding philosophical topics and whatnot.
So one, we're not entirely opposed to any discussion of that.
We just generally think that it's best done with people who have relevant expertise.

How would you put it? Maybe you have a strict no-philosophers rule on the podcast, except for the philosophers that we've had on. They're okay, but they're exceptions, right? And we're allowed on. You kind of snuck in under the radar. So why doesn't Kevin Mitchell fall afoul of the no-philosopher rule?

All right. Well, yeah,
it's a pretty loose rule, because it's basically a blanket ban on all philosophers except for the ones that we like, which appears to be a lot of them. So it was never completely enforced anyway. But yeah, Kevin Mitchell isn't a philosopher, is the thing.
No, he's a neurogeneticist.
He's a science guy.
And, you know, this book and his thinking does gravitate towards some philosophical questions.
You're interested in things like that, Chris, I know.
Am I?
In consciousness and free will and things like that.
Yeah, I mean, I am, but my general position is: you know, it's not that mysterious, don't make a big deal out of it, and state your opinions less confidently. That's the general position, but yeah.
Well, we try to get out of philosophy.
It's like the mafia.
They pull you back in.
That's right.
We can't.
We're like Neil deGrasse Tyson.
We want to say, oh, what's the point of philosophy and whatnot,
and then it ends up, oh, fuck it,
we have to talk about epistemics and that stuff all the time. So it's a nightmare. But, like, that was just, you know, one thing to flag up.

Yeah, okay, we get it. It's a bit hypocritical. But the other point is
that there are some bits during the conversation and i was aware of this at the time you'll hear
us kind of joke about it but kevin references sense-making and needing like a science of meaning. And now superficially,
that does sound like something that you would hear in the sense-making sphere or come from
Jordan Peterson's lips. I'm aware of that, but I just want to make clear to people that when people use words like meaning or evolution, it doesn't
always mean the same thing, right? So like, actually, Brett Weinstein says you should apply
an evolutionary lens to help understand what's happening in society. He's actually not wrong
that understanding that we are social primates and that evolution helps shape our psychology
can be a useful thing to consider.
But Brett's evolutionary lens is an insane conspiracy-theorist, hyper-adaptationist bullshit thing, right?
So when he says,
I'm applying an evolutionary lens,
he is, but like a batshit insane conspiracy theory one.
If someone else, some biological
evolutionary scientist who's actually not Brett Weinstein says that they apply an evolutionary
lens when they're looking at human society or whatever, it can be completely different. It can
be bullshit. You have to take them as they come, but we are humans limited by the words we use. So I think it's
worth noting just that in a lot of these occasions, people are using the same words, but with more or
less technical specificity. And Kevin is somebody that has quite clear technical definitions,
which we don't go through all of them, but they're covered in the book. So it doesn't mean you can't
disagree with them; still disagree with them all you want. But it is not based on the same bullshit reasoning or lack of biological knowledge that somebody like Brett Weinstein displays.

Yeah, and being very specific about it: if you take something like that word meaning, Kevin talks about that in relation to the way an organism processes sensory input.
And if you haven't read the book, it might, like you say, superficially sound a little
bit like a sensemaker type thing where meaning can be anything.
And suddenly you're talking about egregores and dragons of solitude in a JP universe.
But actually, when Kevin talks about it in the book in detail, he's very clear about
what it means.
He operationalizes
it and defines that word very well, which is basically information gathered by the sensors
that actually is informative to an organism in terms of what it wants, in scare quotes.
So it could be a single cell organism that is receiving photons and is interpreting that as
a light source.
And that might be meaningful to it because it wants to swim towards the light source
because it can photosynthesize there or something.
So there's nothing metaphysical about it.
Yeah, that's not meaning in the Jordan Peterson scientific-mysticism sense.
It's not like that.
So it is applying a continuum, yes, but firmly entrenching meaning in a biological process, in what distinguishes agents from non-agents; nothing immaterial. So we cover it in the episode, but
again, just Kevin is not in the sense-making realm, but hopefully people in our audience
should be able to determine that there is a difference. But I just know, because of the vocabulary, that there might be that objection raised. And if so, you know, you can go annoy Kevin on Twitter, or you can read his book or other lectures and see what he says. But yeah, that's all. I thought it useful to flag it up, because they are the same words that are being used, yeah, in a different
sense.

Okay, very good, Chris. Anything else you want to get off your chest on behalf of the podcast?

No, no, the podcast is silent, and we move monk-like in procession to enter the interview with Kevin Mitchell.

Kevin Mitchell.
I'm Matt Browne.
With me is Chris Kavanagh.
And with me is a third person.
It is Professor Kevin Mitchell.
Welcome, Kevin.
Oh, thanks very much.
It's a pleasure to be here.
So, Kevin, we've got you on because we've both read your book
and we've enjoyed it very much.
So for everyone else, though, Kevin's an Irish neurogeneticist. Is that correct, Kevin?

Sure, yeah. It's a little fudgy term, but yeah, why not? It's something like cognitive anthropologist, that kind of thing.

Yeah. So you're a professor of genetics at Trinity College Dublin, and your research focuses
on the mechanisms of neurodevelopment and how that contributes to cognition and behavior.
Now, the book that Chris and I both read and enjoyed is called Free Agents, How Evolution
Gave Us Free Will.
So to start us off, Kevin, I thought before we jump into free will and consciousness.
Wait, Matt, Matt.
What?
Before that... I'm not going to start on consciousness. I just have a non-book-related question for Kevin that I just wanted to clarify before we start. Kevin, I'm from Ireland as well, in case you didn't notice.

I didn't notice, yeah.

From the north part. But I've listened to a couple of your interviews, like with Robert Sapolsky and stuff as well. I'm really bad in general with Irish accents, but I couldn't work out where your accent is from, and I think it's like a distinctive, well-known Irish-style accent. So I was just curious which.

Oh, mine is from Dublin... mine is a mix. I live in Dublin, but I was actually born in the States.
So, yeah, yeah.
I lived in Philadelphia for, you know, till I was nine.
And then I moved to Dublin.
Had a proper good, proper Dublin accent.
But then moved in my 20s to California.
And I had to tone it way, way down.
Especially all the swearing just didn't go over well at all.
So then, yeah, I spent 10 years in California and then moved back to Dublin.
So I have a bang middle of the Atlantic weird accent that I can only apologize for.
No, I like that because this gives me more credibility as I detected something slightly different.
You know, I'm actually much better than I give myself credit for.
So that was just because I initially thought you were from Northern Ireland.
You know what, I get that a lot.
I had a taxi driver one time who flat out refused to believe
that I was not from Belfast.
I just told him I was not, but he didn't believe me. So, yep.

Yeah, I guess we're, um, dialectically kindred spirits.

I agree, I agree. I like your accent. That's beautiful.

Yes, melodious and soothing. I listened to it, and Chris too, for many hours, with the audiobook version of your book. And, Chris, if you ever write a book,
you better get someone else to read it.
Yeah, I don't think I'll narrate it.
Yes, that's right.
But you did a good job, Kevin.
Oh, thank you.
Yeah, that was the whole thing,
that reading the audiobook was,
I'd never done that before.
And man, it's tiring.
You really have to work on your
diction, and I was cursing myself for writing long sentences with eight clauses in them and no breath points.

So how do you do it? Do you just do it in shifts? Like, you talk for a couple of hours and have a break?

Yeah, that's it. Yeah, exactly. Full on, you know, in the studio. It was, um, five days recording with, you know, the sound guy and the producer in my ear, and she'd stop me every once in a while and say something like, a little bit of mouth noise there, darling, could you just do that again? That's lovely.

I think, Kevin, you might not be entirely cut out to be
a guru, though, because in our experience that's one thing gurus have no problem with. They can talk uninterrupted for 30, 40 minutes without any real need for prompting or a check. So, yeah, if you feel at all self-conscious about it, you need to train that out of you to become a proper guru.

All right. Well, look at you two.
I was there all business, business, business,
trying to get to the meat of the matter, and you have a nice chat.
I'll bring us back to business now.
So before we get into the book, which I want to give our listeners
a bit of a whirlwind tour through it,
but one of the bits that I really appreciated about it, in fact, I'm thinking of even prescribing it as a sort of supplementary text to my students, because I teach physiological psychology when I'm not too busy. And I think what psychologists often miss is that evolutionary grounding: this is how it all came to be. The brain and its functions are presented as a baroque mystery and a fascinating little mechanism or whatever.
But there isn't really much discussion about how it came to be the way it is.
So how do you see it?
Do you see an evolutionary point of view as absolutely crucial to understanding how the brain works?

I do. I mean, I would say that for everything in biology, you know, it's a sort of truism that nothing in biology makes sense except through that lens of evolution. And especially for something really complicated, like free will or human cognition. You know, I think we have probably the most complex form of cognition,
presumably on the planet.
And to try and understand it just in that most complicated form
from the get-go is, first of all,
it's just too complex to get all of the bits.
It's easier to build up an understanding
than to just approach a really complicated thing.
If you were interested in principles of aeronautics,
you wouldn't start with the space shuttle, right? You'd start with the Wright brothers or paper
airplanes or something like that, you know? So you can build up some concepts that we need to
have grounded to understand cognition in humans. That's the approach that I wanted to take. And
especially, I think that's true for things like agency. Because, I mean, partly because, and free will,
partly because it gets really tied up in humans, the debate does with questions of moral
responsibility and consciousness and ethics and all sorts of other things, which to me are
very important, interesting questions, but they're extra questions. You know, you can ask a more basic question. How does any organism do anything? And that was the
approach that I wanted to take. And again, you can build up and ground an understanding of concepts
that you're going to need, right? But they sound mysterious. Things like purpose and meaning and
value, right? They sound kind of mystical and unscientific, but you can get
a nice grounding of them, I think, in very simple organisms. And then you can approach, you know,
how that is manifested in humans. So, yeah, I'm a big proponent of the evolutionary approach. I
mean, if we want to understand how intelligence and cognition and agency work, then we can approach that
by looking at how they emerged,
you know, and how nature got there.
That seems to be a useful approach, yeah.
I might have a little bit,
a slight tangent even before you've got into it, Kevin,
but it seems a good time to ask it,
that, you know, you're active online
and will have come across
all the various bad evolutionary
psychology or the big focus on race and IQ online. And from that, I see often a legitimate reaction
about like skepticism around evolutionary psychology approaches and kind of wariness about anybody
who is, you know, applying an evolutionary lens to human psychology and human behavior. And for all the reasons that you just talked about and Matt mentioned, I think it's unfortunate that that's the case. But I'm wondering, just in your experience, or your thoughts on the issue: for, say, the layman, right, who doesn't know the ins and outs of evolutionary psychology or behavioral genetics, how do you go about distinguishing between reasonable people looking into the topic versus the kind of culture-war poison stuff?
God, so it's so difficult, and it's a sort of constant battle, to be honest. So on the evolutionary psychology front, I think you can think of it with the capital E, capital P, you know, the field that self-identifies as Evolutionary Psychology, which is mainly concerned with the idea that we humans evolved, you know, in our recent human lineages under certain environmental conditions that we are adapted to, and that our modern lifestyle, you know, conflicts with some of the things that we're adapted to. And, you know, there's some truth to that. I think there's some useful insights there. And there's also some very facile sort of just-so stories that can be reached for. And it becomes, you know, it can be very difficult to test, right? That's the problem, is that they tend
to be just a hypothesis. And sometimes they sound good, they make sense. For example,
the idea that we are predisposed, when we have access to fatty food, to eat as much of it as we can,
right? Because we didn't get meat that often. And, you know, it just makes good, it made good evolutionary sense
to do that. But that is a mismatch with our current environment because we have access to
high fat food all the time, as much as we would want, right? And then it leads to overeating and
sort of maladaptive outcomes. That's one that makes complete sense to me. There's lots of other
ones that don't, well, they sound plausible, but then,
you know, about, you know, say men being hunters and women being the nurturers and stuff like that.
And, you know, first of all, even if that was true, and even if it does make a kind of a mismatch
with current society, which is not a given, there's usually a follow-on from that, which is that it ought to be that way,
right? It's this mistake of taking an ought from an is and saying, okay, you know, clearly in ancient
times when we were cavemen, men were hunters and that's the natural way of doing things. And
therefore we should be like that now, but the therefore just doesn't follow, right? I mean,
the whole point of civilization is that we don't have to be
beholden to any of those kind of, you know, biological legacies, if we don't want to be, and if as a society we decide not to be. So, yeah. So I think on the evolutionary psychology front, what I'm proposing is small-case e and small-case p, right? It's just, like, an evolutionary,
it's just a part of doing biology, right?
You can't be doing biology
without an evolutionary approach, really.
That's my feeling anyway.
And so the evolutionary sweep that I'm taking here
is much, much longer, right?
I mean, it goes all the way back
to the beginning of life, frankly.
And so I'm not making kind of,
you know, sociocultural claims in this case.
Yeah, I think in some respects,
it's not universal
because I think there is plenty of good work
in evolutionary psychology
around, like, kind of cultural evolution.

Absolutely.

Sometimes it's speculative, but, like, Michael Tomasello's work on shared intentionality and that kind of thing, or comparative psychology studies with other primates, tend to be, I think, very good work in general. But I think
what you're kind of pointing out is that the more hyper-adaptationist, speculative claims that you hear, for instance, in, you know,
Brett Weinstein talking about the Nazis being a specific lineage of the German lineage, right?
And the evolutionary dynamics there, that tends to be stuff that you should be wary of.
Whereas a lot of the work, and I would say like in your book,
is more focused actually, as you say, not entirely on humans
and actually quite a lot on the evolutionary processes
in other species in general.
And yeah, so I think when people are actually talking about biology and neuroscience, and so on, it's just that it lends itself to these really simplistic kinds of hypotheses that can't be tested, and then it's just bad. It's bad evolution and it's bad psychology combined, basically.

Yeah, so that doesn't help.
I just wanted to underline another point you made there,
which is that there's the capital E, capital P evolutionary psychology,
which, as you said, I like some of it, and some of it is absolutely terrible.
But, I mean, more broadly, I struggle to talk about it with people
because they immediately think to that.
But often what I'm talking about is what you alluded to, which is that it's more of a framework which underlies everything that you do, and informs it, even if your focus is on something else. So, for instance, I do a lot of work in addiction, the various behavioral addictions. We see, for instance, that cross-culturally there's a bit of a universal phenomenon where, say, gambling and other risk-taking behaviors are elevated amongst young men. We don't have a good explanation for that, except that taking an evolutionary perspective actually provides at least some interesting avenues to explain it.

Yeah, no, and I think that's a great example, that young men being more prone to taking risks,
there's a very good evolutionary explanation for that. And it's a, I mean, it's not just
evolutionary, it's an ecological life history explanation of what it is that teenagers should
be doing. And, you know, it's not an aberration. I don't know if you know Sarah-Jayne Blakemore's work; she would say it's a perfectly adapted part of the life cycle.
They're doing exactly what they should be doing.
That's the whole, that's, you know, it's part of the ecology there.
Yeah, so I think that's right.
I did want to just pick up on the other part of the question, Chris, that you asked me about the behavioral genetics side, because that is another area where it's just super, super easy to take really simplistic readings of what's actually a complex and nuanced kind of field.
And so, for example, you could say, well, these people have shown that such and such a trait is heritable.
And what they mean by that is across a population that they studied, where they see some variation in a trait, say, imagine like height, right? So
there's some tall people and short people, there's a distribution. And you can ask how much of that
distribution is due to differences in genes versus something else. And that proportion is the
heritability. And it's a really technical concept in population genetics.
But of course, it sounds like just a colloquial term, right? How genetic something is. And it's
easy to get those things mixed up. And it's also easy to infer that because something is partly
heritable in some population, that it's therefore genetically fixed, that it's completely innate, and that it's immutable. And those things don't follow, actually, right? And it definitely doesn't follow that just
because you have some trait that's heritable in one population, and then you see a difference in
the trait between two populations, that that difference is due to genetics, right? There's no,
it just doesn't follow from that at all. The difference could be entirely environmental, because all you've done in your one population is survey the set of variation that actually exists there. Body mass index is a good example: it's quite heritable within a given population, and very strongly genetically driven in that sense, in how your sort of appetite control and energy balance set point is set. But if you look across populations, you see huge differences in average body mass index that are completely cultural and societal, right?

Yeah.

So, yeah, so the heritability thing is easily, easily misunderstood, and widely misunderstood, sometimes wantonly, I think. And yeah, then you get into all kinds of really unpleasant sort of stuff, especially in the online world.
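To pin down the technical sense Kevin is using here, heritability in population genetics is a ratio of variances. A standard textbook formulation (not spelled out in the episode, and omitting the interaction and covariance terms Kevin raises below) is:

```latex
% Phenotypic variance in a particular population splits, under
% simplifying assumptions (no gene-environment interaction or
% covariance), into genetic and environmental components:
V_P = V_G + V_E
% Broad-sense heritability is the genetic share of that variance:
H^2 = \frac{V_G}{V_P} = \frac{V_G}{V_G + V_E}
```

Note that this is a property of one population at one time, not of the trait itself, which is exactly the point made in the next exchange.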
Correct me if my rough understanding of this is wrong,
but if the sort of natural inclination is to think about a heritability component and a non-heritable component
and they're just like two bits, it could be a percentage,
it could be 20, 80, 50, 50, and they just sort of add up.
But you can imagine that because one component is modifiable,
like the environment, if you had an environment which is extremely homogenous,
like everyone grew up in a similar circumstance or whatever,
then almost by definition, any variation that you did observe
would be 100% genetic.
Yeah, that's exactly right.
I guess on top of that, again, correct me if I'm getting this wrong,
they also interact.
So as well as there being the
additive bit that it gets more complicated again absolutely so you're right on both counts so the
heritability is a proportion that applies to a given population studied at a given time right
and so if there's very little environmental variance in that population at that time
the heritability will just be very high because it's the only thing left making a
contribution. But that doesn't mean that environmental factors couldn't make a contribution.
You just haven't done the right comparison. Yeah. So that's absolutely true. And then,
sorry, the second thing that you said was spot on as well. And I've just forgotten.
The interaction.
Oh, the interaction. So yeah. So in order to make
these calculations, you know, if you're doing twin studies or family studies or population studies,
people generally just make some set of assumptions that make the mathematics possible. And one of the
assumptions is there's no interaction between genetics and environment. And we absolutely know
for many, many traits that that's not the case in humans, right? So, you know, for example,
for something like intelligence and educational attainment, there's totally a genetic interaction
with the environment because we share an environment and we share genes with our
parents who create the environment that we're in, unless you're adopted. And so there's a massive
sort of confound there in those kinds of studies.
That doesn't mean they're not trustworthy at all.
It just means that the heritability number that you settle on is, like, sort of arbitrary to a certain extent, right?
If it's 80% or 70%, it doesn't matter.
It's not a constant.
You haven't discovered the speed of light.
It's just a descriptor of what's happening in the population you're looking at at the time.
Yeah.
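Matt's point is easy to see in a few lines of simulation. A minimal sketch (my illustration, not from the episode; the trait model and variances are invented) in which a phenotype is just genes plus environment:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
genetic = rng.normal(0.0, 1.0, n)  # genetic contribution, variance 1

def estimated_heritability(env_sd: float) -> float:
    """Share of phenotypic variance attributable to genes in THIS
    population, given THIS much environmental variation."""
    environment = rng.normal(0.0, env_sd, n)
    phenotype = genetic + environment
    return np.var(genetic) / np.var(phenotype)

for env_sd in (1.0, 0.5, 0.1):
    print(f"environment sd = {env_sd}: h2 ~ {estimated_heritability(env_sd):.2f}")
# As the environment homogenizes, the estimate climbs toward 1.0:
# same genes, same trait, but the descriptor changes. It is a
# property of the population surveyed, not a constant of nature.
```

With an environmental spread of 1.0 the estimate comes out around 0.5; shrink the spread to 0.1 and it reads about 0.99, even though nothing about the biology has changed.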
Well, let's bring this back to, I guess, the beginning, kind of, of the story that you tell in your book: that actually it's quite helpful to think about, like, a single-celled organism, or a little worm wriggling about in the ground. Because, you know, we may not like to think about it, but we've still got an awful lot in common with those simple organisms in terms of the basic existential facts of being an organism in the world that needs to eat and defecate and hopefully find a mate and not get eaten. So maybe start us off with that journey that you took.
Yeah, so the reason why I did that was because, you know,
the question of free will starts or hangs on things like how could it be
that we could make a decision, right?
How could we be in control of our actions if we're just neural machines?
And, you know, it's just our brains doing things and they push us around.
We're just the vehicle of our brains, right?
Or even, you know, even worse, how could it be we're in charge of anything if our brains
are just physical objects made of atoms and subatomic particles and ultimately quantum
fields and whatever.
And physics says all of that is just going to evolve as a system in a deterministic fashion,
right?
So it's just the laws of physics are going to determine where the atoms go when they
bounce off each other or whatever, right?
So where are you?
You just disappear in that.
There's no, you know, if that's true, there's no doing.
There's just lots of happenings.
There's no doings in the universe. And it turns out that that problem doesn't just apply to humans. It applies to anything, right? It applies to any living organism, this question of agency: how could any organism be said to do something, to act in the world, if it's all just physics that's deterministically driven, you know, by these low-level laws?
And that's, you know, why I wanted to start there. And also for this other reason that I
mentioned earlier, that I wanted to disentangle the question of agency from these, what I take
to be secondary questions of moral responsibility and consciousness and things that are, you know,
sort of uniquely human and even sociocultural in some ways. So I wanted to get at the more basic
biology. And I think you can, you know, in order to do that, it basically took me back to the
question of what does it even mean to be a living thing, right? What's the difference between living
things and non-living things? And for me, one of the big differences, living things act, they do
things, right? And non-living things don't. And it's funny, because it's so fundamental, that property of agency, and yet, you know, if you open a biology textbook and you look at the list of the characteristics of life, agency won't be one of them. It won't be in the index. It's just not a thing that's talked about as a central pillar of what it means to be living.

Can I butt in? Can I butt in? Oh, sorry, I don't want to
ruin your flow. But, I mean, if I've understood you right, it's partly like a levels-of-description thing: what's the best level at which to understand a phenomenon? So we could be trying to understand how a star works, or how chemical interactions work, and there is a level of description there in terms of things happening. But if you want to shift to, say, okay, well, why is that tree growing leaves, or why is the worm heading away from the light, then you're not going to find your answers at that lower level of description.

Yes, but it's deeper than that, actually. Because you could say that, right, and then someone would say,
so the physicist Sean Carroll, for example, has this idea that you can have these effective theories at different levels, which are just useful, but in a sense, they're almost useful
fictions because all the low levels, all the real causation is still happening at the lowest levels.
Right. So you could say your tree is growing here and you could give some reasons for it, but actually someone else could say, no, look,
it's just the way the atoms are bouncing off each other. Your level of causality is kind of an
illusion. It's convenient, but it's not where the real business is happening. So what I want to do
is something even deeper than that, right? What I want to say is: how could it be that the level of describing the tree and what it's doing is the right level, right? That all the causation is not at the bottom, actually. How could that be? How could life have elbowed its way out of basic physics and gotten some wiggle room to become living things with organization, where there's some macroscopic kind of causation,
not just microscopic physics. So, yeah. So, the evolutionary story gets kind of metaphysical
pretty quickly, actually, because it has to dig into what do we even mean by causation?
How could it be that there can be some causal slack in the system
that enables the way it's organized to have any causal power in determining what happens, right? So, yeah, it gets deep pretty quickly.

I might push you forward more levels than you're intending, Kevin. But one of the questions that I think comes up with this, and when I was listening to the book, it was something I was waiting to hear as well. And I think you explained it well, but you can probably do it better in person. So whenever you're discussing the kind of organization of the brain and neural activity,
and why it makes sense, like you're talking about, that it isn't just an epiphenomenon to talk about the kind of collective upper levels and the kind of assessing of patterns and this kind of thing.
going from the bottom up, but also coming back down, right?
And being able to work from the top down.
But I think for some people, that will be the part
which is like kind of difficult because
how can it be going in that direction if everything relies on the fundamental processes? So could you elucidate a bit about, you know, how patterns... you described it very beautifully, so I'm sorry to make you try again.

No, no, you've absolutely hit on the sort of major problem that people have with this view of top-down causation, because it sounds like it couldn't possibly be in play if, indeed, all of the causation is already exhausted at the lower levels.
And so, I mean, the basic starting point here is to say, well, all the causation is not exhausted at the lower levels. Like if we look to physics, if we look to quantum physics and even classical physics, this idea that it's deterministic is just not supported.
There's just no good evidence for the idea that classical physics even is deterministic in the sense that the state of a system right now, plus the laws of physics,
completely determine the state of the system at some subsequent time point, right? And a time
point after that, and after that, and forever and ever and ever and ever, right? That view just
doesn't hold and isn't really supported by physics, even though many people take it as sort of proven.
I could be one of those people. Well, are you referring to the quantum dynamics, like the uncertainty there, or is it something else?

Sure, yeah. So there is the quantum uncertainty, right? So it is,
in the first instance literally physically impossible for a system to exist in a state
with infinite precision at any given time
point, right? So the idea that you just have a system, you know, it's just given with infinite precision, the description of the system, and then it evolves from there, just isn't true. It just can't hold in the universe, right? And that's from the Heisenberg uncertainty principle and other things. But also there's the principle, which is simply the case, that information is physical, right? It's not just something immaterial floating around. It has to be instantiated in some physical form. And the amount of information that can be held in any finite bit of space is finite, right? You just can't pack infinite information into a finite bit of space.
So the idea that at the Big Bang, it already contained all of the information of everything that's going to happen for the rest of time, including me saying this sentence and you hearing
it is, I mean, it's ludicrous on its face, but it's also just sort of mathematically and physically
impossible, right? There's good physics reasons why that couldn't be the case. And so what you have is systems that have a little bit of
fuzziness around the edges, as it were. And for lots of things like the solar system,
that little bit of imprecision doesn't matter because they're very linear systems. They're
very isolated. And of course, we can make these great predictions and we can land little rovers on comets and all kinds of stuff. It's amazing,
right? But most systems in the world are not like that. Most systems are actually chaotic and they
have lots of nonlinear feedback, which means that tiny imprecisions or bits of indefiniteness in the
parameters at any point over some time evolution lead to the
future being genuinely open right there's lots of ways the system could evolve depending on how
these sort of random digits take on values as time progresses. Yeah. So that's my non-physicist understanding of some of that physics, and it's contested, right? I don't want to just say this is the way that it is, but it's a way that, I mean, it fits better with my intuitions. You know, certainly the idea that everything we're saying right now was prefigured and predetermined from the Big Bang just sounds silly, right, and absurd. And it doesn't have to be that way, and physics doesn't say that it is, or prove that it was.

Yeah. Chris, I do want to go first here. Look, I mean, part of it
obviously is totally uncontroversial. As you said, you can't compress all the information into the Big Bang. There's uncertainty and randomness being injected into even physical systems all the time, and there is room for emergent behavior, complex systems that are interacting with many moving parts. And my go-to when imagining this is always thinking about the weather, and thinking about hurricanes and things. And I suppose, to put words in your mouth, see if this is the kind of position you would hold:
if you're looking to understand why, say, a particular molecule of air
or even a leaf blowing around the wind is doing that,
then you can and maybe should be pointing to the hurricane,
which is, of course, an emergent complex system,
which is absolutely not predictable if you roll back the clock a thousand years or so.
And that is legitimate.
And sometimes, you know, you'd argue that's the correct attribution of causality for that phenomenon.
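The sensitivity both of them are gesturing at is easy to demonstrate. A toy sketch (my illustration, not from the book; the logistic map stands in for weather-like nonlinear feedback): two starting states differing by one part in a billion become unrelated within about forty iterations.

```python
# Logistic map, x_{t+1} = r * x_t * (1 - x_t), a standard chaotic
# system at r = 4: nonlinear feedback amplifies tiny imprecisions.
r = 4.0
a, b = 0.3, 0.3 + 1e-9  # initial states differ by one part in a billion

for step in range(1, 51):
    a = r * a * (1 - a)
    b = r * b * (1 - b)
    if step % 10 == 0:
        print(f"step {step:2d}: |difference| = {abs(a - b):.9f}")
# The gap roughly doubles each step, so any finite-precision
# description of the initial state underdetermines the long run.
```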
Yeah.
So let me pick up first on one thing you said about, you know, the idea that there's randomness constantly being injected into the system.
I think that's probably, that's not the way I have come to think about it.
Because when you think about it that way, it sounds like the randomness is this positive thing that is like coming from nowhere and just appearing in the universe, right?
And that's problematic.
I think it's the reason why a lot of people kind of object to the idea.
It's like, where is this coming from? I'd like to think of it just as an indefiniteness in the future.
It's a negative. It's an absence of full determination. So the present state
underdetermines what happens in the future based on just the low-level laws of physics. It could
go lots of different ways, right? And then what becomes interesting is to ask, okay, well, what else could influence the way that a system evolves through time?
And what else? The way that it's organized, right? I mean, you referred to a tornado; a self-organizing tornado, or a whirlpool, right, is a collective kind of phenomenon where what all of
the water molecules or air molecules are doing constrains what each of them is doing, right? So they collectively make up this dynamic pattern that in turn constrains what all the parts do.
And again, it's a sort of a non-problematic, non-controversial view, I hope, right?
And what you can do is take that a little step further and say, okay, well, what about,
what if the way a system is organized had some function, right?
And this gets back to these questions of purpose and meaning and value. So if we think about a
living system as like a storm in a bubble, right? It's this set of chemical reactions that are all
reinforcing each other collectively, right? So they generate this regime of constraints
that is made by all the bits, but also constrains
all the bits to keep that pattern going through time.
Well, then, right, if a living thing, you know, the ones that are good at it persist
and the ones that are bad at it don't, right?
So it's a kind of a tautology, but it's the basis for all selection.
So we could have ones that are configured one way or configured another way,
and the ones that are configured in a way that helps them persist will persist through time.
And that kind of configuration can come to take on the form of functional systems that, say,
allow a bacterium to detect things in its environment and embody a control policy that says: if you detect a food molecule, you should go towards it, and if you detect a high pH, you should move away from it, right?
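That control policy can be written down almost literally. A minimal sketch (my illustration; the thresholds and signal names are invented, and real bacterial chemotaxis is a stochastic run-and-tumble process rather than clean if-statements):

```python
from dataclasses import dataclass

@dataclass
class Senses:
    food_gradient: float  # positive: food concentration increasing
    ph: float             # acidity/alkalinity of the surroundings

def control_policy(s: Senses) -> str:
    """Toy policy: action is driven by what the signal MEANS for the
    cell's persistence (good/bad), not by the signal's physical details."""
    if s.ph > 8.5:              # hypothetical harmful threshold
        return "tumble"         # reorient away from the bad region
    if s.food_gradient > 0:
        return "run"            # keep swimming up the food gradient
    return "tumble"             # otherwise sample a new direction

print(control_policy(Senses(food_gradient=0.2, ph=7.0)))  # -> run
```

The design point, on Kevin's account, is that this mapping from signal to action is embodied in the organization of the cell itself; that is where the meaning lives.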
So then we've got some functionality embodied in the organization of the system that can do work
in the sense of determining what happens, right? Shaping what happens in an adaptive fashion relative to this purpose
of existing, right? So, we've got purpose in a non-mystical, non-cosmic kind of way, just
this sort of tautological way. We've got value because it's good to go one way or another
relative to that purpose. And then we've also got meaning. This is a new kind of causation
in the world that non-living things don't show, is that the bacterium is sensitive to information.
It's buffered. It's insulated from physical and chemical causes outside it by its cell membrane
and its cell wall, but it's sensitive to information about what's out in the world,
and that information has meaning. And it's the meaning that's important for what happens. And ultimately, after billions of years of evolution, you have the same
things at play in us, where it's meaningful states of neural activity that are the important thing.
Yes, they have to be instantiated in some physical pattern, but the details of the pattern are
actually arbitrary. That's not where the causal sensitivity lies. It's what the patterns mean that's important. And that's why
I think anchoring that view in these simple systems, I hope at least, lets us get a grip
on these sort of slippery concepts, a bit nebulous-sounding, and lets us operationalize them in a
firmer way. And then we can go from there and
scaffold our understanding of more complex systems on the back of that. I want to dig a bit more into
the biology and the cognition, Kevin, but unfortunately, there is one philosophy question that is still floating. And I just want to check if I understood correctly. So one thing that I fairly often encounter when debating with people about determinism is this notion that if you wind the clock back and you set everything up, you know, every atom in the same way, and replayed it, it would all go the same way, right? And are you saying something different would happen, because it can't right now? My answer to that has typically been that it relies on assuming that the universe is, as you described, like there's a ribbon going, and we know that it would all go the same way. And my argument is just: we don't know that. We don't know exactly, you know, the way that it is. But there are different opinions amongst people about the position. And on that, I detect that you are saying something similar, that we
cannot just assume that the future is completely set and use that as the
foundational premise to argue. But on that as well, if it were the case that the universe was
deterministic in that manner, I'm just curious, does your model about how cognition functions and kind of top-down causality, does it completely rely on the universe not being deterministic in that manner?
Or could it be incorporated into a model with that?
Well, yeah.
So it's a great question.
It's the question, really.
So in a sense, you could say, let's take the universe as it is right now with all these
sort of cognitive systems moving around in it, like you and me, and say, okay, now let's assume
it's deterministic. Could you still do anything, right? Would what happens still be up to you?
And to me, that thought experiment suffers from the problem that it just assumes the existence of agents in a universe like
that, right? First, you have to show me how you would get agents who are trying to do things
and expending all this energy, apparently trying to take action to make things happen that they
want to happen when all of the causation is actually still deterministic. And what that means is that it's reductionistic, right?
It means it's not just that it's determined,
it's that all the causation is still happening at the level of atoms
and subatomic particles and quantum fields and so on, right?
So for me, the big question there would be,
well, why would you ever get living organisms in a universe like that?
And I'm not convinced that they would ever emerge.
And so, you know, the main point I would make is that physics just doesn't say that determinism is true.
So why does everyone start with that premise in the free will debate?
It just drives me bananas.
And you see people, they'll just accept determinism. Even people who are arguing, they'll accept determinism. And then, so Greg Caruso and Daniel Dennett, for example, had a book out recently called Just Deserts,
which is very interesting to read. It's mainly about moral responsibility, really, but they both
accept determinism, this single timeline. And Greg Caruso ends up just being
a free will skeptic, which is really denying the phenomenology of our everyday existence completely,
that we do make choices and have some control over our actions. And then Dennett ends up being
a compatibilist, which is basically the view that despite the universe being deterministic,
we can still
assign moral responsibility to agents for their actions because the causation comes from within
them. So, Chris, that's the view that I think is quite common, actually. And it's the one that
you're alluding to where you can make this sort of construct where you say, okay, even though
there's no real choice in the universe, right?
There's only one possibility that will happen. You're still part of the causal influence there.
The way that you're configured is part of the causal influence of what's going to happen.
Therefore, we can still blame you, right? That's Dennett's view. And I'm trying to be
charitable about it, but I find it incoherent, right?
I just don't think it could possibly hold.
So for me, the rollback the tape experiment, first of all, I wouldn't start from that premise, right?
If you start from determinism, then you have to ask, well, where could the freedom come from?
But if you start just by accepting the evidence for indeterminacy and an open future, then
you have to ask a different question.
It really interestingly flips the script.
Then you have to ask, well, shit, now all these kinds of things could happen.
How does the organism make happen this one thing that it wants to happen?
Where's the control coming from?
Right?
And that's where I think the, ironically, in a sense, the answer, right, the power to
control what happens comes from the
indeterminacy itself, not directly. It allows macroscopic causation to emerge, to evolve,
right? So it's a necessary condition. And, you know, some people would say,
look, either the universe is deterministic physically, in which case I'm not in charge
of anything, or it's indeterministic, in which case my decisions are just driven by randomness
and I'm not in charge of anything. And there's a third way, which is what I'm arguing for,
which is, no, the indeterminacy just creates this causal slack. It allows macroscopic organizations
to emerge. And because we have this selective force through time, or process through time,
then what you will get is macroscopic organizations that do things, that allow organisms to do things
in the world and become loci of causal power themselves.
I've got just one very quick follow-up to that, which is: so in your model, Kevin, is it that the fact that there are agents in the universe, as distinct from objects and non-agentic things, is the key evidence indicating that the universe cannot be purely deterministic, because without that, agents couldn't function?
And if that's the case, I can imagine that one of the objections
that would arise would be, is that putting too much emphasis
on the fact that there are agentic beings on our, you know,
random little rock and we don't have any evidence yet for anywhere else in the universe.
So the fact that we exist still needs to be explained,
but there's a big universe and agents are only,
as far as we can see, on like our planet.
So, yeah, I guess I'm not reasoning very well,
but I'm just wondering, you know, whether it's putting a lot of work on the agents.

It is. Well, so what I don't want to do is suggest that... I wouldn't say, actually, that my theory, my way of thinking about this, rests on that. What I would say is: for the compatibilists, it's a thing that they need to explain, right? I'm saying that that's part of what happens, is that agency emerges because it can, right? Because there's
this little bit of causal slack that allows life to wiggle free and to become causally efficacious
as entities, right? Whereas for compatibilists, I think that's a thing that they just assume is
the case, right? Right now that these agents just exist. All I'm saying is that, wait a minute,
you need to explain how that would happen in a fully deterministic universe. So yeah. And you
know what? Let me just say one other thing about the control issue and the idea of winding back
the clock. So first of all, the question is whether you
would ever do otherwise. And I would say, actually, what the organism has to do is prevent
otherwise from happening, right? If it wasn't trying to do something, all sorts of stuff could
happen, right? So it has to exercise some control to make happen what it wants to happen, but only
within certain limits, right? It doesn't have to worry about all the microscopic details of all the atoms and everything like that.
It just has to achieve its goals, whatever it wants to happen at a macroscopic level.
So, again, it's not concerned with trying to put every atom in its place.
It just has to do what it wants, like take a drink of water here,
which I will.
Well, I think I'm going to take the opportunity
to pull you both back from the brink of an abyss of philosophy.
Because, yeah, I mean, I don't know.
It's all above my pay grade.
I don't really understand.
I sometimes wonder whether or not, like there are a lot of,
just like in Hitchhiker's Guide to the Galaxy,
there are a lot of questions that seem to be well-formed questions,
which seem like they should have a definite answer,
but they're kind of just words that people use.
And just because the question, do I have free will,
sort of makes sense in English and feels important to me, I sometimes wonder whether, like, it's just not a very good question.
Well, it certainly has some hidden layers to it, right?
And when you say, do I have free will, people would often say,
well, what do you mean by free?
And what do you mean by will?
And what they should be asking is, what do you mean by I?
Right? It's like, what is this self? And that's not a Jordan Peterson version.

I was also going to say, what do you mean by have?
Well, I mean, I think, just in case anyone thinks Kevin is transforming into Jordan Peterson, what you're, um, alluding to is, I guess, a physicalist perspective, where there is no ghost in the machine. There isn't a little homunculus you can identify in your brain that is you. Rather, we're a big messy thing. Yes, as you emphasize, we're organisms that do, where our boundaries, our skin or the cell membranes, are important, but still we're a fuzzy construct, and localizing the I anywhere isn't really possible from that point of view.
Yeah, exactly.
And I think, you know, there's sort of two extreme views of the self.
One of them is this dualistic kind of theological view of a,
it's not even just necessarily
theological, but of an immaterial kind of thing, right?
As some sort of substance or object that is attached to your body somehow or inhabiting
your body that goes to the machine, as you say.
And there's no, you know, that's just not a useful kind of construct and there's no
evidence for that.
But the other extreme, and I was listening to your recent interview with Sam
Harris just this morning, actually, where, you know, someone like Sam Harris or many others would
say, well, actually, the self is just an illusion. And it's just, you know, when you look to find
yourself in your experiences, you find nothing but the experiences. And therefore, there's a
kind of a follow on from that, which is, therefore,
you can't have free will because you don't exist as a self. You're not the kind of thing that could
have free will. And to me, that's just a mistake because you can have a self that's not localizable.
You can have a self that is the continuity of pattern through time that still has causal power as a bundle of memories and
attitudes and dispositions and relationships and all the rest of that, right? That is a
really efficacious thing without being a separable, isolatable object. And that's fine,
right? It doesn't have to be an object. There's no reason for that. And it can still have existence. So that's where I would fall down on that.

Yeah, now we're going to go down a whole other philosophical rabbit hole. Selves can be systems. We should get that on t-shirts: systems can be selves.

I mean, that's exactly like the whole thing with living organisms. That's what they are: patterns that persist through time.
And it's the persistence through time that is the essence of selfhood, right?
It's not a thing that you can isolate either within an organism or in an instant of time.
It just doesn't apply.
The concept just doesn't apply to an instant of time.
It applies to the continuity through time.
That's what makes
a self a self. So, Kevin, some people, I think, in hearing that, especially people that are,
you know, somewhat sympathetic to Sam Harris's kind of position about the self, will point to
things like the famous Libet experiments or split-brain research, right, indicating, at least in the popular presentation, that although we think that we are the authors of our actions consciously, a lot of the work, so to speak, the mental work, is going on under the hood, unconsciously, and there's a kind of post hoc story that is appearing in the mind afterwards.
And I kind of already know, but I'm wondering if for our listeners, you could explain why that isn't the case, that those experiments have not dealt with the kind of position that we are just passengers who believe that we're at the steering wheel,
but it's mostly unconscious motives pushing us along.
Well, so first of all, there's just not one answer, right? There just doesn't have to be
the same kind of thing going on all the time. It may absolutely be the case that in certain
circumstances, we're kind of on autopilot and we're not thinking about what we're doing very
much. And there are sort of subconscious influences that are informing and, or even if you want to say
driving what we do. And so, you know, for example, the Libet experiments where you just have to kind
of lift your hand every once in a while, and there's this, um, so-called readiness potential that's detected in brain waves if you do an EEG while people are doing that. That suggested that the brain was making the decision before the person, right, and was only telling the person afterwards. Now, first of all, there's a whole bunch of technical reasons why that doesn't look like the right interpretation. But secondly, even if it were, like, who cares?
That scenario, why would, like, they literally tell people do something on a whim whenever the
urge takes you. So imagine I'm the subject, right? I've made a decision. I made a conscious decision
to take part in this experiment. And now I'm sitting there watching a clock. And just every
once in a while, they tell me to lift my hand. They have told me that I should lift my hand on a whim.
So occasionally I do.
And if you let your brain sort of decide that, fine, that's a good strategy there.
You've got no other reason to do it.
So a lot of the decisions that we make, I think, fall into this range.
Either they're completely habitual because we've been through this sort of scenario before,
we've done all this learning, we tried things out, we know what's a good thing to do here,
we don't have to think about it. All that work is pre-done. We've offloaded it to these sort
of automatic systems, which is great, super adaptive. It's fast and it makes use of all
the learning that we've done. And then there's sort of ones where we don't care,
we don't really know what to do, we're sort of indifferent, but we should do something,
right? That's the important thing is that we shouldn't just dither or vacillate forever,
we should just get on with things, right? And a lot of the decisions that we make are like that,
where we control them to a certain extent, but, you know, we don't care about the details.
And then other cases where we're really kind of at a loss,
it's a really novel scenario,
something where we really don't know what to do,
we have to deliberate,
or it's a really important decision
and we have to really think about it.
And then we do, right?
So we take this sort of conscious deliberative control.
So generally speaking,
I think that the evidence from neuroscience
where people extrapolate from one particular setup and say, look, see, it's always like this.
We never have control.
Well, that just doesn't follow, right?
It's just a non sequitur if you allow that we can be doing different kinds of cognition that has a different level of conscious involvement in different scenarios.
Now, Kevin, I've got to dwell on those readiness potentials
a little bit, and that's purely because it gives me an opportunity
to remind people that I worked on those during my PhD, in fact.
I did my PhD in an EEG lab, and I was mainly interested
in the signal processing aspects of it.
But I picked those readiness potentials, lateralized readiness potentials
and that paradigm where people could elect to push a button basically whenever they felt like it.
And just to let people know the methodology there, the idea is that people press the button whenever they feel like it.
We know exactly when they press the button.
And then we can look at the event-related potential, as they call it, going back in time by a couple of seconds. And what we see is this slow depolarization across the cortex when that happens. That methodology you mentioned was one of the things that I thought was really cool as a first-year PhD student, and it was only a few years later that I started to think about it a bit more carefully.
And putting aside a lot of methodological quibbles, for instance the subjectivity of people just nominating when it was they decided to press the button, and I'm sure there are other issues that you know about there, Kevin, the thing that occurred to me with that is that it seems like an 'of course'. For you to have a thought, or to form an intention, to generate a little motor program to do anything or intend to do anything, then, unless you believe that there is a spirit entering the brain and making that happen, there has to be something physically going on in the brain for that intention to arise. And so when I thought about it, I thought, well, it would almost be surprising if you didn't detect some kind of activity in the brain before people form the conscious awareness of planning to do something. Is that kind of how you see it?
Yeah, it is. I mean, like I said, and like you
just said, there are some other technical quibbles about the interpretation of this thing where the
readiness potential starts to ramp up, you know, maybe 350 milliseconds before the movement. And
yet people say they only became conscious of the
urge to move 50 milliseconds before or something like that. And so part of the technical thing is
that when you time lock to them actually having done something, you see this gradual ramping up.
But if you time lock to some arbitrary signal, like a sound or something, then what you see is that sometimes the activity goes up and goes down again.
And sometimes, you know, up and down, up and down.
So the start of the activity going up is not, in fact, a commitment to move that your brain has made.
So anyway, that's a technical thing on that front.
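For readers who want to see what that time-locking point amounts to in practice, here is a minimal sketch, using synthetic data and invented numbers rather than any real Libet-style recording, of why a response-locked average preserves a slow pre-movement ramp while re-aligning the same trials to arbitrary moments washes it out:

```python
# A minimal sketch of the two averaging schemes discussed above; sampling
# rate, trial counts, and amplitudes are all invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
fs = 250                                   # assumed sampling rate in Hz
n_trials, n_samp = 40, int(2.0 * fs)       # 2-second epochs ending at the press

# Each simulated trial is noise plus a consistent slow ramp in the
# ~350 ms before the button press (a toy 'readiness potential').
trials = rng.normal(0.0, 1.0, (n_trials, n_samp))
ramp = np.linspace(0.0, 2.0, int(0.35 * fs))
trials[:, -ramp.size:] += ramp

# Response-locked: epochs aligned to the press, so the ramp survives averaging.
response_locked = trials.mean(axis=0)

# 'Cue-locked': the same trials re-aligned to arbitrary moments, so anything
# not tied to that alignment point averages away.
shifts = rng.integers(0, n_samp, n_trials)
cue_locked = np.mean([np.roll(t, s) for t, s in zip(trials, shifts)], axis=0)

print(f"pre-press mean, response-locked:  {response_locked[-25:].mean():.2f}")
print(f"pre-press mean, arbitrary-locked: {cue_locked[-25:].mean():.2f}")
```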
But yeah, more generally, like, yeah, there's a whole field of, you know, many people saying, look, it's not you doing things.
We can go in in neuroscience and we can see it's just this part of your brain is active or that part.
Or it's just, you know, we can go in in animals and we can drive the activity of these different circuits.
And it really looks like it's just this big neural machine.
And it is driving activity.
You know, what does it matter what the states mean to you cognitively?
That's not where the causal power actually is.
And that, for me, is a mistake, I think,
because you can show, in fact,
that it's the patterns that are important.
It's not the low-level details.
And that this attempt to reduce cognition
to particular neural activities
is a mistake on two levels.
First of all, it's not the right level of sensitivity because the neural details are
somewhat incidental.
They're somewhat arbitrary.
It's the meaning of the pattern that's important.
That's what the system's causally sensitive to.
And secondly, it's very reductionist in the sense of isolating a particular circuit and
saying, look, it's this circuit that made you do that, right?
Whereas actually, the way the brain works is this much more holistic kind of thing where all the different parts are talking to each other.
They've all got different jobs.
They're monitoring different things.
They're interested in different bits of information over nested timescales. They're providing context to each other and data to each other.
And they're all trying to settle into a big kind of consensus for what to do.
And that is not this sort of mechanistic system.
And it's not even really an algorithmic system.
It's a big dynamical system trying to satisfy lots of constraints and optimize over many,
many variables at once for the reasons
of the organism and on the basis of the things that organisms are thinking about. So, you know,
once you're at that stage, you might as well say, well, that's the organism deciding what to do for
its reasons, right? Like, what else would you want?
There's a nice illustration that you give in the book, one that a lot of people take up, and it is a very good illustration of the way our cognition and our attention can cause blind spots, right? The famous experiment where you ask people to count the basketball passes and a man in a gorilla suit walks through the scene. Most people don't notice it
because their attention is focused on the task.
And you could easily see that if you ask people to look for a gorilla suit,
that they would 100% pick it up, right?
So that struck me as a good example: taking from that experiment the idea that our minds are completely driven by
unconscious mechanisms is a wrong extrapolation. And similarly, just in all of the things that you
were both saying there, I keep coming back to this thing that kind of raised it with Sam as well,
that like habits and unconscious things and so on, to me, these all seem like components of self.
Like, I guess this is the problem that self can mean so many different things,
but I am my habits.
I am my, you know, cognitive heuristics and that kind of thing.
So saying that basically it has to be only the very, very high-level, top-down conscious reflection which is the self is kind of creating an artificial divide. Because if it isn't you, who is it?
Absolutely right. Robert Sapolsky has a new book out. He's a neuroscientist from Stanford, and the book is called Determined, which is making the complete opposite case to the one that I make. And it's very, very similar to Sam Harris, actually. And it's sort of ironic in that, you know, Robert is a reductionist and a behaviorist, I think, but he also has this sort of dualist intuition, right? Where it's like, if it's not this disembodied thing, this self, doing it, if these processes can be shown to have any physical, biological instantiation, then by definition they're not you and it's not you doing it; it's just the biology doing it, right? And it's setting the bar so high that it rules itself out by definition, right? It's just not a useful way of
arguing about it because it says the only way you could have free will is by magic.
And here we're showing there's no magic, therefore you don't have free will.
It's just sort of a circular, facile argument, frankly. And, you know, instead of that,
we've got this fascinating biology that we can get into, which can say,
well, how could it be that all of this machinery working, all of these neural bits and pieces
and so on can actually represent things that we're thinking about, where the thinking has
some causal power in the system without that being magic.
That, for me, was, I guess, the project that I was more interested in.
And to me, I think you can come to a naturalistic framework that allows you to think about those
things without appealing to any kind of mysticism or descending into this mechanistic reductive
determinism either. I think there's a middle ground that we can inhabit.
Yeah, yeah. It's interesting, these conceptual divides. And I've come to realize this in talking
to people who are clearly very smart, very well-educated, sometimes in a different field,
usually in a different field, and maybe that's the problem. But I think where the three of us share a lot of common ground is that we operate from a heuristic which is first of all materialist and physicalist, but that acknowledges that there are emergent properties, that there are systems of interacting agents, and that it can be meaningful to talk seriously about things that are happening at that level.
And they are, in a sense, you know, they're virtual things in a sense, right?
And I'm thinking here of like representations and information.
They're not magical ideas.
Like you said earlier on, that information has to be represented somewhere physically, but it isn't just a pattern by itself that has a high degree of Shannon entropy.
There's no physical discrimination between total random noise
and a bitmap of a cute cat.
The difference is in the information, but you can't point in a reductionist
sense to where that lives. So yeah, is that the difference between us and them?
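Matt's noise-versus-cat point can be made concrete in a few lines. Here is a minimal sketch, assuming nothing beyond the standard library; the point is that a byte-level entropy statistic alone cannot tell meaningful data from noise:

```python
# Byte-level Shannon entropy: random noise and a well-compressed image both
# score near the 8 bits/byte maximum, so the statistic alone can't tell them
# apart; the 'cat-ness' lives in the interpretation, not in the bytes.
import math
import os
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Empirical entropy of the byte distribution, in bits per byte."""
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

noise = os.urandom(100_000)                    # pure random noise
print(f"random noise: {shannon_entropy(noise):.3f} bits/byte")   # ~8.0
# Reading the bytes of a compressed cat photo (e.g. a hypothetical
# 'cat.jpg') would print a near-identical number.
```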
So first of all, yes. I think what's interesting when you start having these discussions
is that you can be looking at the same evidence as somebody else and coming to a different
conclusion. And that usually means you're bringing different things to the table, right? You've got some sort of outlook that may
be implicit or tacit, and that, you know, is often worth kind of scratching into to figure out
what that is. The other thing, you know, with what we're just talking about with information is that
we have a really good science of information, right? We have information technology. We have whole fields of industry built on it.
We don't have a good science of meaning.
I mean, there are fields of semantics and linguistics and psychology and so on.
I'm sorry to interrupt, but have you read Maps of Meaning by George Newton?
That seminal work, it might have passed you by, but I just want to make sure.
It's on my list. It's on my long list. But yeah, I mean, the thing about meaning is it's hard to localize, it's hard to quantify, it's inherently subjective, and it's interpretive, right? It's not in the code; it's not just encompassed in the signal or the message. It's in the interpretation of the message, right?
And so it's inherently a systems thing.
It's just not something that you can localize and point to and quantify and so on.
So it becomes more difficult to have a good science of it.
But that doesn't mean it's impossible.
And if we're not doing that, if we're not thinking about meaning, we're not thinking about biology in the right way, because living things are sense-making, right? They're extracting meaning. They're running on meaning. That's how they get around in the world; it's what they need, right? They need meaningful information to get around in the world.
So yeah, again, it gets back to this idea that there are some very fundamental principles
and concepts that I think are key in biology
that have been kind of overlooked,
maybe in our physics envy,
to get to really mechanistic explanations.
But I think they have a rightful place
and I think we can have a science of them
that isn't woo, Deepak Chopra kind of stuff. I've got a
question about that, Kevin, and the way that I might ask it is to ask you for an illustration. So, you know, for our listeners, they will have heard many of the gurus talk about the importance of meaning, the importance of sense-making, in a different context. But I'm wondering if you could give an illustration. So when you were talking about, you know, the human brain and neural processes, and how it is that you could have the kind of pattern from collections be the signal, or something that is being interpreted at a higher level, where the individual components are not the core thing, it's more the gradient across them, right? I'm going to do a bad job of explaining that, but in terms of having a higher-level, like, semantic or associative pattern, could you give an illustration of how that could apply in the way human cognition works, how it could feed down so that it's more important than the individual, yeah, like, neurons firing?
Sure. I mean, I guess a sort of commonplace experience would be
that, you know, say we're reading some text and the text could be written in one font or a different
font or in italics or in bold or all in capitals or whatever, and it would all mean the same,
right? So the particular instantiation is not that meaningful, because we categorize a bunch of different shapes as A, and we categorize a bunch of other different shapes as B, and other different shapes as C, and so on, right? So we bin them like that, at a high cognitive level, into these categories, where we are only interested in the differences when they push the pattern from one category to
another, right? So we're attuned to the sharp borders between the categories that make up these
letter shapes, but we allow lots of variability within that. So neurons do the same thing, right? Individual neurons do it. There's this idea that comes from
basic kind of starting neuroscience with reflex arcs, where one neuron fires and it drives the
next one and it drives the next one and so on, right? So it's this electrical machine.
And I rather would reverse that and say, look, what's happening is that
this neuron here is monitoring its inputs, right? And it's doing some interpretation
on the signals that it's getting. Because, for example, this one may be sensitive to the rate of incoming spikes, of action potentials, from another neuron, but not the precise pattern, in some cases, right?
So the precise pattern is lost. All this guy cares about is, did I get 10 spikes in that last 100
milliseconds, or did I only get five? And if it's only five, I'm not firing. And if it's 10,
I'm firing. So it's an active interpretation. And I think the same thing happens with populations
of neurons, where you have one population that's monitoring another one, and if this lower one takes on any of a bunch of patterns that mean A, this guy might fire, because that's what it's sensitive to, whereas if it takes on any of the bunch of patterns that mean B, this guy won't fire. Okay? So that's what I was kind of referring to before about the
causal sensitivity in the system being at that higher level of the patterns.
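To make the two levels Kevin describes concrete, here is a toy sketch, with invented thresholds and weights, of a downstream 'reader' that cares about spike rate rather than precise timing, and a population-level reader that cares only about which category a pattern falls into:

```python
# Toy sketch of 'active interpretation': rate matters, fine timing doesn't,
# and at the population level only the category of the pattern matters.
# All numbers here are made up for illustration.
import numpy as np

def rate_reader(spike_times_ms, window_ms=100, threshold=10):
    """Fire iff at least `threshold` spikes arrived in the last window."""
    recent = [t for t in spike_times_ms if t >= spike_times_ms[-1] - window_ms]
    return len(recent) >= threshold

# Identical rates, different fine structure: same verdict either way.
regular = list(range(0, 100, 10))               # 10 evenly spaced spikes
bursty  = [0, 1, 2, 3, 4, 90, 91, 92, 93, 94]   # 10 spikes in two bursts
sparse  = [0, 30, 60, 90]                        # only 4 spikes
print(rate_reader(regular), rate_reader(bursty), rate_reader(sparse))
# True True False

# Population level: many distinct patterns all 'mean A' to the reader
# because they fall on the same side of a learned boundary.
w = np.array([1.0, 1.0, -1.0, -1.0])             # assumed learned weighting
def category_reader(pattern):
    return "A" if w @ pattern > 0 else "B"

for p in ([1, 1, 0, 0], [0.9, 1.2, 0.1, 0.0], [0, 0, 1, 1]):
    print(category_reader(np.array(p)))          # A, A, B
```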
And the important thing is that the sensitivity gets configured into the system because the
patterns mean something to the organism. They carry information about something out in the world
or the state of the organism or one of its ongoing goals or any of the other sort of elements of cognition that it needs to figure out what to do.
Does that answer the question?
I mean, I think you can couch it at cognitive levels, but actually what you see is that the neural instantiation is the same process happening.
process happening.
Yeah, I mean, you actually reminded me of some intuitions that were, for me at least, very fuzzy but very satisfying when I was reading your book, which is that I think you're hinting that non-linearity in these systems is really important. And, you know, there's this sort of gradient of input going into a neuron, and, you know, it fires or it doesn't. So
there's a non-linear activation there. And I think you could even extend this analogy to cells, like it being
very important for cells to have a cell membrane
so that their activity of being active or not or whatever
is separated from the environment around them.
Because if everything is just diffusing into everything else
and there is just these linear gradients,
then you really don't have any scope for interesting
and complex behaviour. And I can't articulate it.
I'm sure I don't understand it properly, but it just feels intuitive to me
that that idea of sort of transitioning from continuous, messy physics to almost like a computer's binary representation
is really important for all the things that make
big brains interesting. Yeah. I mean, I think, well, first of all, you've touched on a really,
really fundamental issue about what life is, which is that it keeps itself, right? It's a pattern of
stuff and processes that keeps itself out of equilibrium with the environment, right? So if it's in physical, thermodynamic equilibrium, then there's no boundary, there's no entity; it's not a thing, it's just, you know, part of the flux, right? It's one with the universe. You don't want to be one with the universe. So, yeah, that's absolutely a fundamental principle of what life is. And then as you go through into individual cells in a multicellular organism, those cells
still have some degree of autonomy, right?
In the sense they're still trying to make sense of what's out in the world, right?
There's still this barrier, the cell membrane.
They're still taking in information and they're sensitive to it and they're operating on it.
And each individual cell will have a sort of criteria embodied in it for what it should do when it sees this signal or that signal.
And that's basically how knowledge, procedural knowledge, gets embodied into our brains through learning: by changing the weighting of those criteria that each neuron or each population of neurons is acting on. So yeah,
that's where it comes to be the case that the system, a neural system is really doing cognition,
right? Cognition is not an epiphenomenon. It's what the thing is doing. It's just using its
neurons to do that. And, you know, this gets back to something that Sam Harris was saying in your recent interview, about this idea that when you think about yourself as an experiencer, what you find is just the experiences, right? You're just having some percepts at any given moment, and there's no self there. And I just think that's really mistaken, actually.
I think it's a mistaken intuition from his introspection, because in fact, the whole point
of perception is that it's this active process. It's not passive. We're not passively sitting
here being bombarded by sensory information, or at least that's not what our percepts are made of.
Our percepts are the inferences that we actively draw from that sensory information. That's our self doing something, right? A non-self can't be doing that. That's an action that is required, or an activity, that is inherently subjective, where the organism is bringing all its past experience
and knowledge to the act of perception.
So it's a filtering, it's an interpretation.
It's not just this passive flow of experiences where there is no self doing the experiencing.
Yeah, one of the introductory concepts you have to get across to undergraduate students is to correct that intuition that even something like vision is analogous to a video camera feeding a video stream into the brain, where of course there is so much processing going on that transforms it, and probably reduces information in the Shannon entropy sense.
It simplifies it, but it makes it far more comprehensible
and allows you to actually do something with it.
So I think that is incontrovertibly true,
that even simple perception is an active process.
Yes, and we have to, I mean, because for organisms, again,
it comes down to what they need.
What do they need to get around in the world?
They don't need to know about the photons hitting their retina, right?
They need to know what they're bouncing off of in the world before they hit their retina,
right? So they need to make some inferences.
And it's a hugely difficult problem, this inverse problem.
There's loads of potential states of the world that could make the same pattern of photons, right?
So you have to do loads of work,
and it's a skill that organisms acquire, right?
That they develop this skill of seeing.
And yeah, it's absolutely sense-making,
though not in the guru sense of sense-making, I think.
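The inverse problem Kevin describes has a standard toy formalization as Bayesian inference, which is one common way (not necessarily his) of cashing out 'percepts are inferences'. All the scene hypotheses and probabilities below are invented:

```python
# A toy Bayesian sketch of the inverse problem: many world states can produce
# the same retinal signal, so the percept has to lean on prior experience.
scenes = {                                # prior beliefs from past experience
    "small object, near": 0.60,
    "large object, far": 0.39,
    "giant object, very far": 0.01,
}
# Each scene projects the *same* retinal image, so the likelihood of the
# observed signal is identical under all three hypotheses:
likelihood = {s: 1.0 for s in scenes}

evidence = sum(likelihood[s] * p for s, p in scenes.items())
posterior = {s: likelihood[s] * p / evidence for s, p in scenes.items()}

percept = max(posterior, key=posterior.get)
print(percept)     # the prior, not the photons, breaks the tie
```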
Well, I thought this was a good time to raise the issue of consciousness, Matt.
I've got one question about it.
It's purely, again, because reading your book was very refreshing.
That's rare for me.
It helps, Kevin, that I agree with almost everything you say.
But even if it wasn't, I would have found it intellectually stimulating. But the one thing for me is that I have consistently not been puzzled by the concept of consciousness.
Okay.
Because, for me, I am approaching that primarily from the view of a
cognitive anthropologist, right? So for example,
that human cognitive processes make ontological categorizations quickly to separate things into
agents, objects, living things, spatial entities, and so on. And we have some evidence that these
are cross-culturally pretty consistent, even if the taxonomical
categories that each culture invents are wildly different, underpinning quite consistent things
in cognitive processes.
But so when I think about consciousness, to me, the obvious kind of connection was that
we are agentic beings that are able to imagine potential different futures and to try and think
about different outcomes. And we model ourselves in different outcomes or different situations
and kind of mentally time travel into those scenarios while also thinking back experiences
that we've had in the past. So it seemed to me that humans have quite a sophisticated agent modeling cognitive apparatus in their mind. And that from that, I would
anticipate that some sense of self slash consciousness would be a very likely a component
of having such a model. It would be, I just imagine it as like, okay, so the model is set up this way
and would work like that.
Now, Matt assures me
that that is something I'm inserting into the model
that doesn't need to be there.
But I just find it hard to comprehend
how you would have such, you know, good modelling abilities without some, you know, kind of self-agentic aspect to that. So why is Matt wrong? That's the question.
Yeah, I mean, I absolutely agree with you, Chris, in that sense. First of all, yes, we model all sorts of things,
right? So we model ourselves, we model the world, we model the future, we think about what could
happen, right? So we have this imagined future that we're sort of simulating, we're weighing up
the utility of various predictive outcomes and so on. The question, however, is why that should
feel like something, right? And so here's the pushback I'll give, maybe, right? So Matt's encouraging me here. So the pushback is: why should that feel like
something? Because maybe you could design a sort of a cybernetic system that has those functions
in it. And in fact, many other animals certainly do have many of those functions, just maybe not
to the same level that we have, not to the extent of the sort of time horizon that we're concerned with and so on, right?
So the concern would be, well, yeah, you can build all of those things. You can build
more complex control systems where having those functionalities give better adaptive control of
behavior. But why does it feel like something? That's the really important bit. And why does it feel the way it does, right? And so one counter to that is that
actually what we're doing, what we have a capacity for in humans is modeling, not just doing these
sort of simulations of objects in the world and states of the world and states of ourselves,
but modeling the processes that are doing the modeling, right? So we can think about our own thoughts.
So here we get to this kind of recursive idea that really, you know,
probably most nicely articulated by Douglas Hofstadter, for example,
in I Am a Strange Loop, where the idea is we get to a level
where we have enough levels of our cognitive cortical systems that the top ones are
looking down on the lower ones where the objects are now the thoughts themselves. We can think
about our thoughts, we can reason about our reasons. And it may just be that once you have
a model of yourself thinking, and you're using that as part of the control system because you can use that to direct
your attention, to think about other things. Maybe that just necessarily feels like something.
That's what it is for us. Now, it's also possible that there's another way that it feels to have
the kinds of control systems that other animals have, that are sentient,
that are clearly responding to things in the world that are sensitive to states of the world
and states of themselves. So they probably still have some kind of experience, but they may not
have the same kind of mental consciousness that we have, which is really a world of ideas and
thoughts that are somewhat abstracted from these more basic kind of cognitive operations.
I know I said one question.
I just have one slight comment.
And yet it's good, because Kevin can expertly explain things in technical detail.
So everything there makes sense, Kevin, and also that I might throw shade towards, well,
it's not throwing shade, but just saying that this might be the cognitive processes that
meditators are so interested in.
But because we have this recursive aspect, they draw some unwarranted metaphysical conclusions from it, when it could just be a fascinating process within the way that human minds work. In any case, the one kind of point that I've raised with Matt before, and I'll
raise it to you as well, is that, so when you were saying, you know, why would it feel like this, or that you can create a system where you have all of that but you don't have
the phenomenological experience. And my kind of reaction to that
was, but we've never seen that anywhere
in the world. So it's a theoretical possibility,
but it's never yet happened. So if someone makes something which is exactly like that and which, like you said, doesn't feel like anything, then that would be true. But at the minute it feels to me like a kind of thought experiment. It's the zombie experiment, right? It's Chalmers' philosophical zombie experiment: you have everything happening exactly the same as in you, but it doesn't feel like anything. And, you know, for me, the idea that you can conceive of that has no weight in any kind of argument by itself, right? Conceivability is not
an argument.
But, I mean, there's two ways you can see it: either the phenomenology pops out of the way that thing works, or, which is equally problematic, it's added, right? There's this extra bit that is the phenomenology.
And that's what Chalmers' thought experiment is sort of getting at,
is like there's something you could subtract and have everything else left.
And that doesn't make much sense to me either.
But I think there's another way you could think about it,
which is there is a phenomenology that emerges,
and then that phenomenology has some adaptive value to it and some causal efficacy
in terms of being able to think about abstract thoughts. So there we're just into metacognition
and what metacognition gets you as a part of a control system. And it gets you, for example,
the ability to not just have a thought, but to judge the thought.
So you can have a belief and then say, wait, should I have that belief?
How certain should I be?
What's my confidence level in that belief?
That's going to inform my decision making here because I think such and such, but I'm not really sure.
So maybe I need to get more information or maybe I shouldn't jump here. And so, yeah, again, I think you can operationalize
the metacognitive stuff much more easily
in terms of control system, behavioral control, and so on.
The what-its-likeness, for me, I mean,
I didn't even try to address it
because I just don't know, right?
Maybe it's the biggest mystery that we still have. I certainly don't have an answer to it.
But the very, very last thing just to
say is that I think the example that you give in the book about, you know, schizophrenia, where people experience a voice, an internal monologue, not as theirs, even though it is generated by their brain, right?
Yes.
That could give some, you know, indication that when the brain is not functioning entirely properly, you can have the sensation of intrusive thoughts from elsewhere. So yeah, anyway, Matt, that's it. I promise I won't mention consciousness again. I'm very satisfied.
Say the C word.
No, that was fine, that was fine. Actually, Kevin's answers, I think, brought
some balance to the force because you agreed with everything he said.
I do too.
I sign off on all of that, honestly.
I can definitely see the adaptive benefits of having those
self-reflective processes.
Yeah, there's a lot of evidence for all of that.
The only thing I think is – so Chris rebels against the idea
of anyone calling anything mysterious because he thinks it's hinting
at something magical.
And I suppose where I'm coming from is, again,
emphasising I sign off on all of that.
I just do find it just at a gut-feeling level a little bit mysterious,
like where the consciousness – like this thing, whether it pops out
or it's added on or whatever,
I guess maybe I find the idea of P-zombies not entirely illogical.
Like I can imagine it being pretty possible
and all of the AIs that are floating around, you know,
I think show that you could make a pretty convincing simulation
of something that does do sort of chain of thought stuff
and does have some ability to reflect. It doesn't seem implausible to me and just that they're but so
we just have the you know i guess chris's argument is well you know we're we're complex agents and
we're conscious so you know why is it mysterious because we have proof it just is and you know i
i accept that i accept that it just is but i i still find it a bit odd
Well, I mean, yeah, it is mysterious in the sense that we don't currently understand it. So, yeah, I agree. I just don't think that means... that's a statement about us, that's not a statement about the thing, right? The statement is: we find it mysterious. That's a description of us, not the thing. So it doesn't mean it's always a mystery in and of itself. We can find out. I mean, lots of other things used to be mysterious too.
Yeah, yeah, it feels mysterious to me. I also feel hungry sometimes, and a whole bunch of other things. So I agree.
You made me a happy man already, Kevin.
I feel like you've restored balance to the universe by making us both feel that we're correct.
Yeah, well, I was trying to explain it to Chris, and you really did help, because you laid it out in a way that Chris could agree with, and did agree with, 100%. And I do too.
And my thing all along was we do agree, Chris.
I just think it's just a little bit mysterious
that there's a subjective phenomenological experience of it.
And that's all.
That's all.
And nothing else.
We're on board.
We actually agree.
I think you can feel that it's mysterious without adopting a mysterian philosophy or metaphysics, right? Without committing to it always being mysterious for all time.
Yeah, yeah, that's right. And I know
why you react to that statement, Chris, because I agree, I'm with you. I don't like the people that turn it into some mystical, you know, mystery where we're going to find a place for spirits and poltergeists. It's not about what people like Deepak Chopra do; of course we all agree that that's mad, right? Or the people that really fixate on using quantum indeterminacy to justify everything, right? Like, just as a get-out-of-jail-free card.
The bit that I just tend to bump into is that I just kind of think, because of all the processes that we've discussed about
agents going around in the world and,
you know,
trying to model things and so on,
that the fact that it's like something just seems to me like,
yes, it has to be.
Maybe it's a failure of my imagination, but I'm just like,
I was going to say that.
What's the alternative?
And I can't imagine it because it's the only way that I know
that that process works.
So I can theoretically imagine a world where it isn't.
And whenever I encounter beings that are doing that,
I'll be interested to discuss what their inner worlds are like.
That's the tricky bit.
Do you know Mark Solms' work at all? S-O-L-M-S.
Very interesting stuff about really sort of basal kinds of conscious experience
that are basically emotional, you know, triggered from
the brainstem, and they convey what are essentially control signals, right? Homeostatic signals that
say, my food levels are low, my osmotic balance is off, my sleep debt is too high, it's too cold
out here, right? You know, so really, really basic things that basic organisms feel. And he has a sort of a theory that these have to be submitted to a central cognitive economy
that is trying to adjudicate over them and arbitrate over them all at once to decide,
well, look, I'm hungry and I need sleep and I need shelter.
Which of them am I going to prioritize at any moment?
And these things have a valence, right?
They feel good or bad.
But just by itself, if all the central signal,
the central processing unit only got good or bad signals,
then it would no longer know the source of them, right?
It would no longer know, oh, wait, this is a hunger signal
that I'm feeling right now.
It's not just bad. It's bad relative to that thing that I need to keep track of.
And his argument is that the qualitative feel is required there because there's no other way to keep track of it, right? It just has to feel like something in order for the central system to kind of keep an eye on everything
and know what it's referring to.
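Here is a toy sketch of that arbitration idea as I read it; the drive names and valence numbers are invented, and this is only a cartoon of the claim that valence signals need identity tags to be usable:

```python
# If drives reached the central system only as raw good/bad values, their
# source would be lost; tagging each signal with its identity ('what it
# feels like') lets the controller know what to act on.
untagged = [-0.7, -0.4, -0.9]             # just 'bad', but bad about what?

tagged = {                                 # valence plus identity
    "hunger": -0.7,
    "sleep debt": -0.4,
    "cold": -0.9,
}

def arbitrate(drives: dict) -> str:
    """Prioritize the most urgent (most negative) homeostatic signal."""
    return min(drives, key=drives.get)

print(arbitrate(tagged))                   # 'cold' -> seek shelter first
# min(untagged) tells you how bad things are, but not what to do about it.
```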
And to me, it's sort of intuitively appealing.
I kind of was like you, I was like nodding my head when I heard that,
and then I'm just not 100% convinced,
but I find it, yeah, kind of a useful way of thinking about
why it should feel like something,
just in terms, again, of a control system architecture.
Yeah.
I mean, like on one level, I totally get it.
And I, because I've always thought of emotions as being a modulating influence,
right, to help with decision-making.
But I guess, maybe this is where me and Chris are different. Like, Chris was there going, yes, well, of course it has to feel like something in order for that to be happening. But I suppose I can imagine a system that has modulating, you know, call them emotional factors, which are changing things, reprioritizing things and all that stuff, without there having to be a kind of a self and a unitary experience of that going on.
Yeah.
I guess that's my objection, but yeah.
Yeah, no, again, I can imagine both sides, right?
I can imagine exactly what you just said.
You just build a system that has these different sensors
to different parameters of internal states
and you somehow keep track of them
in a central system that
arbitrates over them, and that's just robotics and, you know, computation. And then, on the other hand, I'm also tempted to think, well, maybe if you did that, it just would feel like something.
Yeah, well, I guess Chris has to be right, because we do feel something.
I don't need to hear anything else.
That's it.
Chris has to be right.
That's the only thing that trashes it.
Just sample that and loop it.
Well, I just had a kind of, it's a bit of a speculative query,
but I'd be curious, Kevin, that with all the AI advances,
and Matt and I make use of ChatGPT and Claude
quite a bit in our work currently,
and have generally been impressed
and things are improving.
And I'm just wondering if any of the AI developments
and the things that are going on there,
if you have any thoughts about, you know,
how that relates to all of the stuff that you've put into thinking about agents and agency.
Yeah, I mean, yes, I do have thoughts. I'm not an expert on the AI stuff, so this is filtered through my understanding of what these systems do from talking with people who work in the field and so on. So first of all, I would say, you know, there's a question about whether these systems have
understanding. And then that raises the question of, well, what the hell do we mean by understanding?
What does that entail in a natural agent? And I think you can build up this view of a map of
knowledge about the world, knowledge about causal relations in the world, exactly the
kind of things that we need in order to simulate an action and the predicted outcomes from it,
and so on. So that's actionable knowledge. It's causally relevant kind of descriptions of the way
the world works and the way the self works in it. And that's the kind of thing that large language
models don't have that because they're not interacting with the world in the same way. They're not causally intervening.
They're not acquiring causal knowledge themselves through that intervention.
However, they have all the text of all the people who've ever been interacting with the world.
And so they can mount a really good simulation by making a perfectly plausible utterance in
response to some prompt. And so
it looks like they understand things, but it's sort of a parasitic understanding. They don't
have a world model. They have a world of words model, which is a separated kind of a thing.
So I think what would be interesting, although potentially dangerous and ethically fraught, would be to think about, well, what would it take to build an agent, right?
Not to build artificial intelligence, but to build an intelligence, right?
An entity.
What would that necessarily kind of entail?
And I think if you look at the architecture of cognitive behavioral control that I just
described the evolution of in the book, you could see there are ways to make a being that has that kind of architecture, which is not just an internal thing. It's an architecture of relationships with the world, an ongoing thing, that
allows it to learn what to do in different situations and so on. And that leads to intelligence having an adaptive value where intelligence pays off in
behavior, right? It's not an abstract kind of a thing like playing chess or something like that.
It's like, I need to get food. Where can I find it? So yeah, I think there's a route that you
could at least imagine where you build an artificial agent that interacts with the world in such a way that intelligence becomes selected for
and ultimately emerges in that system if you allow it to kind of evolve or iterate over designs or so on.
But right now, the things that we have are these sort of disembodied, disengaged systems that I don't think are entities and that I don't think have agency
because they're not designed to. It's not a knock on them. That's not what they're for.
Yeah. So I think it's a really, really open field. I think it's really exciting. And again,
I think it's very ethically fraught because if people go about making artificial agents then we're going to have
all kinds of questions about responsibility for those agents responsibility of the agents and
what they do and all kinds of other sorts of questions that we probably should figure out
before anybody goes about building one.
Yeah, yeah. Many people are worried about AI for many different reasons, but one thing I noticed amongst the AI doomers, the people who are very, very concerned about AI, and I think perhaps going a bit beyond legitimate concerns into a bit of speculation, is they definitely do imagine that a hyper-intelligent AI would immediately hop to a lot of human
motivations.
So the argument goes, okay, it becomes very intelligent, it is naturally going to want
to persist.
It is naturally going to want to exert some control over its own destiny, and this will
make it want to kill all humans, is kind of how the argument goes. And I think, from your point of view, as you just mentioned, evolution has invested us and all living creatures with those imperatives, so it's very natural for us to assume that something intelligent is going to have them too. But I personally suspect not. But how about you?
Yeah, no, I think the order is important, right?
What happened in nature was the purpose and meaning and value came first.
And intelligent systems emerged from that because it's useful relative to those goals, right?
Those standards.
I don't think that the kinds of systems that we're seeing now, that are built for, you know, things like text prediction and so on, even though they have this great corpus of words that they can build an internal relational model of, I don't think agency will just pop out of them with the current architectures. I think you'll have to do something different. I think you'll have to embody them; you'll have to give them some sort of skin in the game, you know, in order to get that. So I don't see the doomer stuff. I don't see Skynet or Ultron emerging from ChatGPT, in its current incarnation or any further incarnation that has the same architecture. I think we're probably safe on that front. There's
lots of other reasons to be worried about the influence of AI systems as they're applied in the world, but those are more
societal, not technical, I think. Yeah. So getting back to the book and connecting a little bit to
AI, I suppose, is that one thing that these AIs are indisputably good at is creativity.
And you see it with the image generation and just recently
video generation is pretty, pretty interesting. And of course, the creative writing, it might
bullshit a lot. Sometimes it may not understand what it's doing, but it's certainly creative.
And I think that's something you emphasized in your narrative, in the book, of human cognition in the context of evolution: that with simpler organisms you had, I guess, a lack of creativity.
There was certainly agency.
There was certainly evolutionary imperatives and responses to stimuli.
But there was a lack of flexibility, a lack of behavioral flexibility.
And at some point, evolution figured out in some organisms that that could actually be a very handy thing.
How would you describe that process?
Yeah, I think that's right.
So many organisms can react to things in the world.
They may have some kind of pre-configured control policies that say, yeah, I should approach this kind of thing.
I should avoid that kind of thing,
and so on. And they may be capable of integrating multiple signals at once and assessing in effect
a whole situation and deciding what's the best thing to do in this situation. So they'll have
systems to do that. And they may have systems to learn from experience so that their recent history
or even far history can inform what
they should do in any scenario.
But yeah, many of them are not particularly, they don't have an open-ended repertoire of
actions, right?
Which is what we do.
We can really, I mean, within our physical constraints, we can do all kinds of things.
We can think of all kinds of things that we could do, right?
And so part of what
has happened over evolution is that cognitive flexibility became really valuable in human
evolution in particular, in a sense, probably because I think it snowballs on itself, right?
The more sort of flexibility you have, the more control over your environment, the more you can
move into new environments, which makes it more valuable to have more cognitive flexibility and so on.
So I think you get this amplifying loop that happened in human evolution, which
moved at some point from human biological evolution to human cultural evolution. And
then it really took off because then we could share our thinking with each other.
So yeah, we have this ability for creative thought.
And I think we deploy it most of the time in terms of creative problem solving, right? And that's what most other animals that have some open-endedness to their behavior do as well.
And it's something that, I don't know that AI does, right? You said, you know, AIs are very creative, and they're generative, right? I mean, that's what they're for. But they generate new stuff by kind of recombining old stuff, right? So, I don't know, it's not creative in the same sense as...
That's how I create stuff, Kevin.
Well, maybe. I don't know how you do it. Yeah, yeah. I mean, maybe you're right.
Maybe they're recombining.
Well, are they recombining ideas?
That's the question, right?
Are they having ideas and recombining them,
or are they just recombining material in a new way?
And yeah, I don't know.
Maybe there's not a sharp line between those things.
I do think, you know, one thing is in terms of the creative problem solving,
many organisms, including us, have these systems where we can, you know, say we're in some given
scenario, we kind of recognize it, but there's a few things that we think we could do. So A, B, C,
or D, these are the options that we have. And we evaluate them and just decide on one of them.
Maybe we're not 100% sure, but we decide on one of them. And we try that out.
And it turns out we're not achieving our goals, right?
So we go back and we try one of the other ones.
And we might exhaust those options and still not be achieving our goals. And there's this little nucleus in the brainstem called the locus coeruleus, or the blue spot, that releases norepinephrine into parts
of the rest of the brain including the cortex, where the theory is that it kind of shakes up
the patterns that are there. So it kind of breaks the patterns out of their ruts,
the most obvious things that were suggested to do, and allows it to explore, expand the search space for new ideas. So, you know,
thinking outside the box, and then those new ideas are evaluated and simulated and so on,
and maybe tested out in the world. So what's really interesting is that what that is actually
doing is kind of making use of the noisiness, the potential indeterminacy, in the neural circuits. It doesn't add it; it releases it, right? So ordinarily the habits kind of constrain the neural populations; this system effectively reduces those constraints and lets the populations explore different patterns that they wouldn't normally go into. And to me, that's just a really beautiful kind of example of how
an organism can make use of this indeterminacy, but it's deciding to do it, right? It's a resource
that it can draw on to enable it to expand the things that it even thinks of, right? The things
it even conceives of to do in any given scenario.
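Computationally, 'releasing constraints to widen the search' resembles raising the proposal width or temperature in a stochastic search. The sketch below is that loose analogy (my gloss, not Kevin's model), with an invented score landscape:

```python
# A search that normally stays in the rut of its current habit, but can
# widen its own proposal distribution, roughly a gain/temperature knob.
import random

def score(x):                      # invented landscape: a rut and a better peak
    return -(x - 1) ** 2 if x < 3 else -(x - 5) ** 2 + 2

def search(gain, steps=2000, seed=1):
    rng = random.Random(seed)
    x = 1.0                        # start in the habitual rut near x=1
    for _ in range(steps):
        candidate = x + rng.gauss(0, gain)   # gain = how far habit lets it roam
        if score(candidate) > score(x):      # greedy: keep only improvements
            x = candidate
    return x, score(x)

print(search(gain=0.1))            # stuck near the habitual solution
print(search(gain=2.0))            # constraint released: finds the better peak
```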
Matt is an AI evangelist in a way, as you can tell by his description of creativity.
So, Matt, you mentioned to me that whenever you make,
with ChatGPT or LLMs, prompts that encourage them
to be reflective and recursive, right?
Like think through what you did,
you can often overcome, like, blockages, things that people otherwise say the models aren't able to do. But if they weren't able to do it at all, they shouldn't be able to reach it when you kind of encourage them to go through additional steps, right? So, I know it's a completely different process in a way, but it seems somewhat analogous, the difference being, of course, that in this case we are the ones prompting it in the system.
So maybe we are functioning as the kind of agent in that system still.
But, like, I don't see that it would be impossible to create a system that was doing that, you know, from its own set of instructions. So maybe this is for Matt and Kevin, but is that a completely distinct process, or would you see it as potentially analogous? Whoever wants to answer.
You're the guest.
Okay, well,
yeah, I don't... I mean, it's an open question to me. You know, like I said earlier, I'm probably not enough of an expert on the inner workings of those models to know exactly where or whether there's creativity at work, you know, this term that's pretty poorly defined anyway, even in human endeavors.
So, yeah, I don't know.
I don't, again, for most of these things, I don't see any reason why they couldn't, this kind of system.
You know, what I just described, there's no reason why that kind of a system couldn't be put into an artificial entity of some kind.
I just am a little skeptical that the current versions have that kind of capability.
Yeah.
Yeah, I mean, I agree.
I think these terms are badly defined.
And until we get a really good definition of what we mean by creativity, there's not
much point.
Personally, I think a good starting point is just randomness combined with evaluation
in a cycle.
And certainly, I used to paint, you know, paintings, art, badly. But my system was, I do abstract art, I'd sort of paint virtually almost randomly, maybe some intuitions were going in there, but really probably mostly randomly, and then I'd step back and look at it and paint over the bits that I didn't like, and then have another go. And following that process, the finished product can seem like it came about
through this mystical process of creativity,
but it can be arrived at algorithmically.
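Matt's paint-evaluate-paint-over loop is easy to caricature in code. Here is a minimal generate-and-test sketch, where the 'aesthetic' critic is an arbitrary invented stand-in:

```python
# Random proposals, keep what the critic likes, repeat: a generate-evaluate
# cycle in the spirit of the painting process described above.
import random

rng = random.Random(42)

def aesthetic(canvas):             # invented critic: prefers balanced tone
    return -abs(sum(canvas) / len(canvas) - 0.5)

canvas = [rng.random() for _ in range(16)]     # start: random daubs
for _ in range(500):
    i = rng.randrange(len(canvas))
    old = canvas[i]
    canvas[i] = rng.random()                   # paint over one patch at random
    if aesthetic(canvas) < aesthetic([*canvas[:i], old, *canvas[i + 1:]]):
        canvas[i] = old                        # step back, undo what we dislike

print(round(aesthetic(canvas), 4))             # close to the critic's optimum
```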
And I think if you talk to a lot of artists,
they'll often describe what they actually do in more prosaic terms.
But I think what we're getting at is that there's some fundamental issues
that confront all agents.
And, you know, one example is the conflict between exploitation
and exploration.
Yeah, and sometimes you're running around randomly trying
out different things and sometimes you're onto a good thing
and you just keep doing it.
And it's interesting to know that there was a little bit of cognitive science I didn't know about there, where you identify the mechanism that actually encourages a bit more of the creativity, right?
Yeah, I mean, you're exactly right. That
exploit-explore problem, which is ubiquitous, is another area where some little bits of randomness can be useful, right? So occasionally, even if you're onto a good thing,
it pays to occasionally look around and do a little exploration
because it's always going to be the case that good times are not going to last, right?
And so having that policy built in to the systems that are directing exploration or exploitation
avoids going down this path where you're overcommitting to a certain resource that is definitely going to run out.
And evolution knows it's going to run out because it has done in the past.
So, yeah, and there's a bunch of other systems, even in simple organisms,
where a little bit of noisiness is used as a resource for, you know,
a little bit of variability. First of all, it's just a, it's a feature in the nervous system.
It's not a bug. It's a feature that enhances signal processing in many ways, but also,
you know, sometimes it's useful, say when you're avoiding a predator, you know, the last thing you
want to be is predictable because you'll be lunch. So, you know, many organisms kind of use some sort of randomizer to tell them, you know,
which way to jump.
Yeah.
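The policy Kevin describes, mostly exploiting but occasionally exploring so that a depleting resource gets noticed, is essentially epsilon-greedy action selection. Here is a toy sketch with invented payoffs:

```python
# Epsilon-greedy foraging: exploit the best-known patch most of the time,
# look around occasionally, and notice when the good times end.
import random

rng = random.Random(7)
payoff = {"patch_a": 1.0, "patch_b": 0.3}      # patch_a is good... for now
estimates = {"patch_a": 0.0, "patch_b": 0.0}

for t in range(400):
    if t == 200:
        payoff["patch_a"] = 0.0                # the resource runs out
    if rng.random() < 0.1:                     # epsilon: occasional look-around
        patch = rng.choice(list(payoff))
    else:                                      # exploit the best current estimate
        patch = max(estimates, key=estimates.get)
    r = payoff[patch] + rng.gauss(0, 0.1)
    estimates[patch] += 0.1 * (r - estimates[patch])   # recency-weighted update

print(max(estimates, key=estimates.get))       # has switched to patch_b
```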
Well, for what it's worth, those deep artificial neural networks are, you know, intrinsically
stochastic.
They wouldn't work very well without that.
But another thing that occurred to me, I wonder if this is related to your thesis at all,
which is something that's always fascinated me
and it is the credit assignment problem.
So with organisms and humans, intelligent agents,
you do some stuff sometimes over an extended period of time
and then something good happens.
You have a reward.
You have a signal that's something that you want.
But then you're confronted with this problem where you have
to look back.
Maybe it wasn't the thing you just did.
Maybe it was some sequence of steps or maybe it was this thing
you did way back then.
And this to me is one of the most interesting challenges I think
an agent's got in the world.
How do you – does that relate to your thesis at all?
It does. I mean, in the sense that as you're building up a model of the world and those
causal relations, that's exactly what you need to do is distinguish the real causal relations from
the ones that were only apparent, right? And of course, it gets harder to do the longer the time frame over which the true causal influence obtains. So yeah, I mean,
there's lots of people working on this problem. I'm not really an expert on it at all. But you
can see the elements that would be required. You need to have some record of events,
you need to have some working memory to be able to keep track of
not just what's happening right now, but what just was happening and what was happening 10 minutes
ago and what happened a week ago and so on. So you can see that it depends on this sort of nested
set of memory systems and then some kind of an evaluative system that can say, this was the important bit relative to that.
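One standard formalization of this delayed-credit problem, chosen here for illustration rather than because Kevin endorses it, is the eligibility trace: a decaying record of recent actions that a late reward can be spread back over. A toy sketch with an invented episode:

```python
# Eligibility traces: each action leaves a decaying memory, and a delayed
# reward is assigned in proportion to what is still 'eligible'.
actions = ["forage", "rest", "dig", "wait", "wait", "eat"]   # invented episode
reward_at_end = 1.0
decay = 0.7                     # how fast the memory of an action fades

trace = {}
for a in actions:
    for k in trace:
        trace[k] *= decay       # older actions become less eligible
    trace[a] = trace.get(a, 0.0) + 1.0

# reward arrives only now; spread it over the surviving traces
credit = {a: reward_at_end * e for a, e in trace.items()}
print(sorted(credit.items(), key=lambda kv: -kv[1]))
# recent actions ('wait', 'eat') get most credit; 'forage' gets little
```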
And in many cases, the only way you get that data is by doing it again, right?
And seeing that actually across many, many instances,
lots of things were varying through time,
but these ones all have this same thing in common between them,
and that must be what the causal influence was, right? And so,
that's exactly the system that leads to understanding, right? That's what understanding
what's going on entails. And what's interesting is that in many of the AI systems, the machine
learning systems, they have so much data and so much compute and so much energy available to them that they don't
necessarily compress things in such a way as to abstract and identify the salient features, and instead they often overfit, right, to bizarre stuff, or not bizarre, just arbitrary stuff,
and then that manifests as a failure to generalize to a new situation.
And so the ability to abstract true causal relations from noisy experience, and then generalize that to a new novel situation where that's useful knowledge, to me, that is what
understanding is, right? I think that's a reasonable sort of description of understanding, and it's something that I feel like most of the current machine learning models don't come close to.
Yeah. I've just got one more question for you, and then we're going to let you go and get about your day.
Okay.
This is the last one for me, I promise, and it's out of curiosity. I'm going to simplify it a fair bit. But do you know how, in explanations for the,
in evolutionary terms, the explosive growth, I suppose,
in human intelligence in relatively recent evolutionary history,
there are, to simplify a lot, kind of two explanations, right? There's humankind the tool maker, right? We got smart because being smart means you can make really good tools, understand the physical environment, hunt prey better, avoid predators, all the rest. And then there's kind of another explanation, which is more of an intra-species competition explanation: that our social environment started to get more complex as our brains got a little bit bigger, and then an evolutionary arms race sort of went on, where the better understander you were of your fellow humans, and the better you could communicate with and manipulate and understand their motivations and intentions, the better you would do. And, you know, that has some appeal to me as well, I suppose, in terms of
the language instinct and things like that yeah yeah i mean what's just i mean just shooting from
the hip it's not a serious question i suppose i'm just curious you know more about this stuff than i
do what's your gut feeling about it yeah i mean my general feeling whenever i hear any of those
theories that says this is the one thing that explains uh how we got there is that it's not
the one thing right it was just a bunch of things it was a confluence of various factors that feed
off each other and amplify each other and and and iteratively increase the value of of being
intelligent through through all of those things at once.
Some people say it's cooking, and that we could get more calories that way, or it's the number of people we interact with socially, or it's walking upright, or having dexterity, all these various things. To me, it doesn't make sense to settle on any one of those as the prime causal factor. I think they were all convolved with each other in a way that probably can't be decomposed explanatorily in retrospect. That would be my feeling. But yeah, I think all of those things are at play. One thing I will say
that you just kind of prompted there is this thinking about other people, right? Being a better understander of other people, as you said, is really valuable.
And so what that means is that the capacity that psychologists call theory of mind, being able to think about someone else's thoughts, becomes really valuable. And there is an argument that that's actually why we can think about our own thoughts: because the ability to think about other people's thoughts became valuable, and then it turned out, oh hey, I can think about my own thoughts at the same time, right? So the self-awareness was an exaptation on the social value of theory of mind. You know, I don't 100% buy that either. I think it's probably both things going on: being able to model someone else's mind while not being able to model your own mind doesn't make much sense, so they probably co-evolved. But you can at least see the value of modeling minds generally in the social context, where it's just more obvious than the isolated value of modeling your own mind.
I think I totally agree. I can see the importance of having that theory of mind, and I can see how it could be co-opted to promote self-awareness. I suppose the other thing that sort of guides me a little bit more towards the importance of social dexterity as an important factor is around language.
And it's connected to what we were talking about before, which is that the thing about language, a bit like letters, is that you have a discrimination: you see an A or you don't see an A. And all words are pretty much about putting categories onto the world. And putting those categories on is extremely important for us to construct interesting mental representations and to do things with them, right? So again, just gut feelings and intuitions here from my side, but I find it really hard to imagine how you could have a very intelligent species that wasn't using some form of language to think, not just to communicate with others, but even to think. To what degree do you reckon language is fundamental and required for thinking?
Yeah, I mean, I think what language enables is that we get these categorical entities that we can then manipulate as objects without having to mentally keep track of all of their properties, right? We can just use the label. We can think about dogs without having to constantly say, those warm-blooded, furry things with four legs that wag their tails and can bite you. I mean, you just couldn't think like that. It would just be too cumbersome. So now we have a category, dog. That's an element that we can use and recombine in an open-ended way, not just to express thoughts, but to have them, in ways that we couldn't do otherwise.
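As a loose analogy only (the concepts and properties below are made up for illustration), you can think of a category label as a cheap handle to a bundle of properties: "thinking" can then shuffle short tokens around and only unpack the details when they are needed.

```python
# Loose analogy: a label is a cheap handle to a property bundle.
concepts = {
    "dog": {"warm_blooded": True, "furry": True, "legs": 4,
            "wags_tail": True, "can_bite": True},
    "cat": {"warm_blooded": True, "furry": True, "legs": 4,
            "wags_tail": False, "can_bite": True},
}

# A composite thought is just a recombination of labels...
thought = ("dog", "chases", "cat")

# ...and the properties are only unpacked when actually needed.
subject = concepts[thought[0]]
print(thought, "- can the subject bite?", subject["can_bite"])
```

The open-ended recombination Kevin points to is what working with compact tokens buys you.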
So now there's a couple of ways to think about that. One is that, you know, some people would say, well, the language that we have shapes our thought. I would say that the types of things that we want to think about shape the elements of our language, right? So we have objects, and we have actions, and we have, mainly, prepositions, which are causal relations or interdependencies between them. Those are the elements in the world. The world is structured like that, and so our thought reflects that because it's useful, and therefore our language reflects those elements. But once we have that, then I do think it opens up this explosion of infinite open-endedness, of abstract thoughts that we can entertain that couldn't have been there before. And of course, the communicative element then is that we can think together, right? We don't have to think alone anymore. You can learn something and tell me, and I immediately get the benefit of the hard work that you've done, and then it's cumulative over generations and so on.
It's Chris's turn now, and I'm going to shut up from here on. But I've just got to mention that I had one of those whoa-dude moments there, because I totally agree with you that the physical world that we care about, in which you have subjects and objects and things acting on other things, I can see how that shapes the things that we think about. Even when a mathematician or a physicist is thinking about something extraordinarily abstract that we have no personal contact with, they use physical and geometric analogies in order to think about it. And having that shape our language, I was like, whoa, yeah. That's cool.
You can hear the sound of Matt's head exploding somewhere in the universe.
That's a good note, I think, to round off on, Kevin, because I suspect if I don't stop Matt, you will be here for another hour. As a guru, Kevin, I give you good marks, because you mention meaning and you talk about sense-making. You've got consciousness, free will, you know, big things popping around.
That could get you somewhere. But you lose points because you disparage monocausal accounts, and you're not entertaining mystic forces. And yeah, you said that you didn't know an awful lot about a topic; that's not allowed. That's also not allowed: you can't express uncertainty. And your analogies were not long or flowery enough. So you've got some positive points, but basically it's not going to cut it.
But yeah, I have to say, I'd encourage anyone listening that has an interest in any of these topics: we generally don't do book promotion stuff, and I know that you didn't come to us for that, but I would just recommend your book, and I think Matt would too.
To be clear, we contacted Kevin to please come on our show. His agent did not contact us.
That's what I was saying.
Yeah, it's been a great pleasure and entertainment. And I think the key takeaway is that essentially I'm right. That's what I take away from what you said. And, you know, Matt has a couple of things that he's okay about as well. But yeah, it's been a pleasure, and hopefully we can do it again on other topics. We didn't get you to talk about gurus that much this time, but it might be good to pick your brain on those folks.
Great. Well, it's been great fun, guys. Yes, Chris, you're right. Yeah, it was a pleasure.
Ah, that's music to my ears.
Okay, and thanks for meeting us, Kevin. This was great fun.
And yeah, if you can make the time in the future, we'll have to get you back on to give us your hot takes on the Infosphere and gurus.
Yeah, great. That sounds like fun. Thanks, Matt. Thanks, Chris.
All right.
Bye, Kevin.
Cheers.
See you.