Daniel and Kelly’s Extraordinary Universe - How does consciousness emerge?
Episode Date: March 27, 2025. Daniel and Kelly talk to Prof. Megan Peters about the inner workings of the mind, and how much we do and don't understand it. See omnystudio.com/listener for privacy information.
Transcript
This is an I-Heart podcast.
Ah, come on.
Why is this taking so long?
This thing is ancient.
Still using yesterday's tech,
upgrade to the ThinkPad X1 Carbon,
ultra-light, ultra-powerful,
and built for serious productivity
with Intel Core Ultra processors,
blazing speed, and AI-powered performance.
It keeps up with your business,
not the other way around.
Whoa, this thing moves.
Stop hitting snooze on new tech.
Win the tech search at Lenovo.com.
Unlock AI experiences with the ThinkPad X1 Carbon, powered by Intel Core Ultra processors,
so you can work, create, and boost productivity all on one device.
December 29th, 1975, LaGuardia Airport.
The holiday rush, parents hauling luggage, kids gripping their new Christmas toys.
Then everything changed.
There's been a bombing at the TWA Terminal.
Just a chaotic, chaotic scene.
In its wake, a new kind of enemy emerged.
Terrorism.
Listen to the new season of Law and Order Criminal Justice System
on the IHeart Radio app, Apple Podcasts, or wherever you get your podcasts.
My boyfriend's professor is way too friendly, and now I'm seriously suspicious.
Wait a minute, Sam. Maybe her boyfriend's just looking for extra credit.
Well, Dakota, luckily, it's back-to-school week on the OK Storytime podcast, so we'll find out soon.
This person writes, my boyfriend's been hanging out with his young professor a lot.
He doesn't think it's a problem, but I don't trust her.
Now, he's insisting we get to know each other, but I just want her gone.
Hold up. Isn't that against school policy? That seems inappropriate.
Maybe find out how it ends by listening to the OK Storytime podcast on the IHeart Radio app, Apple Podcasts, or wherever you get your podcasts.
Have you ever wished for a change but weren't sure how to make it?
Maybe you felt stuck in a job, a place, or even a relationship.
I'm Emily Tish Sussman, and on She Pivots, I dive into the inspiring pivots of women who have taken big leaps in their lives and careers.
I'm Gretchen Whitmer, Jodie Sweetin.
Monica Patton.
Elaine Welteroth.
Learn how to get comfortable pivoting because your life is going to be full of them.
Listen to these women and more on She Pivots, now on the IHeart Radio app, Apple Podcasts, or wherever you get your podcasts.
On the podcast, we love to ask deep questions and then explore the edge of our knowledge.
Today, we're going to go a little bit meta and try to understand the workings of our own minds.
The us that is asking those questions and yearning to understand the universe, from biology to physics and yes, sometimes even a little bit of chemistry.
Science gives us this incredible tool to consistently build knowledge about the universe, but it's not clear that it can tell us about ourselves.
that it can help us understand why we have a first person subjective experience in the first place.
Why we have an experience at all.
Why chocolate has a taste and pain has a negative valence.
Why there's something it's like to be me.
How's that different from what it's like to be a bat?
Is it like anything to be a rock or a puddle or a star?
Science, like all human endeavors, is limited by the senses that we use to interact with the universe.
It can help us understand what we sense and find patterns and build models,
but it can't ever really tell us about the hard reality of the physical universe
that exists out there beyond our minds and our senses.
And that same limitation might apply to the other end of that sensory pipe.
Can a scientist use science to probe the nature of the mind while trapped inside their own mind?
That might just be the province of philosophy.
Today on the podcast, we're getting philosophical and asking what we know and how we can know about how consciousness emerges.
Welcome to Daniel and Kelly's extraordinarily self-conscious universe.
Hello, I'm Kelly Weinersmith. I study parasites and space,
and today I'm wondering if we live in a simulation.
Hi, I'm Daniel.
I'm a particle physicist,
and I don't care if we live in a simulation.
I want to eat simulated dark chocolate.
So that's my question.
Is it possible that we live in a simulation?
We're getting philosophical today.
Let's go all the way.
Yeah, absolutely it's possible,
but it also seems like a modern reflection
on recent concepts about how computers work.
You know, 100 years ago,
people thought the whole universe was like pipes
and cogs, and these days we think it's all computers.
So it's possible, but we also have zero evidence that it's a simulation and dark chocolate
tastes really, really good anyway.
That's true.
And so does pad see ew.
I'm hungry today.
This is why we shouldn't record the podcast just before lunch.
That's right.
I'm so glad that I have simulated taste buds and smell receptors so that I can enjoy all of those
things very much.
But it is fascinating to think about where your consciousness, where your experience comes
from, why it is that you even have one, why you can like or hate pad see ew, whether or not
that's something that comes from the meat in your mind or whether it's just the information
circling through those neurons, which could be uploaded to the cloud.
Yeah.
I mean, I think you'd have to be a machine to not like Thai food in general, personally, but I
could be wrong.
But I'm excited that, so you and I, we get a lot of emails from listeners.
You can send those emails to questions at Daniel and Kelly.org.
And lately we've been getting a lot of questions from folks who have been asking about consciousness.
I think we've been getting like one a week for a while, and we decided it was time to jump into the consciousness question. And you lucked out: you happen to know the perfect guest.
I do. There's lots of really smart folks here at UC Irvine and I'm good at Googling and finding them and persuading them to come and talk to me on the podcast.
I have a lot of fun and this is a topic that's really close to my heart.
I've been thinking about it for a long time, and also thinking about whether it's something you really can ask scientifically.
You know, science is wonderful and powerful.
And of course, we are advocates for science here on the podcast.
But just because it's a tool that does let us build knowledge about the universe
doesn't mean it's a tool that can answer every question, right?
You can't answer the question like, should I get out of bed in the morning?
Why do people eat white chocolate anyway?
Right?
Not every question is scientific.
And so there are things about the universe that we can probe with science and there are
limits to what science can answer.
And I've always wondered how much we can understand about consciousness itself using science.
So I was really excited to talk to a cognitive scientist who thinks about this all day in a very serious way.
Yeah, I'm excited that consciousness is sort of at the interface of science and philosophy, which makes it a really fun topic to think about.
And so I had a lot of fun today with Megan Peters chatting with her about this topic.
Before we dive into this wonderful conversation, we were curious what folks out there thought about their own consciousness, what ideas people have for why they even have an experience, how these few pounds of goo in your brain somehow generate all of this joy and frustration that we experience.
So I asked our cadre of volunteers to chime in on the question, how does consciousness emerge?
If you'd like to provide answers for future episodes, please don't be shy, write to questions at daniel and kelly.org.
So think about it for a minute. How do you think consciousness emerges?
Here's what our listeners had to say.
I think we have to really think about what consciousness might mean
before we can figure out where it comes from.
But if I had to make a guess, I'd say consciousness is just energy
sliding into different forms as it rolls along.
From our brain juice.
Exponential, exponential extension of the emergence of life.
I don't think anyone knows.
Human consciousness is an intersection of the more mechanistic functions
of the human brain and the created soul.
I think consciousness needs you to have both theory of mind
and the complexity to apply it to yourself.
Very slowly on a Saturday morning
after a night out on town with several coffees.
Consciousness is fundamental
and that we are as humans somehow tapped into it.
Wow, consciousness, that's a biggie.
Well, I think it's magic.
As the parent of twins, you think,
I'm pretty much useless.
I'll say that somehow it involves interaction with others.
Consciousness emerges when the penguin waddles in.
I've always leaned towards the idea that consciousness is an illusion.
I believe consciousness emerges when the brain's electrical fluid or plasma interacts with the
noise and dendrites in the brain things.
The brain itself is a sensory organ that can sense its own processing.
A pint of strong, fresh coffee is a necessary but not sufficient condition.
It's a very complicated dance in the brain.
It's a very slow process.
I think consciousness takes many, many forms from things that the brain is doing.
Neurons firing, reaching a critical consistency and interaction number.
I don't think we can know how consciousness emerges until we know what consciousness is and if we have it.
All right, we've got coffee, penguins, magic, all kinds of amazing things in there and a large diversity of answers.
Mm-hmm. And there's a thread here I've noticed a lot of people suggesting that somehow it arises from the complexity of the brain, neurons reaching a critical number or, you know, all these interactions. To me, that sort of frames the question, but doesn't really answer it, right? Like, why is it that a complex network of cells can have a consciousness emerge, regardless of the complexity, right? It's not obvious to me that complexity is enough. And that's really, I think, the focus of the question for today's episode.
I would personally love to know the evolutionary reason why we have consciousness.
How does it benefit us?
And would we expect to see other animals to have it as well?
Yeah.
And anyway, there were so many exciting things we talked about with Megan today.
Yeah, there's lots of really fun questions there.
Like, were there other species of humans which actually were more intelligent but didn't survive because our ancestors were more warlike or more cannibalistic, right?
Or if you ran the Earth experiment a thousand times, how often would you even get intelligence, or more or less intelligence?
All right. Is this a fluke or is it a common outcome? Man, I'd love to know the answer to those questions.
And you're also getting into some of the complexity there because you were calling it intelligence and where does intelligence end and consciousness begin and what is the difference in those things?
And it gets complicated pretty fast. It does. And fortunately, we have an expert to help us dig through this. So let's jump right into our conversation with Professor Megan Peters from the Cognitive Science Department here at UC Irvine.
All right, so it's my pleasure to have on the podcast, Professor Megan Peters.
She's an associate professor here at UC Irvine in the Department of Cognitive Science.
She has a PhD from UCLA in Cognitive Science, and her research aims to reveal how the brain represents and uses uncertainty.
She uses neuroimaging, computational modeling, machine learning, and neural stimulation techniques to study all of this.
And most importantly, she agreed to come on the podcast and talk to us about what is consciousness and answer all of our questions and fend off Kelly's poop references.
Good luck.
Thank you both for having me.
It's really a pleasure to be here.
As an aside, my PhD is in psychology with a focus on cognitive and computational
neuroscience, if that matters.
So you can cut this part out if you want to redo that intro.
No, we'll find out if that matters.
We'll see.
Yes, absolutely.
Great.
So let's dig right in.
I want to know, and this is maybe the hard.
hardest question we're going to ask you all day, how do we define consciousness? We're going to be
talking about it. We're going to be arguing about it. We're going to be saying, do rocks have it,
do dogs have it? But it's kind of slippery if we don't even know what it is we're talking about.
So what is consciousness? How do we define this thing that we all sort of know intuitively but have
a hard time describing? Super important question. And leads me into one of the first kind of
technical things that we'll talk about today, which is the distinction between are the lights
on, but is anyone home, essentially. So we talk in the fields of psychology and in the
philosophy of mind about the distinction between what's called access consciousness and phenomenal
consciousness. So these are terms that the philosopher Ned Block introduced a while back.
And the idea is that access consciousness is about something like the global availability of
information in a complex system to allow that system to behave usefully in its environment,
to survive, to seek goals, to seek rewards, to get food, to not be eaten by something.
And the distinction then is between, you know, access consciousness, which is information goes
all over in the brain or in the mind if there isn't a brain present.
But the difference then between that and phenomenal consciousness is that the phenomenal
consciousness has to do with the phenomenology of the experience that the observer is having.
So I'm going to break that down.
That really just comes down to this something that it is like to be you, the qualitative
experiences that you have of the world, the fact that pain hurts.
It isn't just a signal like your Tesla would send to itself like damage in my right front
tire or something like that.
It's more like it has qualitative character to it.
There's something that it's like to be.
be in pain. If I whack you in the leg, it's not just a signal that I've damaged the tissue in your
leg. There's more to it than that. So are you saying that a Tesla has access consciousness
because it notices damage in its tire, but it doesn't feel pain, so it doesn't have phenomenological
consciousness? Or have I misunderstood? I wouldn't even go that far to say that a Tesla has any
sort of access consciousness whatsoever. I think that that's a higher bar there. But I think that
the distinction is important, right? We can imagine a system that has all of the hallmarks of
of access consciousness, like a fancy future Tesla.
Okay.
You know, that has all of this global availability of information.
It enables flexible, adaptive behavior.
It can change based on its context.
It's not just going according to its programming, you know, that kind of thing.
And yet there's nothing that it's like to be that fancy future Tesla.
It's just a zombie.
It's just engaging in behaviors that are useful for that organism, but there's no one home.
So the someone being home aspect is also really important.
because we can imagine in like, you know, not a future robot scenario, but just a medical situation
where you've got a person who's in a coma or they're in a persistent vegetative state
and they might wake up seemingly and have sleepwake cycles and maybe respond to external stimuli.
But the critical factor in deciding whether, you know, someone's home, whether to keep them on life support,
is whether there's anything it's like to be them inside.
And the flip side is also really important in the context of medical science because even if they don't exhibit any of the outward signs or symptoms of being conscious, of having access to that available information, of being able to behave in their environments, someone still might be in there.
They might be locked in.
And so it's this presence of phenomenal experience, of someone being home, so to speak, that I think is the important thing to remember when we're talking about consciousness in the context of biological systems or artificial ones.
So I'm still trying to wrap my head around the two different kinds of consciousness.
Is there a kind of organism or a situation a person can be in where they would have one but
not the other to help me sort of differentiate?
Well, for humans, the idea is that presuming that you don't subscribe to philosophical
zombieism, that like you have no idea that I'm conscious, right?
I exhibit all the hallmarks, but you have no idea.
But presuming that you don't want to go down that rabbit hole.
I do actually, but in a minute.
In humans, it seems like they go hand in hand, right,
that if you have access consciousness, you have phenomenal consciousness
in general as an awake behaving organism.
But there might be cases in specific scientific experiments
where you can reveal symptoms or cases or evidence of access consciousness
in the absence of phenomenal consciousness.
So there's some very specific psychological experiments
that suggest that these are separable entities.
One classic example is a task that was actually attributed to
and developed by a cognitive scientist here in my department, George Sperling.
So this is the classic Sperling task where you show someone an array of letters.
There's like five letters in a row and there's five rows of letters.
And you flash it very fast and you ask people to kind of give their impression of the overall array.
Like, did you see it?
Did you feel like you got the whole thing?
And people say, yeah, I feel like I got the whole array.
That's fine.
But then when you ask them to report the whole array, they can't.
But when you ask them to report a specific row, they can.
It feels like there's a distinction then between this feeling that you've got phenomenal experience of the whole array, but your access might be limited.
And so Ned Block has famously called this phenomenal consciousness overflowing access.
This seems to me sort of the crucial distinction, right?
because access consciousness is something we can understand sort of on a fundamental level,
like we can trace signals into your eye and watch those proteins fold and then up the optic nerve
and into your brain, but we can't know whether anybody's there like experiencing or what that
experience of seeing a red photon is or whether my red is the same as your red and all this sort
of dorm room philosophy kind of stuff, right? And so access consciousness is sort of more accessible
scientifically than phenomenological consciousness, which is more philosophical. Is that fair?
I think that's a relatively fair characterization, and that there are a lot of kind of current
scientific theories of consciousness that purport to be a theory of consciousness, and most of
them, really, when it comes down to brass tacks, end up being about access consciousness,
because the phenomenal part is really hard to get at, as you said. I think that there are some
approaches that might be promising in this vein. So you could look,
for neural correlates or patterns of neural activity that are associated with reports of phenomenal
experience. So, like, I can create conditions where I show you the same stimulus over and
over and over again. And so the early parts of your visual brain are responding kind of similarly.
I mean, there's noise and there's variability and so on. But if I flash the same thing at you
over and over and over again, I'm going to get a consistent kind of pattern of responding in the
back of your head, which is the early visual cortex. But you might have fluctuations in your
subjective experience of that stimulus. Sometimes you feel like you see it. Sometimes you feel like
you can't. So to the extent that I can hold most of the stuff in the back of your head constant,
and I can measure that in relation to flashing something at you over and over and over again. But then I
look for how your subjective experience fluctuates from one moment to the next. I feel like I saw
that strongly. I feel very confident that I got that right. I feel like that was nothing at all.
I didn't really experience anything. Then I can go look for neural correlates that are
co-varying with the subjective experience that you're reporting to me. There are a lot of problems
with that too that maybe you don't have perfect access to your own subjective experiences and your
reports are spurious and blah, blah, blah. But I think that there are ways that we can go about
trying to get at how the brain constructs or supports or represents these subjective experiences
that are different from just how your brain processes information about the external environment.
Right. And I do want to get into those experiments. But first, I just want to make sure we're
like totally clear on these definitions. And I think the example that you were a little dismissive
of a minute ago is actually helpful, at least to me, to clarify what we're talking about.
And that's the example of philosophical zombies, right?
The idea here is, could you take Daniel and replace him with some machine, biological or whatever, that replicates all of my actions and seems, the way I do, to be conscious, but there's nobody home, right?
There's nobody experiencing the red and eating the pizza and whatever.
It's just, but it seems like, and it reacts exactly the same way.
So philosophical zombies, as I understand them, and tell me if I'm wrong, conjecture that it's possible to build something like this, which means, therefore, that the phenomenological consciousness is totally
unmeasurable, right? That there's no way for us to know. We have, as you said, we have to sort
of trust your first person reports about your subjective experience, which makes it difficult
to do any actual science, right? Is that a fair characterization of philosophical zombies
if I missed something? Oh, I have like five things I want to say. Let's make sure I get to all of them.
Okay, so first, the idea of philosophical zombies has been hotly debated in the philosophical literature
for a very long time. And there are a number of philosophers who say, yes, this is absolutely
totally reasonable to posit that this could happen. And then there are other people who say,
no, like, that's not really like a reasonable assumption. So, you know, go read Dan Dennett,
if you want to hear about all of this stuff. But I think that you've touched upon another important
point, too, which is this idea of testing for whether someone is in there or not. And how we don't
currently have the capacity to do that, even in other beings that look and behave precisely
exactly like you do, like me or like Kelly.
Or chemists or like, are they really in there?
Do they actually like chemistry?
Is that possible?
They have to be a machine.
Right.
Are they just lying?
Yeah.
We don't have a way of testing it.
We don't have like a consciousness-ometer.
I can't pull out my, like, hair-dryer-looking consciousness-ometer and point it at you and be
like, are you conscious?
Like, we don't have that.
How do you know what it would look like if we don't even have the device?
Why does it look like a hair dryer?
I think she's right.
I think at some point,
Axel Cleeremans posited that it was going to look like a hair dryer,
and it's kind of propagated throughout my thinking since then.
But that seems about right, right?
It's kind of like a speedometer, like the one, you know,
a cop pulls out on the side of the road.
Too much consciousness.
Here's your ticket.
But we don't have those.
And in fact, like we wouldn't even know how to build one, right?
We don't even know what the relevant metrics are.
Do we care about measuring brain stuff?
or is that kind of spurious?
But one of the arguments kind of against philosophical zombieism
is the supposition that consciousness is not just this epiphenomenon,
that it's not just this thing that kind of comes along for the ride
over and above an organism behaving usefully in its environment
and flexibly adapting to different conditions
and being intelligent and seeking goals
and not getting eaten by predators,
that consciousness itself serves a function
that you cannot have all of that stuff, all of that useful stuff for staying alive, without consciousness.
And this is an arguable position. It's not a fact. But the folks who are going to argue that
consciousness serves a function say there is a function of consciousness: it's not just that there are
functions for creating consciousness somewhere in your head, but that consciousness itself is useful,
that it allows some evolutionary adaptive advantage.
That is certainly a defensible position.
And from that context,
you actually couldn't have philosophical zombies at all
because it is actually impossible
to get all of those other behaviors
and cognitive processes and all of that stuff
without the consciousness bit.
I see, if that's true,
then the interaction with people
who appear to be conscious
is evidence that they are actually conscious.
Yeah.
But these are all like,
arguments that we can have, right? There's no way of saying I'm right and you're wrong depending on which
position you're holding. I think about this when I interact with my dog, you know, because I feel like
my dog is in there, right? Like my dog knows me, my dog loves me. I love my dog. My dog understands that
I love him, that I'm nice to him. It's hard to imagine that there's nobody at home in my dog.
But you know, this whole concept suggests that it's possible for there to be a machine, effectively,
and I say a machine just to indicate that there's nobody home, though I don't actually
know if AI could have phenomenological consciousness. But it's hard to imagine that you're not actually
in there, Kelly's not actually in there, and I'm the only one in the universe who's experiencing this.
But I can't actually prove it, right? So it really is a fundamentally important point, even though
it sounds absurd on the face of it, that we could be the only one aware, that we
can't actually know if anybody else is in there. We feel people loving us, you know, and their
experience and their pain reflects ours, but we can't actually know. And I think that's a
fundamentally important point to hold on to, even though it feels ridiculous. It is. I mean,
the easiest solution here is that we all have consciousness, right? Like, that would be the most
parsimonious explanation. And it's easy to make that leap because there's so many physical
information processing similarities between you and me, right? Our brains are not exactly the same, but
they're pretty darn close. And so we can make the leap maybe to talking about great apes or monkeys
or dogs or other vertebrates, other mammals, that kind of thing. I have a harder time when we get down
into, you know, insects. Like, I don't know that a bee larva really... But I don't know, like, people
who live in Virginia, for example. That's just, like... I'm not going there.
But the idea of a philosophical zombie becomes maybe a more useful exercise
from, like, an empirical science standpoint, or even a philosophy standpoint, when we start
talking about entities that are so fundamentally different from us. So not just you versus me,
or us versus your dog, or my cat, who, you know, is probably right on the border of it.
But no, I think he's conscious. I think there's something that it's like to be him.
But when we talk about octopuses, like those are really weirdly different creatures that are basically aliens, or when we talk about, you know, the potential for silicon-based systems in the future or alien species or those kinds of things, things that are so fundamentally different from us, then it starts to be like, okay, well, when we don't have the kind of one-to-one mapping between the structure of my biological robot that I drive around and your biological robot that you drive around, right?
When we don't have that very similar structure, then it becomes maybe a little bit more useful to talk about, like, can you have all of these behaviors and all of these cognitive capacities in the absence of someone being home?
And we don't know the answer, but it at least gives us more than just kind of a philosophical, as you said, dorm room argument.
They're like, I don't know that you're in there.
But is that a difference that makes a difference right now? Probably not.
All right. So we're going to take a break. And when we get back, we're going to dig into octopi a little bit more.
The U.S. Open is here.
And on my podcast, Good Game with Sarah Spain, I'm breaking down the players from rising stars to legends chasing history.
The predictions, will we see a first-time winner, and the pressure.
Billy Jean King says pressure is a privilege, you know.
Plus, the stories and events off the court and, of course, the Honey Deuces, the signature cocktail of the U.S. Open.
The U.S. Open has gotten to be a very fancy, wonderfully experiential sporting event.
I mean, listen, the whole aim is to be accessible and inclusive for all tennis fans, whether you play tennis or not.
Tennis is full of compelling stories of late.
Have you heard about Icon Venus Williams' recent wildcard bids?
Or the young Canadian, Victoria Mboko, making a name for herself?
How about Naomi Osaka getting back to form?
To hear this and more, listen to Good Game with Sarah Spain,
an Iheart women's sports production in partnership with deep blue sports and entertainment
on the Iheart radio app, Apple Podcasts, or wherever you get your podcasts.
Presented by Capital One, founding partner of IHeart Women's Sports.
So for access consciousness, and you've sort of hinted at this already, the definition included
like escaping predators, having different motivations, that to me does sound like it encompasses at
least all vertebrates. So is it safe to say then that the consensus is that all vertebrates
at least have access consciousness? I don't want to speak for any big scientific community
on anybody's behalf. I would say that my personal sense is that, yes, you're going to be
hard-pressed to find people who would die on the hill of saying that cats are not conscious.
I think that would be hard to find. Yeah. So you mentioned that octopus, they're smart, but their
brain is different than ours. Would you say that they have phenomenological consciousness?
Because it seems like someone's in there when you interact with them? I have no idea. I have no
idea. But I think that, you know, saying that we have no idea is reasonable under this situation.
that I don't think that we have strong empirical evidence either way,
because that strong empirical evidence is predicated on this whole conversation
that we've just been having about what would that evidence even look like?
And how do you separate evidence for phenomenal consciousness
from evidence for not just access consciousness,
but just pure intelligent behavior, right?
Never mind, access consciousness and this global availability of information thing.
It's really just that octopuses, octopi.
Octopuses are very intelligent.
They're very intelligent, clearly.
They can be even sneaky, you know, they get out of their cage and they go open something and steal the food and then they go home so that they don't get in trouble.
So they're clearly highly intelligent creatures.
I think that my margins of uncertainty, my cone of uncertainty is just so wide there that I have no idea.
So how do you study the difference between intelligent behaviors and consciousness?
We'll just stick with access consciousness in animals
in the lab? Yeah. So in animals, that's even harder, too, because how do you query the phenomenal
experience of a rat? It's kind of hard. In my line of work, we focus on one particular kind of
subjective experience, which is that of metacognitive uncertainty. So one of the things that we
ask people to do, like people now, but we can do this potentially in rodents and monkeys as well,
although I don't have those folks in my lab. I only have people. We'll ask you to do some task,
like we're going to show you some stuff on a computer screen, ask you to press buttons, tell us what you see, and then we'll ask for a subjective report on top of it. How sure are you that you got that right? How confident do you feel in your assessments? How clearly do you think you saw the thing? And some of those questions we can design experiments to ask in animals too. And so those are kind of more about the subjective experience, especially if we do manipulations to the stimuli or task
such that the animal's behavior is basically exactly the same between one condition versus another condition.
But their subjective reports differ.
So that tells us that we're trying to get at something that has to do with phenomenology,
or maybe something that's facilitated by global access of information,
rather than something that's just kind of the zombie part of their brain,
like, you know, almost reflex-like responding to the stimuli out there in the world.
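For readers who like things concrete, the logic of that paradigm can be sketched as a toy simulation: a hypothetical signal-detection observer whose objective accuracy is matched across two conditions while the confidence reports differ. The model and every parameter here are invented for illustration, not taken from the lab's actual protocols.

```python
import random
import statistics

random.seed(0)

def run_trials(n, d_prime, conf_noise):
    """Simulate a two-choice detection task.

    Each trial draws internal evidence for the correct side; the choice is
    the sign of the evidence, and the confidence report is its magnitude
    plus independent noise (a made-up toy model, not any published one).
    """
    results = []
    for _ in range(n):
        target = random.choice([-1, 1])
        evidence = target * d_prime + random.gauss(0, 1)
        choice = 1 if evidence > 0 else -1
        confidence = abs(evidence) + random.gauss(0, conf_noise)
        results.append((choice == target, confidence))
    return results

# Two conditions with the same sensitivity (so matched objective accuracy)
# but, by construction, different noise in the confidence report.
a = run_trials(20000, 1.0, 0.1)
b = run_trials(20000, 1.0, 2.0)

acc_a = sum(c for c, _ in a) / len(a)
acc_b = sum(c for c, _ in b) / len(b)
var_a = statistics.pvariance([conf for _, conf in a])
var_b = statistics.pvariance([conf for _, conf in b])

print(f"accuracy: {acc_a:.3f} vs {acc_b:.3f}")            # roughly equal
print(f"confidence variance: {var_a:.2f} vs {var_b:.2f}")  # clearly different
```

The point of the manipulation is exactly the dissociation described here: identical task performance between conditions, but measurably different subjective reports.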
But the distinction between is that subjective report facilitated by globally available access to information, or are we actually tapping into phenomenology? That's also kind of sticky when you get to talking about rodents and that kind of thing. But that at least allows us to say, never mind all of phenomenal experience or subjective experience or consciousness, I'm going to focus on this bit, just this one thing that I can operationalize. Very classic for a psychologist, right? I'm going to put you in a room and ask you to look at a computer screen and press one of two buttons for an hour. And that's what you're going to do. But that's how we try to get at this. I think the
example of an octopus really puts the finger or the tentacle on the question of like what it's like,
which to me is the core of the question. Like, you know, the mechanics of how information is accessed or
stored or whatever, that's fascinating and that's good science. But to me, the real question,
which I think is what people call the hard problem is how you get to have an experience from something which is, you know, just made out of stuff.
Like my desk doesn't have an experience.
My computer, I think, doesn't have an experience.
Most of the universe doesn't have an experience, but I have an experience and you have an experience.
And probably, for example, an octopus has a very different kind of experience.
It's got like eight little brains arguing about what each leg will do.
And, you know, an alien out there with a different kind of sense.
you know, a tongue that can taste quantum electrons or something might have a completely different
kind of experience. And to me, the hard question is, you know, where does this come from,
this phenomenological aspect? Is it fundamental to matter? Is it somehow emergent in some way we don't
yet understand? How do you pose this question? Is that the central question you think? And how do you
think it's best asked? I think that's absolutely one of the central questions is, you know,
how is it that there's something that it's like, that the conscious experience arises from matter?
That is, as you said, that Dave Chalmers' hard problem, which is, you know, there seems to be this
fundamental disconnect between patterns of brain response and the subjective experience.
It's a very different natural kind.
It's a very different kind of stuff, right?
The subjective experience stuff is unlike any other kind of stuff in biology.
What do you mean by that?
That's the hard problem.
That's the idea, is that it's not clear how you would get something like subjective experience, whatever that is, out of
the interactions between physical neurons. The bridge, you know, might be something like,
okay, well, one of the things that physical neurons do in talking to each other is similar
to how computers do this, is that they create information. And so now we're kind of moving
from physical space into information space. But nevertheless, we don't know how to even get
from information to subjective experience. We can call them the same thing and say, ha-ha, we're done, like, moving on with our lives, but that doesn't feel satisfying.
And there are people who think that the hard problem is not a problem at all,
that subjective experience might not even exist at all, or that our belief that subjective experience
exists is an illusion. So there's like a whole, this is really complicated, and we could talk
for hours and hours about this. Who's experiencing that illusion, though, in that case, right?
Yeah, exactly. But if you want to read about illusionism from a very deep and powerful perspective, I'm going to mention him again. Go read Dan Dennett. He wrote very extensively on this topic. So the fundamental question is, you know, how do we get something like consciousness out of something like brains? And is the substrate important for creating the subjective experience and the shape and nature of that subjective experience? Or is there kind of one type of subjective experience
that all kinds of systems might create, regardless of their physical substrate?
And you might maybe think that the latter statement seems a little strange, that, like, how is it that an octopus could create a subjective experience that's very similar to mind because our substrates are so fundamentally different?
But we do see evidence of convergent function in evolution in terms of things like digestion, where, like, you've got lots of very different kinds of systems that all accomplish the same computation.
or function of digesting stuff.
And so it is possible to think that different substrates
might accomplish the same kind of function
in creating the same kind of conscious experience.
Are you saying that the fact that like birds and bats
evolved flight separately,
but fundamentally it's the same thing,
suggests that different kinds of wet matter
could generate the same kind of subjective experience?
It is possible.
I don't know.
If you think that consciousness serves functions,
then there might be one kind of subjective experience that best serves that function.
The only thing I was going to say beyond that, though,
is that I'm more sympathetic to the idea that there's probably different kinds of subjective experience
across different substrates.
It feels like the burden of proof on saying that all subjective experiences are kind of the same
across lots of different systems would probably be on the people who are making that claim.
I think it's much easier to say the kind of subjective experience you have depends on the
substrate that's generating it. How else to explain how some people actually eat white chocolate
and claim to enjoy it, right? I mean, it's impossible to understand it. Got to be maladaptive.
Hey, I like white chocolate. Oh, oh, sorry. That's all right. I'm from Virginia. Daniel's just going
around insulting everybody today. All right, so we were talking to Joe Wolf the other day. She's an
evolutionary biologist who studies convergent evolution. I feel like if she were here, she would be
explaining to us about how if you look at traits like, you know, the evolution of the crab body
plan. You'd really love to like have an evolutionary tree where you can count how many times
this thing popped up and how many times it disappeared to try to understand like its adaptive
value, what kind of preconditions you need before this thing comes into existence. Could we ever have
that in the study of consciousness or will we never be able to know like does an octopus have
access consciousness? Will we ever be able to get there? Great question. As of now, the path is very
murky to me because, as we just talked about before, like, you can't pull out your
hair dryer consciousness-ometer and point it at things. And the challenge is really because the thing
that we're studying is by definition unobservable by anybody except for the observer who is
experiencing the consciousness. That is the definition of what we're talking about. And so it's
really very different from anything else that we study in science. And there's a second wrinkle there
also, not only is this something which we can't observe or measure, we have to rely on somebody
reporting it, but it's also filtered through our own consciousness, right? Like, we sort of assume
consciousness when we do science. We're saying we have hypotheses. We do experiments. So everything is
filtered through sort of like two consciousnesses when we're even like talking about this. And
maybe you're about to answer this question, but like, is this something we can probe scientifically
or is it limited to philosophical discussions on the roof with banana peels? Oh my gosh. There
we're like five questions buried in that. Okay. So is this something that we can study
scientifically? Yes, I think so, but I think that we need to have a lot of help from philosophers.
So a lot of the work that I do and a lot of my scientific friends are not actually
scientific friends only. They are philosophers as well. And I think that there's a lot of value
that we get from that. Yeah, and I didn't mean to suggest that philosophical exploration is
like not valuable. It's absolutely like, you know, the wonderful cousin of scientific exploration.
and fundamentally important, but it's also different, right?
It gives different kinds of answers.
Yeah, it does, but I think that we need to be informed by our philosophical friends down
the hall in understanding whether the experiments that we're designing are really getting
at the target that we think we're interested in studying.
And so there's a lot of work out there that's like, okay, I'm going to tell the difference
between whether you're likely to wake up from a coma or not.
and that's really relevant in the clinical setting,
and it is so powerful and important that we do that.
It doesn't necessarily tell us about the experience that the person is having.
It just tells us kind of binary, whether it's there or not.
And we need that.
We need to have measures that will allow us to predict.
Is someone in there right now?
Are they awake?
Are they likely to wake up?
We need all that stuff.
But that doesn't really get at the fundamental question of this subjective experience bit.
And we also have a lot of studies out there that purport maybe to do research on conscious access, but really, if you change the question, you might start to be skeptical about whether they're targeting them.
So I'll give you an example, which is I as a psychologist, I put you in a room and I ask you to tell me whether you saw something or not.
And that's a subjective answer, right?
Like, did you see it?
Did you not see it?
Right?
Like, I'm asking you, are you consciously aware of this stimulus?
And then I could go measure brain responses or whatever that go with cases when you said you saw it versus when you said you didn't see it.
And I say, okay, now I find like the neural correlates of consciousness.
But now imagine that I replace you as the human observer with a photodiode.
I could do exactly the same experiment.
And I would never conclude that that photodiode has conscious experience, probably.
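The photodiode worry can be made concrete with a tiny sketch: a hypothetical detector and a hypothetical button-pressing observer that produce report streams of exactly the same shape, so any contrastive "saw it versus didn't see it" analysis runs identically on both. Everything here is invented for illustration.

```python
# Both "observers" yield reports in exactly the same format, so nothing in
# the report data itself licenses a conclusion about conscious experience.

def photodiode_report(luminance, threshold=0.5):
    # a simple physical device crossing a detection threshold
    return "saw it" if luminance > threshold else "did not see it"

def human_report(luminance, threshold=0.5):
    # stand-in for a person pressing a button; same input-output shape
    return "saw it" if luminance > threshold else "did not see it"

stimuli = [0.1, 0.7, 0.4, 0.9]
print([photodiode_report(s) for s in stimuli])
print([human_report(s) for s in stimuli])
# identical report streams: a "neural correlates of consciousness" pipeline
# keyed only on reports could be run on either without complaint
```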
So I think that there are a lot of challenges here, and the philosophical fields, not only philosophy of mind but philosophy of science too, I think really have a lot to say about how we're designing experiments and how we're interpreting the results.
There was another question that you asked earlier in that stream, but now I don't remember
what it was.
Maybe it's about like, can we ever develop a test for consciousness?
So that's another thing that we should talk about.
So there's been some work that I've contributed to recently.
where we're saying, well, we don't have a consciousness-ometer, and we wouldn't even know
how to go about building one. We don't even know whether pointing it at behavior or brain or something
entirely different, like, I don't know, like auras, some crazy other thing. We have no idea
what to even point it at, never mind how to build it. But one way that we might make progress
is to collect all of the potential consciousness-ometers that have been built over the years
in terms of behavioral signatures of awareness in humans and neural response patterns and so on,
collect them all, and then make a decision about which ones are applicable.
First, how they all correlate with each other in terms of predicting whether someone is in there
and what their experience is, and then whether they're applicable to a neighboring system.
So I'm not going to jump straight to octopuses or Teslas or aliens, but I might jump to
young children. Because most of these studies and these metrics are developed on adults. And so I'm going
to say, okay, well, I'm going to take all this stuff and then I'm going to point it at young children,
which I also presume are probably in there. You know, when they stub their toe, they cry. Like,
they indicate that they are experiencing pain. And I'm going to see how much those metrics now
continue to correlate with each other and continue to make useful predictions about whether that
subject is aware of a stimulus or not or, you know, wakefulness versus sleep, that kind of thing.
And then if that seems to be okay, then I'm going to say, okay, now I'm going to point them maybe
at great apes. Now I'm going to point them maybe at New World Monkeys. Now I'm going to point
that, you know, so we can kind of go down the evolutionary hierarchy, so to speak, and say, well,
the degree to which a particular candidate set of consciousness-ometers is applicable to a particular
system is defined by their similarity to us. And then we have to make decisions about what metrics
of similarity to use. But you can see that at least conceptually from a high level, this might
be a path forward. I have to admit, I'm not convinced. Like, it feels to me like it's just making
fuzzier what we're measuring about what's going on mechanistically inside people's heads.
But it doesn't actually tell us anything about the first person experience, which is almost
because of the way we defined it, infinitely inaccessible, right? Like, there's no way for me to share
my experience with you other than manipulating my mouth or whatever and filtering it through
your consciousness to your first person experience. So it feels to me like it's something that's
completely inaccessible. And the only way forward, I'm guessing, is to like try to dig into this
emergent behavior and see if we can make a bridge between the microscopic details that we do
understand and somehow come out mathematically with a realization of how this first-person experience
has to emerge. I'm skeptical, I guess. Well, I want to ask you about the mathematical thing,
because why is math the answer here? Because I'm a physicist. Because you're a physicist.
Okay, yeah, sure. I knew the answer to that question before I asked it. This isn't a zero-sum game.
We don't have to do one or the other. I think that an analogy that a lot of folks in consciousness
science like to use is that of early investigations into the nature of life and vitalism.
And so this idea that we had to discover this very specific fancy thing that was almost magical
in nature that was like, why is this alive and why is this not alive? And like let's maybe do some math
or maybe like do a bunch of empirical experiments to discover like the life force or the vitalism,
like the lifeness there. Then over time we discovered that it turns out that life is just kind of a
collection, it's like a bag of tricks, right? And the boundaries are maybe a little bit fuzzy,
like, what are viruses? Are those alive? I don't know. So maybe there's an analogy here,
which is that if we keep pushing on multiple angles, that there might be a convergence of
approaches and information and evidence that will reveal that this hard problem is just going to go
away. We don't have to discover a mathematical transformation or an emergent property or anything
like that, that by better describing the system, we will discover that that problem completely
dissolves.
I don't know, but it's possible.
And we saw, we have historical examples of cases where something that seemed very mysterious
and seemed like an emergent property has now been transformed into a series of really
beautiful descriptions of how the system is working.
And maybe that will happen with consciousness too.
All right.
Well, my consciousness needs a break.
from all these really heavy but amazing ideas.
And when we get back, I want to talk to Megan
about some of the theories people have
to explain these deep mysteries.
All right, we're back, and we're talking to the apparently conscious Megan Peters,
who tells us she is inside her body and driving it like a meat machine.
And she's an expert on these questions of consciousness, so we should listen to her.
And we've been talking about this sort of hard question of consciousness,
how your first person experience is somehow generated from the meat inside your head
or if you're an AI and you're listening to this, the silicon inside your chips.
And to me, this question of emergent behavior is really important across science
and especially in physics, you know, where we see so many examples where we understand
the microscopic laws and then you zoom out and you need different laws.
You know, like we understand how particles work, but then you zoom out and you need fluid
mechanics and those laws are very different, but still applicable. And so it seems to me like
there might be some progress to be made if we can somehow tackle this emergent question or
think of it from this prism. But Megan, tell us what are people doing? What are the sort of current
leading theories of answers to the hard problem of consciousness? Great question. So yeah, there's a number
of theories that kind of bridge between philosophy and neuroscience. So, you know, ultimately a lot of
these theories are saying, the thing that we know is conscious is us. And so we're going to
study consciousness in us because that makes sense. And we can't go studying consciousness in
rocks because we don't know that they're conscious. And so that would kind of be a circular
argument. So these theories are kind of bridging the gap between philosophy and neuroscience
and psychology. And they come in a number of different flavors. And we are touching upon also
this difference between access consciousness and phenomenal consciousness that we talked about before.
But let's assume that we can study consciousness scientifically.
What do those theories look like?
So probably the most influential theory has been the global workspace theory.
So Bernard Baars and then Stanislas Dehaene started this idea that consciousness is about the global
accessibility of information in kind of a centralized processing space.
So you have these different modules that either take in information from the external
world through your sensory organs, or they have other functions like memory storage and recovery. They have other functions like integration of different sensory systems,
information, that kind of thing, executive function, decision making.
Sort of mental analogies of organs, right? The way, you know, like your liver has a function and
your stomach has a function. Yeah. Think about them as modules is typically how they're
described, you know, encapsulated little modules. But then these modules share information
with a central global workspace where there's a translation
that happens between whatever representation is happening in that module
of the relevant information in that module,
and then it gets pushed into a global workspace.
And if it makes it into that global workspace,
it's available to all the other modules for processing.
So it can influence the processing in each of those other modules.
And so the idea is that this is kind of a computational level theory,
where this global availability of information,
then facilitates goal-directed interaction with the environment,
run away from that thing, don't get eaten, do eat that thing, etc.
And that we can also see hallmarks of this global broadcast
or global availability of information in the brain,
where when you have cases that someone becomes aware of something
because the signal is strong enough or so on,
that you actually see all of the information propagate throughout the brain.
You can see the information in, say, a visual stream of evidence not only land at the back of your head.
If you reach back and touch the back of your head, you find that little bump.
That's about where your visual cortex is.
It's called the inion.
So if you're not conscious, the information stays back here in your visual cortex.
And if you are conscious of that information, you can actually see with electrophysiology, with EEG, electroencephalography, with fMRI.
You can see that information travel forward and end up elsewhere. And so this global availability of information, both from a
neurophysiological standpoint and from a computational standpoint, seems very useful for an organism
and seems very related to, in us, whether we're conscious of something or not.
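The module-and-workspace picture just described can be caricatured in a few lines of code. This is a deliberately minimal sketch of the idea, with invented names and thresholds; it is not the global neuronal workspace model itself, only the broadcast logic it describes.

```python
# Toy of the global-workspace idea: modules hold local representations;
# only content that wins access to the workspace is broadcast back to
# every other module. All names and numbers here are illustrative.

class Module:
    def __init__(self, name):
        self.name = name
        self.local = None      # module-private representation
        self.received = []     # broadcasts this module has seen

    def perceive(self, content, salience):
        self.local = (content, salience)

class Workspace:
    def __init__(self, modules, threshold=0.5):
        self.modules = modules
        self.threshold = threshold

    def cycle(self):
        # Pick the most salient local content; weak signals never enter
        # the workspace and so stay "unconscious" in this toy sense.
        candidates = [m.local for m in self.modules if m.local]
        if not candidates:
            return None
        content, salience = max(candidates, key=lambda c: c[1])
        if salience < self.threshold:
            return None
        for m in self.modules:          # global broadcast
            m.received.append(content)
        return content

vision = Module("vision")
memory = Module("memory")
motor = Module("motor")
ws = Workspace([vision, memory, motor])

vision.perceive("snake!", salience=0.9)
winner = ws.cycle()
print(winner, motor.received)   # the strong visual signal reaches every module

dim = Module("vision2")
ws2 = Workspace([dim])
dim.perceive("faint flicker", salience=0.2)
winner2 = ws2.cycle()
print(winner2, dim.received)    # too weak: never broadcast
```

The useful part of the caricature is that "globally available" is an operational property you can check: after a broadcast, the same content shows up in every module's inputs.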
And this seems like a helpful way to sort of take apart what might be happening mechanistically
inside the brain the way you might like take a piece of code and look at it and be like,
oh, how are they organizing? Oh, it's the database and the access to the hash table, whatever.
oh, this makes sense, but does it get at the question of why is this thing experiencing itself?
Not necessarily, and that has been one of the criticisms of global workspace theory or global
neuronal workspace theory, which is the neural version of it, which is that it's kind of more
about access consciousness or maybe even just about global broadcast of information and not
anything having to do with the C-word consciousness at all. And so there have been other theories
that kind of compete with this one as well.
So one of them is that local recurrence,
so like kind of local feedback
within a given module,
specifically the visual cortex,
that the strength of that local feedback,
that kind of recurrent processing looping,
that that is something that somehow gives rise
to the experience that we have of the world.
And then there's another group of theories
that I happen to be partial to,
which is that there's an additional step
that needs to happen beyond broadcast into a global workspace or local recurrence or anything else like that.
And that additional step is that you've got a second order or higher order mechanism
that's kind of self-monitoring your own brain.
Why was higher order under scare quotes there for those of you who are listening?
Not scare quotes, but to indicate that this is a specialized term.
So that there are representations that your brain builds about the world, and we would call
those first order.
And then the representations that your brain or your mind builds about itself or its own processing
would be higher order, second order, higher order.
But the idea here is that let's say that you have information in these modules, it gets globally
broadcast into a central workspace, your brain has to kind of make a determination of
is this information that's available in the global workspace, is it reliable, is it stable,
is it likely to represent the true state of the environment?
So this is where you can see that I'm a metacognition researcher, too.
But it turns out that higher order theories like this that are about re-representation of
information that's available in some workspace or some first-order state, they actually
came from philosophy originally.
It didn't come from the metacognition literature originally.
So a philosopher named David Rosenthal and another one named Richard Brown, they're both in New York,
These are the folks who kind of pushed forward this idea of higher order representations being related to conscious experience, such that you are conscious of something when you have a representation that you are currently representing that thing.
I'll say it again.
Yeah.
So if I have a representation that consists of, I am currently in a state that is representing apple,
then I am conscious of the apple.
It's not enough to just have a representation of apple.
I need to also have on top of that a state in my head that is, I am currently representing apple.
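That re-representation condition is easy to state as a toy sketch. This is entirely illustrative; the state encodings are invented here, not Rosenthal's or Brown's formalism.

```python
# Higher-order idea in miniature: a first-order state representing "apple"
# is not enough; consciousness (on this view) also requires a second state
# whose content is "I am currently representing apple."

first_order = {"apple"}                        # states representing the world
higher_order = {("representing", "apple")}     # states about those states

def conscious_of(x):
    # conscious of x only when x is represented AND re-represented
    return x in first_order and ("representing", x) in higher_order

print(conscious_of("apple"))   # True: represented and re-represented
print(conscious_of("pear"))    # False: no state at all
first_order.add("pear")
print(conscious_of("pear"))    # still False: first-order state only
```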
But wouldn't a philosophical zombie also have that state inside their cognition somewhere?
How do we know that that actually generates the first person experience?
We don't.
I want to be hard skeptic on this.
No, you can be hard skeptic.
That's great.
Even these theories that say that you have a mechanism,
that says, I am currently representing this, or that representation is good and stable and likely
to reflect the real world.
They don't answer the phenomenal consciousness question.
Right.
I remember reading Dan Dennett's really fun book, Consciousness Explained, and I'll admit that
I was on a roof and I was smoking banana peels at the time.
But I found it compelling in the sense that it sort of changed the question, right?
Sort of like his theory, maybe, I'm sure you can describe it more accurately, you know, his multiple
drafts model. So it convinces you that your account of your own consciousness might be wrong.
You know, that there is no present moment. It's all just memories of the recent past that's
later on constructed to convince you that you were aware when you never really were. And there's
something I like about that theory because even though I don't believe it, it did make me think
differently about my own conscious experience. What do you think of that theory and are people still
taking it seriously? People are still taking it seriously. Yeah. And I think that we don't
have currently the empirical protocols or evidence to say that that's wrong. And this is why I want
to continue to push for the close integration between philosophy and neuroscience. This is another
one where I'm going to be agnostic. I don't like taking a stance on all this because quite
frankly, I think that anybody who says that they have solved anything about consciousness
is just full of it. Like there's no way. Because we just don't know, right? We've got a lot of
theories and we can build up empirical evidence in support of this theory or that theory and
this function of consciousness and that kind of thing. But ultimately, if you say that you've kind
of created the solution that your theory is the right one and you know that for a fact,
then sorry, like, I don't know what banana peels you're on, but I don't think that's useful.
Well, it was a bold title for a book, though, Consciousness Explained. Yeah. Yeah. Dan was a bold
guy. Yeah. Yeah. So a lot of the theories that you were talking about a moment ago were using
words that sort of remind me of computers and AI. How do we think about whether or not AI is
conscious, does it matter? Is that an interesting question? What do you think about that?
Great question. And I think that until, I don't know, 10, 15 years ago, this was a fun thought
experiment in science fiction. And now I think it's not quite so much anymore, right? Because now we have
machines that behaviorally really do pass the Turing test, which I assume we're all quite familiar
with, but just in case the test is that the machine has to convince a human observer or a human
player that it is a person. And we have machines that quite handily pass that under most
scenarios or most, you know, you get out your ChatGPT app on your phone and like it feels
very convincing that there might be someone in there, right? Until maybe 15 years ago, 10 years ago,
this was really science fiction and now I think it's not anymore. And the questions not only bear on
let's figure out the ontological truth of whether the AI is in fact conscious, but there
are also really strong implications, regardless of whether it's conscious or not, for what happens
if we think it is, and what happens if it is, but we think it's not, right? So there's like strong
moral and ethical considerations here. I'll kind of have two ways of answering this. One is from
the perspective of current theories of how we think consciousness arises from a like functionalism
perspective, which is that there is some brain or computational function that gives rise to
consciousness in some capacity. A lot of the things that we're talking about, you've rightly
pointed out, have direct analogies in computer processing. We can certainly build even a little
simulation that monitors itself, sure, that says, is this representation reliable or stable?
I can build a set of computer modules that then send information into a global workspace,
sure. In fact, maybe your iPhone
sort of does that already when you talk to
Siri as the virtual assistant, right?
And recurrency in like local feedback
or recurrent processing,
that's absolutely like recurrent neural networks
are a thing. They've been a thing for a long time.
So we have now all of these kind of hallmarks
from these theories that we can use
to build something that looks like a checklist.
That's like, if you've got an artificial system
that has all of these things,
well, I'm going to not say then it's conscious, but I'm going to say it might raise our subjective
degree of belief that it could potentially have consciousness.
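The two "checklist" hallmarks just mentioned, modules broadcasting into a shared global workspace and a self-monitoring check on whether a representation is stable, can be sketched in a few lines. This is a minimal illustration under invented assumptions (the module names, the stability threshold, and the variance criterion are all made up here), not an implementation from the paper under discussion.

```python
# Sketch: a self-monitoring check ("is this representation stable?") gating
# what gets broadcast into a shared global workspace.
import statistics


class GlobalWorkspace:
    """Shared blackboard that modules broadcast into."""

    def __init__(self):
        self.contents = {}

    def broadcast(self, module: str, value: float) -> None:
        self.contents[module] = value


def is_stable(samples, tolerance: float = 0.1) -> bool:
    """Crude self-monitoring: treat a signal as reliable if repeated
    readings vary little (low population standard deviation)."""
    return statistics.pstdev(samples) < tolerance


workspace = GlobalWorkspace()

vision_readings = [0.90, 0.92, 0.91]   # consistent readings: stable
if is_stable(vision_readings):
    workspace.broadcast("vision", statistics.mean(vision_readings))

audio_readings = [0.1, 0.9, 0.4]       # wildly varying readings: withheld
if is_stable(audio_readings):
    workspace.broadcast("audio", statistics.mean(audio_readings))

# Only the stable "vision" signal makes it into the workspace.
```

As the conversation stresses, ticking boxes like these says nothing about phenomenal experience; the sketch only shows how cheaply the functional hallmarks can be reproduced.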
A lot of good qualifiers there.
Yeah, there are, because we wrote this big paper, me and a whole bunch of other people,
in 2023, that's up on arXiv.
It's called Consciousness in Artificial Intelligence.
And it goes through this checklist.
It kind of develops the theoretical arguments for building this checklist and then kind of
ultimately says, look, we've got systems that tick an awful lot
of these boxes already. Should we think that they're conscious? Maybe not. Because clearly the theories
are not complete. Some of the things on that checklist are going to be completely irrelevant to
consciousness and we're probably missing a whole lot of things also. But to the degree that there is a
system that ticks more or less of these boxes, well, maybe the ones that tick more of these boxes
we might want to consider a little bit more closely. But then the other thing we do in that paper
is talk about the ethical implications of false positives and false negatives.
I think AI is a great way to differentiate between intelligence and consciousness
because it's not hard for me to believe you could build a very intelligent system.
One that surpasses humans that even like manipulates us and takes over the planet and runs us as slaves.
It could be super intelligent without actually being conscious without anybody being in there, right?
And there's a dark dystopian future there, you know, where super intelligence actually extinguishes consciousness.
But to me, you could maybe get an answer to the question.
Again, mathematically, if you could somehow build up a description of consciousness from the little bits inside, you know, the way, for example, you could say, hey, if you describe all the motion of water droplets, I can tell you whether or not a hurricane is going to form.
Like, if we can master the understanding of the dynamics at a small scale and compute it all the way up to the bigger scale, then we can say yes or no, there is a hurricane.
in the same way, then I could, like, analyze your brain, and I could tell you, oh, yes, this does emerge into some first-person experience or does not, if we had that mathematical bridge, and then we could apply it to AI and say, oh, yes, or no, there is, or is not somebody in there?
And I'm really attracted to these kinds of theories. I think they're called physicalism, or do you call them functionalist theories?
How much progress have we made in that direction, and are we likely to make any more, or is it too intractable a problem?
I don't think it's too intractable a problem necessarily.
I'm not going to fully subscribe to the hard problem of consciousness.
I do think that if we continue to push on this kind of idea of emergent properties
and the translation between a physical substrate and the information that it produces
and that emerges from that physical substrate in terms of its interactions in space and in time,
I think that will take us forward.
I am a reductionist or a physicalist myself.
I don't think that there's anything magical or spiritual about consciousness personally.
I know that there are others who are going to disagree with me there.
But I do think that if we could build such an explanation that it would get us a long way to understanding consciousness,
the kind of emergent properties that you're talking about.
I think that it's really important for us to recognize that the success of such an endeavor is going to need, at its core,
the assumption that the system that we are studying is actually conscious. Otherwise,
it becomes circular, right? Like, if I want to create an explanation of how matter gives rise
to whatever I think is consciousness and then start pointing that at everything that I can think
of, then I'm not building an explanation of consciousness. I'm building an explanation of,
I don't know, physical interactions in the environment or information processing in the environment,
but I don't know that it's going to get me all the way to consciousness. But if we could do that in us,
Sure, in, you know, a thousand million years, when we have that explanation and we have a full,
what the philosopher Lisa Miracchi Titus would call a generative explanation, of how a physical
system and interactions in that physical system fully give rise to conscious experience.
Yeah, that would be great.
I don't know how we're going to do that.
But I do think that it does require a connection with that physical substrate.
We're getting close to the end of our time together.
We've talked about why this is a difficult problem. Let's end on why it's important to keep
studying this difficult problem. I think I'll answer this from three perspectives. One, as human
beings, we want to understand our worlds. We want to understand the basic science of how things
happen. We are curious and we have this compelling drive to understand our environments.
And we can see this both from the perspective of modern science, but also just from the
perspective of this is literally what brains do. They build internal models of the world and they
predict stuff that's going to happen from those internal models of the world and we use that to
drive ourselves forward. That's what evolution has done for us. So I think that we are hardwired to
do this, to be natural scientists in a way. So I think that that's important, like to give in to that
feeling that compulsion, but also from a practical and societal benefit perspective, we can
take multiple prongs on this. So one is the medical perspective, which is that it's really
important for us to understand the presence or absence of suffering in folks who have
disease or injury, that we want to understand the diversity and heterogeneity of those
perspectives. So here's a very concrete example. There are a number of disorders or conditions out there
that bring with them chronic pain, but you don't have any physical substrate that you can
identify. You don't know why that person is in chronic pain. And so you don't know how to fix it,
but that doesn't mean that the pain isn't real, or that the suffering isn't real. And so from
that perspective, understanding the nature of subjective experience is really critically important.
Fear and anxiety is another one. We know the fear circuitry. We've mapped that.
Neuroscientist Joseph LeDoux has been instrumental in driving forward the mapping of the amygdala circuit, the fear circuit in the brain.
But he's very careful to distinguish between processing of threatening stimuli and the experience of fear.
So if we develop pharmacological interventions that fix the circuitry bit and fix the behavioral bit in rats,
one of the reasons they might not translate to humans is that they don't reduce the fear.
They change the behavior, but the fear persists. So I think that from like a clinical perspective,
it's really important for us to understand this. And from depression, and from the perspective of
those who have autism spectrum disorders, to understand their subjective experiences,
like there's just a huge amount of clinical benefit that we can build. And then finally,
from the perspective of the artificial systems that we've just been talking about. So we're in a
position now where, whether or not we build systems that have phenomenal awareness or consciousness, whether anybody's home in there, maybe we're not going to be able
to answer that question for a long time. But the way we interact with systems depends on whether
we think that they have consciousness and the way that we build guardrails and legislation
and assign responsibility in legal settings depends on whether we think that these things can
think for themselves and whether they have moral compasses and all of those things, which are not
necessarily related to conscious experience per se. But there's an argument
to be made that, at least in a lot of the systems that we know about, ascribing responsibility
is related to the capacity for that agent to be self-directed and that that seems intimately
related to seeking out goals that are not just defined by a programmer, but ultimately
like decisions that that thing might make that might be driven by its intrinsic reward seeking
and there's something that it's like to seek reward because it feels good. So there's a
lot of kind of moral and ethical and legislation and societal implications for getting this
right. There's a lot of medical reasons to get this right. And then, you know, from a basic science
curiosity perspective, this is literally what we evolved to do is figure out the world. And so
let's keep doing that as well. And when we meet aliens, does it matter if they're conscious?
Like, if they're intelligent and they're interesting and they want to share with us their space
warp drives, does it matter if, when we point our hair dryer at them, it says yes or no?
What do you think?
I think so, but there's also a lot of evidence that consciousness and what's called moral
status can be disentangled, that we ascribe moral status to things that we think are conscious,
but we also don't need to require consciousness in order to ascribe moral status.
And we certainly treat things very badly even if we know that they're conscious.
So these are conceptually disentanglable things.
I think that it will matter, though, for the rest of the machinery around such an encounter, right?
Like maybe from an individual astronaut's perspective, it doesn't matter so much.
But from the perspective of the laws and regulations and societal implications of what that would mean, and what kind of people
we want to be, I think that then it really does matter. And so having folks at the helm
who are paying attention to the moral implications of such a weighty determination would be
very good. Well, I definitely want the aliens to know that we're conscious before they decide
whether or not to, you know, nuke us from orbit. So. Or have us as a snack. Assuming they
ascribe moral value and moral status to beings with consciousness. And maybe they develop
the consciousness-o-meter and they can share it with us.
Maybe.
Here's hoping.
All right.
Well, thanks for being on the show today, Megan.
This was fascinating.
Thank you so much for having me.
This was really fun and engaging, and it's been a real pleasure.
Thank you very much.
Daniel and Kelly's Extraordinary Universe is produced by IHeart Radio.
We would love to hear from you.
We really would.
We want to know what questions you have about this extraordinary
universe. We want to know your thoughts on recent shows, suggestions for future shows. If you
contact us, we will get back to you. We really mean it. We answer every message. Email us at
questions at danielandkelly.org. Or you can find us on social media. We have accounts on
X, Instagram, Blue Sky, and on all of those platforms, you can find us at D and K Universe.
Don't be shy. Write to us.