Lex Fridman Podcast - #106 – Matt Botvinick: Neuroscience, Psychology, and AI at DeepMind
Episode Date: July 3, 2020

Matt Botvinick is the Director of Neuroscience Research at DeepMind. He is a brilliant cross-disciplinary mind navigating effortlessly between cognitive psychology, computational neuroscience, and artificial intelligence.

Support this podcast by supporting these sponsors:
- The Jordan Harbinger Show: https://www.jordanharbinger.com/lex
- Magic Spoon: https://magicspoon.com/lex and use code LEX at checkout

If you would like to get more information about this podcast, go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube, where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.

Here's the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.

OUTLINE:
00:00 - Introduction
03:29 - How much of the brain do we understand?
14:26 - Psychology
22:53 - The paradox of the human brain
32:23 - Cognition is a function of the environment
39:34 - Prefrontal cortex
53:27 - Information processing in the brain
1:00:11 - Meta-reinforcement learning
1:15:18 - Dopamine
1:19:01 - Neuroscience and AI research
1:23:37 - Human side of AI
1:39:56 - Dopamine and reinforcement learning
1:53:07 - Can we create an AI that a human can love?
Transcript
The following is a conversation with Matt Botvinick, Director of Neuroscience Research at DeepMind.
He's a brilliant, cross-disciplinary mind, navigating effortlessly between cognitive psychology,
computational neuroscience, and artificial intelligence.
Quick summary of the ads. Two sponsors: The Jordan Harbinger Show and Magic Spoon cereal.
Please consider supporting the podcast by going
to jordanharbinger.com slash lex and also going to magicspoon.com slash lex and using code LEX at checkout after you buy all of their cereal. Click the links, buy the stuff. It's
the best way to support this podcast and journey I'm on. If you enjoy this podcast, subscribe on YouTube,
review it with five stars on Apple Podcasts, follow on Spotify,
support on Patreon or connect with me on Twitter,
at Lex Fridman, spelled surprisingly without the E, just F-R-I-D-M-A-N. As usual, I'll do a few minutes of ads now and never any ads in the
middle that can break the flow of the conversation.
This episode is supported by the Jordan Harbinger Show.
Go to JordanHarbinger.com slash Lex. It's how he knows I sent you.
On that page, subscribe to his podcast on Apple Podcasts, Spotify, and you know where to look.
I've been binging on this podcast. Jordan
is a great interviewer and even a better human being. I recently listened to his conversation
with Jack Barsky, former sleeper agent for the KGB in the 80s and author of Deep Undercover, which is a memoir that paints yet another interesting perspective on the Cold War era.
I've been reading a lot about
the Stalin, Gorbachev, and Putin eras of Russia, but this conversation made me
realize that I need to do a deep dive into the Cold War era to get a complete picture of Russia's
recent history. Again, go to jordanharbinger.com slash lex, subscribe to his podcast, so he knows I sent you. It's awesome.
You won't regret it.
This episode is also supported by Magic Spoon.
Low-carb, keto-friendly, super amazingly delicious cereal.
I've been on a keto or very low carb diet for a long time now.
It helps with my mental performance.
It helps with my physical performance,
even during this crazy push-up and pull-up challenge I'm doing, including the running. It just feels great.
I used to love cereal. Obviously I can't have it now, because most cereals have a crazy amount of sugar, which is terrible for you.
So I quit it years ago.
But Magic Spoon, amazingly, somehow, is a totally different thing. Zero sugar,
11 grams of protein, and only three net grams of carbs. It tastes delicious. It has a lot of
flavors, two new ones, including peanut butter. But if you know what's good for you, you'll go with
cocoa, my favorite flavor and the flavor of champions. Click the magicspoon.com slash Lex link in the description and use code LEX at checkout for free shipping and to let them know I sent you.
They have agreed to sponsor this podcast for a long time. They're an amazing sponsor and an even better cereal. I highly recommend it. It's delicious, it's good for you, you won't regret it.
And now, here's my conversation with Matt Botvinick. How much of the human brain do you think we understand?
I think we're at a weird moment in the history of neuroscience in the sense that there's
a... I feel like we understand a lot about the brain at a very high level, but at a very, very coarse level.
When you say a high level, what are you thinking?
Are you thinking functional?
Are you thinking structurally?
So in other words, what is the brain for?
What kinds of computation does the brain do?
What kinds of behaviors would we have to explain if we were going to look
down at the mechanistic level? At that level, I feel like we understand much, much more
about the brain than we did when I was in high school. It's almost like we're seeing
it through a fog. It's only at a very coarse level. We don't really understand what the neuronal mechanisms are
that underlie these computations.
We've gotten better at saying, what are the functions
that the brain is computing that we would have to understand
if we were going to get down to the neuronal level?
And at the other end of the spectrum,
we, in the last few years, incredible progress
has been made in terms of technologies
that allow us to see, you know, actually literally see
in some cases what's going on at the single unit level,
even the dendritic level.
And then there's this yawning gap in between.
Well, this is interesting.
So at the high level, so there's almost a cognitive science level.
Yeah.
And then at the neuronal level, that's neurobiology and neuroscience, just studying single
neurons, the synaptic connections and all the dopamine, all the kind of neurotransmitters.
One blanket statement I should probably make is that as I've gotten older,
I have become more and more reluctant
to make a distinction between psychology and neuroscience.
To me, the point of neuroscience is to study
what the brain is for.
If you're a nephrologist and you wanna learn about the kidney,
you start by saying, what is this thing for?
Well, it seems to be for taking blood on one side that has metabolites in it
that shouldn't be there, sucking them out of the blood while leaving
the good stuff behind, and then excreting that in the form of urine.
That's what the kidney is for.
It's like obvious.
So the rest of the work is deciding how it does that.
And this, it seems to me, is the right approach
to take to the brain.
You say, well, what is the brain for?
The brain, as far as I can tell,
is for producing behavior.
It's for going from perceptual inputs
to behavioral outputs.
And the behavioral outputs should be adaptive.
So that's what psychology is about.
It's about understanding the structure of that function.
And then the rest of neuroscience is about figuring out how those operations are actually
carried out at a mechanistic level.
That's really interesting. But unlike the kidney, with the brain there's the gap between the electrical signal and behavior. You truly see neuroscience as the science that touches behavior, how the brain generates behavior, or how the brain converts raw visual information into understanding. Like, you basically see cognitive science, psychology, and neuroscience as all one science.
Yeah.
Is that a personal statement?
Is that a hopeful or realistic statement?
So certainly you will be correct in your feeling
in some number of years, but that number of years could be 200 to 300 years from now.
Oh, well. Is that aspirational, or is that a pragmatic engineering feeling that you have?
It's, it's both in the sense that this is what I hope and expect will bear fruit over the coming decades.
But it's also pragmatic in the sense that I'm not sure what we're doing in either psychology
or neuroscience if that's not the framing.
I don't know what it means to understand the brain, if part of the
enterprise is not about understanding the behavior that's being produced.
I mean, yeah, but I would compare it to maybe astronomers looking at the movement of the planets and the stars without any interest in the underlying physics, right? And I would argue that at least in the early days,
there is some value just tracing the movement of the planets and the stars without thinking
about the physics too much because it's such a big leap to start thinking about the physics.
Before you even understand even the basic structural elements of...
Oh, I agree with that.
I agree.
But you're saying in the end, the goal should be...
Yeah.
...deeply understand.
Well, right.
And I think...
So I thought about this a lot when I was in grad school,
because a lot of what I studied in grad school was psychology.
And I found myself a little bit confused about what it meant to...
It seemed like what we were talking about a lot of the time were virtual causal mechanisms.
Like, oh well, you know, attentional selection selects some object in the environment, and information about that is then passed on to the motor system. But these are virtual mechanisms. They're metaphors. There's no reduction going on in that conversation to some physical mechanism, which is really what it would take to fully understand how behaviors are arising. The causal mechanisms are definitely neurons interacting.
I'm willing to say that at this point in history.
So in psychology, at least for me personally, there was this strange insecurity about trafficking
in these metaphors, you know, which were supposed to explain the function of the mind.
If you can't ground them in physical mechanisms,
then what is the explanatory validity
of these explanations?
I managed to soothe my own nerves
by thinking about the history of genetics research. So I'm very far from
being an expert on the history of this field. But I know enough to say that Mendelian genetics
preceded Watson and Crick. And so there was a significant period of time during which people were, you know,
productively investigating the structure of inheritance
using what was essentially a metaphor,
the notion of a gene, you know.
Oh, genes do this and genes do that.
But, you know, where are the genes?
They're sort of an explanatory thing that we made up.
And we ascribed to them these causal properties.
Oh, there's the dominant, there's the recessive, and then they recombine.
And then later, there was a kind of blank there
that was filled in with a physical mechanism.
That connection was made.
But it was worth having that metaphor
because that gave us a good sense of what kind of cause,
what kind of causal mechanism we were looking for.
And the fundamental metaphor of cognition, you said, is the interaction of neurons. Is that the metaphor?
No, no. The metaphors we use in cognitive psychology are, you know, things like attention, the way that memory works. I retrieve something from memory, right? A memory retrieval occurs. What is that? You know, that's not a physical mechanism that I can examine in its own right. But it's still worth having that metaphorical level.
Yeah, so I misunderstood, actually. So the higher level of abstraction is the metaphor that's most useful. Yes. But how does that connect to the idea that it arises from the interaction of neurons? Is the interaction of neurons also not a metaphor to you? Or is it literally, like, that's no longer a metaphor, that's already the lowest level of abstraction that could actually be directly studied?
Well, I'm hesitating because I think what I want to say could end up being controversial.
So what I want to say is, yes, the interactions of neurons, that's not metaphorical, that's
a physical fact.
That's where the causal interactions actually occur.
Now, I suppose you could say, well,
even that is metaphorical relative to the quantum events
that underlie... you know, I don't want to go down that rabbit hole.
It's always turtles on top of turtles.
But there is a reduction that you can do.
You can say these psychological phenomena
can be explained through a very different kind
of causal mechanism which has to do with neurotransmitter
release.
And so what we're really trying to do in neuroscience at large, as I say, which for me includes psychology,
is to take these psychological phenomena
and map them onto neural events. I think remaining forever at the level of description that is natural for psychology would, for me personally, be disappointing. I want to understand how mental activity arises from neural activity.
But the converse is also true. Studying neural activity without any sense of what you're trying to explain, to me, feels like at best groping around at random.
Now, you've kind of talked about this bridging of the gap between psychology and neuroscience.
But do you think it's possible?
Like, I fell in love with psychology and psychiatry in general, with Freud, when I was really young, and I hoped to understand the mind. And for me, understanding the mind, at least at that young age, before I discovered AI and even neuroscience, was psychology. And do you think it's possible
to understand the mind without getting into all the messy details of neuroscience? Like
you kind of mentioned, without appealing to, trying to understand the mechanisms at the lowest level. But do you think that's needed, that's required, to understand how the mind works?
That's an important part of the whole picture,
but I would be the last person on earth to suggest that that reality renders psychology, in its own right, unproductive. I trained as a psychologist.
I am fond of saying that I have learned much more from psychology than I have from neuroscience.
To me, psychology is a hugely important discipline.
And one thing that warms my heart is that ways of investigating behavior that have been native to cognitive psychology since its dawn in the 60s are starting to become interesting to AI researchers, for a variety of reasons.
That's been exciting for me to see.
Can you maybe talk a little bit about what you see as the beautiful aspects of psychology, and maybe the limiting aspects of psychology?
I mean, maybe to start it off as a science, as a field.
To me, when I understood what psychology is, analytical psychology, the way it's actually carried out, it was really disappointing to see two aspects. One is how small the N is, how small the number of subjects is in the studies. And two, it was disappointing to see how controlled the entire thing was, how much it was in the lab, how it wasn't studying humans in the wild.
There was no mechanism for studying humans in the wild.
So that's where I became a little bit disillusioned to psychology.
And then the modern world of the internet is so exciting to me, the Twitter data or YouTube data,
the data of human behavior on the internet, becomes exciting because the N grows and the "in the wild" grows.
But that's just my narrow sense.
Do you have an optimistic, or pessimistic, or cynical view of psychology?
How do you see the field broadly?
When I was in graduate school,
it was early enough that there was still a thrill in seeing
that there were ways of doing experimental science
that provided insight to the structure of the mind.
One thing that impressed me most when I was at that stage in my education was neuropsychology,
looking at analyzing the behavior of populations
who had brain damage of different kinds,
and trying to understand what the specific deficits were
that arose from a lesion in a particular part of the brain,
and the kind of experimentation that was done and that's still being done
to get answers in that context was so creative.
And it was so deliberate.
It was good science.
An experiment answered one question but raised another.
And somebody would do an experiment that answered that question.
And you really felt like you were narrowing in on some kind of approximate understanding of what this
part of the brain was for. Do you have an example, from memory, of what kind of aspects of the mind could be studied in this kind of way? Oh, sure. I mean, the very detailed neuropsychological studies of language function, looking at production and reception, and the relationship between visual function, reading, and auditory and semantic processing.
There were these beautiful models that came out of that kind of research that really made
you feel like you understood something that you hadn't understood before about how language
processing is organized in the brain.
But having said all that, I agree with you that the cost of doing highly controlled experiments is that, by construction, you miss out on the richness and complexity of the real world.
One thing that, so I was drawn into science by what in those days was called connectionism,
which is of course what we now call deep learning.
And at that point in history,
neural networks were primarily being used
in order to model human cognition.
They weren't yet really useful for industrial applications.
So you always found neural networks in biological form,
beautiful.
Oh, neural networks were very concretely
the thing that drew me into
science. I was handed, are you familiar with the PDP books? From the 80s, when I went
to medical school before I went into science. And really? Yeah. Wow. I also did a graduate
degree in art history. So I kind of explored it. Well, art history, I understand.
That's just a curious, creative mind. But medical school, with a dream of what? If we take that slight tangent, what did you want to be, a surgeon?
I actually was quite interested in surgery.
I was interested in surgery and psychiatry.
And I thought I must be the only person on the planet who was torn between those two fields. And I said exactly that to my advisor in medical school, who, I found out later, turned out to be a famous psychoanalyst.
And he said to me, no, no, it's actually not so uncommon to be interested in surgery and psychiatry.
And he conjectured that the reason that people develop these two interests is that both
fields are about going beneath the surface and kind of getting at the secret. I mean, maybe you understand this as someone who was interested in psychoanalysis at an earlier stage. There's a cliche phrase that people use now on NPR, the secret life of, like... right?
And that was part of the thrill of surgery,
was seeing the secret activity that's inside everybody's
abdomen and thorax.
That's a very poetic way to connect two disciplines that are, practically speaking, very different from each other, that's for sure.
That's for sure. Yes.
So how did we get on to medical school?
So I was in medical school and I was doing a psychiatry rotation
and my kind of advisor in that rotation asked me what I was interested in.
And I said, well, maybe psychiatry, he said, why?
And I said, well, I've always been interested in how the brain works.
I'm pretty sure that nobody's doing scientific research that addresses my interests, which
are, I didn't have a word for it then, but I would have said about cognition.
And he said, well, you know, I'm not sure that's true. You might be interested in these books. And he pulled down the PDP books from his shelf, and they were still shrink-wrapped. He hadn't read them, but he handed them to me. He said, feel free to borrow these.
And that was, you know, I went back to my dorm room, and I just read them cover to cover. And what's PDP? Parallel distributed processing, which was one of the original names for deep learning. And so, apologies for the romanticized question, but what idea in the
space of neuroscience and the space of the human brain is to you the most beautiful,
mysterious, surprising? What had always fascinated me, even when I was a pretty young kid, I think,
was the paradox that lies in the fact that the brain is so mysterious, it seems so distant.
But at the same time, it's responsible for
the full transparency of everyday life.
The brain is literally what makes everything obvious and familiar, and there's always one in the room with you. When I taught at Princeton, I used to teach a cognitive neuroscience course.
And the very last thing I would say to the students
was, you know, when people think of scientific inspiration, the metaphor is often, well, look to the stars. The stars will inspire you
to wonder at the universe and think about your place in it and how things work. I'm
all for looking at the stars. But I've always been much more inspired. My sense of wonder comes not from the distant, mysterious stars, but from the extremely intimately close brain.
Yeah, there's something just endlessly fascinating to me about that.
Like, just like you said, the one is close and yet distant in terms of our understanding
of it.
Do you, are you also captivated by the fact that this very conversation is happening because
two brains are communicating?
Yes.
Exactly.
I guess what I mean is the subjective nature of the experience.
If you can take a small tangent into the mystical side of it, the consciousness. When you are saying you're captivated by the idea of the brain, are you talking about specifically the mechanism of cognition? Or are you also, like, at least for me, it's almost paralyzing, the beauty and the mystery of the fact that it creates the entirety of the experience, not just the reasoning capability, but the experience.
Well, I definitely resonate with that latter thought.
And I often find discussions of artificial intelligence to be disappointingly narrow.
Speaking as someone who has always had an interest in art. I was just gonna go there, because it sounds like somebody who has an interest in art.
Yeah, I mean, there are many layers to full-bore human experience, and in some ways it's not enough to say, oh, well, don't worry, you know, we're talking about cognition, but we'll add emotion, you know? Yeah. There's an incredible scope to what humans go through in every moment. And yes, that's part of what fascinates me, is that our brains are producing that. But at the same time, our brains are literally in our heads producing this experience, and yet it's so mysterious to us.
And the scientific challenge of getting at the actual explanation for that is so overwhelming.
That's just, I don't know.
Certain people have fixations on particular questions,
and that's always been mine.
Yeah, I would say the poetry of that is fascinating.
And I'm really interested in natural language as well.
And when you look at the artificial intelligence community, it always saddens me how much of the magic of language is lost when we try to create a benchmark for the community to gather around. There's something, we talk about experience, the music of the language, the wit, the something that makes a rich experience, something that would be required to pass the spirit of the Turing test, that is lost in these benchmarks. And I wonder how to get it back in, because it's very difficult. The moment you try to do real, good, rigorous science, you lose some of that magic. When you try to study cognition in a rigorous scientific way, it feels like you're losing some of the magic, seeing cognition in a mechanistic way, the way AI does at this stage in our history.
Well, I agree with you, but at the same time,
one thing that I found really exciting about that first wave of deep learning models in cognition was the fact that the people who were building these models were focused on the richness and complexity of human cognition.
So an early debate in cognitive science, which I sort of witnessed as a grad student,
was about something that sounds very dry, which is the formation of the past tense. There were these two camps. One said, well, the mind encodes certain rules, and it also has a list of exceptions, because of course the rule is add E-D, but that's not always what you do, so you have to have a list of exceptions. And then there were the connectionists, who evolved into the
deep learning people, who said, well, if you look carefully at the data, if you actually look at corpora, like language corpora, it turns out to be very rich. Because yes, with most verbs you just tack on E-D, and then there are exceptions, but the exceptions aren't just random. There are certain clues to which verbs should be exceptional, and then there are exceptions to the exceptions. And there was a word that was deployed in order to capture this, which was quasi-regular. In other words, there are rules, but it's messy, and there's structure even among the exceptions. And yeah, you could try to write down this structure in some sort of closed form, but really the right way to understand how the brain is handling all this, and by the way producing all of this, is to build a deep neural network and train it on this data and see how it ends up representing all of this richness.
So the way that deep learning was deployed in cognitive psychology, that was the spirit of it. It was about that richness. And that's something that I always found very, very compelling, and still do.
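To make the quasi-regular idea concrete, here's a minimal sketch in the spirit of those connectionist past-tense models. This is my own toy construction, not the actual model from that debate: the tiny verb list, network size, and training setup are all illustrative assumptions.

```python
# A toy "quasi-regular" past-tense learner (illustrative only): one network
# absorbs both the add-ED rule and the irregular exceptions from raw pairs.
import torch
import torch.nn as nn

# Tiny hypothetical corpus: regulars take -ed; irregulars follow other patterns.
pairs = [("walk", "walked"), ("jump", "jumped"), ("call", "called"),
         ("sing", "sang"), ("ring", "rang"), ("go", "went")]

chars = sorted({c for pair in pairs for word in pair for c in word} | {"_"})
idx = {c: i for i, c in enumerate(chars)}
MAXLEN = 8  # pad every word to a fixed length with "_"

def encode(word):
    """One-hot encode a padded word into a flat vector."""
    word = word.ljust(MAXLEN, "_")
    vec = torch.zeros(MAXLEN * len(chars))
    for i, c in enumerate(word):
        vec[i * len(chars) + idx[c]] = 1.0
    return vec

X = torch.stack([encode(present) for present, _ in pairs])
# Targets: the index of the correct character at each output position.
Y = torch.stack([torch.tensor([idx[c] for c in past.ljust(MAXLEN, "_")])
                 for _, past in pairs])

net = nn.Sequential(nn.Linear(MAXLEN * len(chars), 64), nn.ReLU(),
                    nn.Linear(64, MAXLEN * len(chars)))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

for step in range(2000):
    logits = net(X).view(len(pairs), MAXLEN, len(chars))
    loss = loss_fn(logits.reshape(-1, len(chars)), Y.reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()

# After training, the same weights produce "walked" and "sang": rule-like
# behavior and exceptions coexist in one distributed representation.
```

The design point, as described above, is that nothing in the network separates rules from exceptions; the "quasi-regular" structure is simply absorbed into one set of weights.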
Is there something especially interesting and profound to you about the differences between our current deep learning, artificial neural network approaches and whatever we do understand about the biological neural networks in our brain? There are quite a few differences. Are some of them to you either interesting or perhaps profound, in terms of the gap we might want to try to close in trying to create a human-level intelligence?
What I would say here is something that a lot of people are saying, which is that one
seeming limitation of the systems that we're building now is that they lack the kind of flexibility, the readiness to sort of turn on a dime when the context calls for it, that is so characteristic
of human behavior. So which aspect of the neural networks in our brain is that connected to? Is that closer to the cognitive science level? Now again, see, my natural inclination is to separate things into three disciplines: neuroscience, cognitive science, and psychology. And you've already kind of shut that down by saying you see them as one. But just to look at those layers, I guess: is there something about the lowest layer, the way the neurons interact, that is profound to you in terms of its difference from the artificial neural networks? Or are all the key differences at a higher level of abstraction?
One thing I often think about is that, you know, if you take an introductory computer
science course and they are introducing
you to the notion of Turing machines, one way of articulating what the significance of
a Turing machine is, is that it's a machine emulator.
It can emulate any other machine. And that to me,
that way of looking at a Turing machine
really sticks with me.
I think of humans as maybe sharing in some of that character. We're capacity-limited, we're not Turing machines obviously, but we have the ability
to adapt behaviors that are very much unlike anything we've done before, but there's some
basic mechanism that's implemented in our brain that allows us to run software.
But on that point, you mentioned a Turing machine, but nevertheless, in your view, our brains are fundamentally just computational devices. Is that what you're getting at? It was a little bit unclear, this line you drew. Is there any magic in there, or is it just basic computation?
I'm happy to think of it as just basic computation, but mind you, I won't be satisfied until somebody explains to me what the basic computations are that are leading to the full richness of human cognition.
Yes.
It's not going to be enough for me to understand what the computations are that allow people
to do arithmetic or play chess.
I want the whole thing.
And a small tangent, because you kind of mentioned coronavirus: there's group behavior. Is there something interesting in your search for understanding the human mind where the behavior of large groups, just the behavior of groups, is interesting? Seeing that as a collective mind, as a collective intelligence, perhaps seeing groups of people as a single intelligent organism, especially looking at the reinforcement learning work you've done recently.
Well, yeah, I mean, I have the honor of working with a lot of incredibly smart people, and I wouldn't want to take any credit for leading the way on the multi-agent work that's come out of my group or DeepMind lately. But I do find it fascinating. And I think it can't be debated.
You know, human behavior arises within communities.
That just seems to me self-evident.
But to me, it is self-evident, and that seems to be a profound aspect of what created us. It was like, if you look at 2001: A Space Odyssey, when the monkeys touched the... That's the magical moment. I think Yuval Harari argues that the ability of large numbers of humans to hold an idea,
to converge towards the idea together, like you said, shaking hands versus bumping elbows,
somehow converge without even, without being in a room altogether, just kind of this
distributed convergence towards an idea over a particular period of time, seems to be fundamental to
to just every aspect of our cognition, of our intelligence. Because humans will talk about reward, but it seems like we don't really have a clear objective function under which we operate. But we all kind of converge towards one somehow, and that to me has always been a mystery
that I think is somehow productive
for also understanding AI systems. But I guess that's the next step. The first step is
try to understand the mind. Well, I don't know. I mean, I think there's something to the
argument that that kind of like strictly bottom-up approach is wrong-headed.
In other words, there are basic phenomena,
that basic aspects of human intelligence
that can only be understood in the context of groups.
I'm perfectly open to that.
I've never been particularly convinced by the notion that we should consider intelligence to inhere at the level of communities.
I don't know why.
I'm sort of stuck on the notion that the basic unit
that we want to understand is individual humans.
And if we have to understand that in the context
of other humans, fine.
But for me, intelligence is just, I stubbornly define it as something that is an aspect of an individual human. That's just my, I don't know, that's my take.
I would too, but that could be the reductionist dream of a scientist, because you can understand a single human. It also is very possible that intelligence can only arise when there are multiple intelligences. It's a sad thing if that's true, because it's very difficult to study. But if it's just one human, that one human, that Homo sapiens, would not become that intelligent. That's a possibility.
One thing I will say along these lines is that I think a serious effort to understand human intelligence, and maybe to build human intelligence, needs to pay just as much attention to the
structure of the environment as to the structure of the cognizing system, whether it's a brain
or an AI system, that's one thing I took away actually from my early studies with the
pioneers of
neural network research, people like Jay McClelland and Jonathan Cohen.
The structure of cognition is only partly a function of the architecture of the brain and the learning algorithms that it implements. What really shapes it is the interaction of those things with the structure of the world in which those things are embedded, right? And that's especially important, and this is made most clear in reinforcement learning, where with a simulated environment you can only learn as much as you can simulate. And that's what DeepMind made very clear with the other aspect of the environment, which is the self-play mechanism, the other agent, the competitive behavior, where the other agent becomes the environment, essentially.
And that's one of the most exciting ideas in AI
is the self-play mechanism
that's able to learn successfully. So there you go. There's a thing
where competition is essential for learning, at least in that context. So if we can step back into
another sort of beautiful world, which is the actual mechanics, the dirty mess of it of the
human brain, is there something for people who might not know? Is there something
you can comment on or describe the key parts of the brain that are important for intelligence,
or just in general, what are the different parts of the brain that you're curious about that you've
studied and that are just good to know about when you're thinking about cognition?
that are just good to know about when you're thinking about cognition.
Well, my area of expertise, if I have one, is the prefrontal cortex.
So what's that? It depends on who you ask. The technical definition is anatomical.
There are parts of your brain that are responsible for motor behavior, and they're very easy to identify. And the region of your cerebral cortex, the sort of outer crust of your brain, that lies in front of those is defined as the prefrontal cortex.
And when you say anatomical, sorry to interrupt, that's referring to sort of the geographic region, as opposed to some kind of functional definition?
Exactly. So this is kind of the coward's way out. I'm telling you what the prefrontal cortex is just in terms of what part of the real estate it occupies. The thing in the front of the brain. Yeah, exactly. And in fact, the early history of the neuroscientific investigation of what this front part of the brain does is sort of funny to read, because it was really World War One that started people down this road of trying to figure out what different parts of the human brain do, in the sense that there were a lot of people who came back from the war with brain damage.
And that provided, as tragic as that was,
it provided an opportunity for scientists
to try to identify the functions
of different brain regions.
And that was actually incredibly productive.
But one of the frustrations that neuropsychologists faced was that they couldn't really identify exactly what the deficit was that arose from damage to these, you know, kind of most frontal parts of the brain.
It was just a very difficult thing to pin down.
There were a couple of neuropsychologists who, through a large amount of clinical experience and close observation, started to put their finger on a syndrome that was associated with frontal damage. Actually, one of them was a Russian neuropsychologist named Luria, who, you know,
students of cognitive psychology still read. And what he started to figure out was that
the frontal cortex was somehow involved in flexibility, in guiding behaviors that required someone to override a habit, or to do something unusual, or to change what they were doing in a very flexible way from one
moment to another.
So it's focused on new experiences, the way your brain processes and acts in new experiences.
Yeah.
What later helped bring this function into better focus
was a distinction between controlled and automatic behavior.
Or to, in other literatures, this is referred
to as habitual behavior versus goal-directed behavior.
So it's very, very clear that the human brain has pathways that are dedicated to
habits, to things that you do all the time, and they need to be automatized so that they don't require you to concentrate too much. So that leaves your cognitive capacity free
to do other things. Just think about the difference between driving when you're learning to drive
versus driving after you're fairly expert. There are brain pathways that slowly absorb those
frequently performed behaviors so that they can be habits, so that they can
be automatic.
So that's kind of like the purest form of learning, I guess, that is happening there. Which is why, and this is kind of jumping ahead, that perhaps is the most useful thing for us to focus on in trying to see how artificial intelligence systems can learn. Is that the way you think?
It's interesting.
I do think about this distinction between controlled and automatic, or goal-directed and habitual, behavior a lot in thinking about where we are in AI research.
But just to finish the kind of dissertation here: the role of the prefrontal cortex is generally understood these days sort of in contradistinction to that habitual domain. In other words, the prefrontal cortex is what helps you override those habits. It's what allows you to say, whoa, whoa, what I usually do in this situation is X, but
given the context, I probably should do Y.
I mean, the elbow bump is a great example, right?
Reaching out and shaking hands is probably habitual behavior, and it's the prefrontal cortex that allows us to bear in mind that there's something unusual
going on right now.
And in this situation, I need to not do the usual thing.
The kinds of behaviors that Luria reported, and he built tests for detecting these kinds of things, were exactly like this.
So in other words, when I stick out my hand, I want you instead to present your elbow.
A patient with frontal damage would have a great deal of trouble with that. You know, somebody proffering their hand would elicit a handshake. The prefrontal cortex is what allows
us to say, hold on, hold on. That's the usual thing, but I have the ability to bear in mind even very unusual contexts and to reason
about what behavior is appropriate there.
Just to get a sense: are we humans special in the presence of the prefrontal cortex? Do mice have a prefrontal cortex? Do other mammals that we can study? If not, then how do they integrate new experiences?
Yeah, that's a really tricky question and a very timely question, because we have revolutionary new technologies for monitoring, measuring, and also causally influencing neural behavior
in mice and fruit flies. And these techniques are not fully available, even for studying
brain function in monkeys, let alone humans.
And so it's a very sort of, for me at least,
a very urgent question, whether the kinds of things
that we want to understand about human intelligence
can be pursued in these other organisms.
And to put it briefly, there's disagreement.
People who study fruit flies will often tell you, hey, fruit flies are smarter than you think.
And they'll point to experiments where fruit flies
were able to learn new behaviors,
were able to generalize from one stimulus to another
in a way that suggests that they have abstractions
that guide their generalization.
I've had many conversations in which I will start by recounting some observation about mouse behavior, where it seemed like mice were taking an awfully long time to learn a task that for a human would be profoundly trivial.
And I will conclude from that that mice really don't have the cognitive flexibility that we want to explain. And a mouse researcher will say to me, well, you know, hold on.
That experiment may not have worked because you asked the mouse to deal with stimuli and behaviors that were very unnatural for the mouse. If instead you kept the logic of the experiment the same, but presented the information in a way that aligns with what mice are used to dealing with in their natural habitats, you might find that a mouse actually has more intelligence than you think.
And then they'll go on to show you videos of mice doing things in their natural habitat,
which seem strikingly intelligent, dealing with physical problems.
I have to drag this piece of food back to my lair, but there's something in my way, and how do I get rid of that thing? So I think these are open questions, to sum that up.
And then, taking a small step back related to that: as you kind of mentioned, we're taking a little shortcut by saying the prefrontal cortex is a geographic region of the brain. But what's your sense, in a bigger philosophical view, of the prefrontal cortex and the brain in general? Do you have a sense that it's a set of subsystems, in the way we've kind of implied, that are pretty distinct? Or to what degree is it a giant interconnected mess where everything kind of does everything, and it's impossible to disentangle them?
I think there's overwhelming evidence that there's functional differentiation, that
it's clearly not the case, that all parts of the brain are doing the same thing. This follows immediately
from the kinds of studies of brain damage that we were chatting about before. It's obvious
from what you see if you stick an electrode in the brain and measure what's going on at
the level of neural activity. Having said that, there are two other things to add, which kind of, I don't know, maybe tug in the other direction. One is that when you look carefully at functional differentiation in the brain, what you usually end up concluding, at least this is my observation of the literature, is that
the differences between regions are graded rather than being discrete. So it doesn't seem
like it's easy to divide the brain up into true modules that have clear boundaries and that have,
clear channels of communication between them.
Instead-
And it applies to the prefrontal cortex.
Yeah, yeah, the prefrontal cortex
is made up of a bunch of different subregions,
the functions of which are not clearly defined
and the borders of which seem to be quite vague.
Then there's another thing that's popping up in very recent research, which involves the application of these new techniques: there are a number of studies that suggest that parts of the brain that
we would have previously thought were quite focused in their function are actually carrying
signals that we wouldn't have thought would be there.
For example, looking in the primary visual cortex, which is classically thought of as basically the
first cortical way station for processing visual information.
Basically what it should care about is, you know, where are the edges in this scene that
I'm viewing?
It turns out that if you have enough data, you can recover information from primary visual
cortex about all sorts of things, like what behavior the animal is engaged in right now, and how much reward is on offer in the task that it's pursuing.
So it's clear that even regions whose function is pretty well defined at a coarse grain are nonetheless carrying some information from very different domains. So the history of neuroscience is sort of this oscillation
between the two views that you articulated,
the kind of modular view, and then the big mush view.
And I guess we're going to end up somewhere in the middle, which is unfortunate for our understanding, because there's something about our conceptual system that finds it easy to think about a modularized system
and easy to think about a completely undifferentiated system.
But something that kind of lies in between is confusing,
but we're gonna have to get used to it, I think.
Unless we can deeply understand the lower-level mechanism of neuronal communication.
Yeah. So on that topic, you kind of mentioned information. Just to get a sense, I imagine something that there's still mystery and disagreement on is: how does the brain carry information and signal? What, in your sense, is the basic mechanism of communication in the brain?
Well, I guess I'm old fashioned in that I consider
the networks that we use in deep learning research
to be a reasonable approximation
to the mechanisms that carry information in the brain.
So the usual way of articulating that is to say,
what really matters is a rate code.
What matters is how quickly is an individual neuron spiking?
What's the frequency at which it's spiking?
Is it the timing of the spiking?
Yeah, is it firing fast or slow?
Let's put a number on that.
And that number is enough to capture what neurons are doing. There's still uncertainty about whether that's
an adequate description of how information is transmitted
within the brain.
There are studies that suggest that the precise timing
of spikes matters.
There are studies that suggest that there are computations
that go on within the dendritic tree,
within a neuron, that are quite rich and structured
and that really don't equate to anything
that we're doing in our artificial neural networks.
Having said that, I feel like we're getting somewhere by sticking to this high level of abstraction.
Just the rate. And by the way, we're talking about the electrical signal. I remember reading some vague papers somewhere recently where the mechanical signal, like the vibrations or something of the neurons, also communicates information. I haven't seen that. But somebody was arguing, this was in a Nature paper or something like that, that the electrical signal is actually a side effect of the mechanical signal.
But I don't think that changes the story. It's an interesting idea, though, that there could be something deeper. It's always like in physics with quantum mechanics: there's always a deeper story that could be underlying the whole thing. But you think it's basically the rate of spiking, that's like the lowest-hanging fruit that can get us really far.
This is the classical view. I mean, this is not the only way in which this stance would be controversial, in the sense that there are members of the neuroscience community who are interested in alternatives. But this is really a very mainstream view.
The way that neurons communicate is that
neurotransmitters arrive at, you know, they wash up on a neuron. The neuron has receptors for those transmitters. The meeting of the transmitter with these receptors changes the voltage of the neuron. And if enough voltage change occurs, then a spike occurs, right? One of these discrete events. And it's that spike that is conducted down the axon and leads to neurotransmitter release. At least, this is just like neuroscience 101. This is the way the brain is supposed to work.
Now, what we do when we build
artificial neural networks of the kind
that are now popular in the AI community
is that we don't worry about those individual spikes. We just worry about the frequency at which those spikes
are being generated. People talk about that as the activity of a neuron.
And so the activity of units in a deep learning system is broadly analogous to the spike rate of a neuron.
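For readers who want the picture above in concrete form, here's a toy sketch of that neuroscience-101 story: a leaky integrate-and-fire style neuron reduced to a single firing rate. This is my own illustration, not a model from the conversation, and all the constants are arbitrary assumptions.

```python
# A toy leaky integrate-and-fire neuron (illustrative constants): input current
# nudges the voltage; crossing threshold emits a discrete spike; the rate-code
# view then keeps only the spike count per unit time.
import numpy as np

rng = np.random.default_rng(0)
dt, T = 0.001, 1.0                   # 1 ms steps, 1 second of simulated time
tau, v_rest, v_thresh = 0.02, 0.0, 1.0

v = v_rest
spike_count = 0
for _ in range(int(T / dt)):
    i_syn = rng.normal(1.2, 0.5)     # noisy synaptic ("neurotransmitter") input
    v += (dt / tau) * (-(v - v_rest) + i_syn)  # leaky integration of voltage
    if v >= v_thresh:                # threshold crossed: a spike occurs
        spike_count += 1
        v = v_rest                   # reset after the spike

rate = spike_count / T               # spikes per second
# This one number, the firing rate, is what a deep-net "activation" stands in
# for; precise spike timing and dendritic structure are thrown away.
print(f"firing rate ~ {rate:.0f} Hz")
```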
There are people who believe that there are other forms
of communication in the brain.
In fact, I've been involved in some research recently
that suggests that the voltage fluctuations that occur in populations of neurons, sort of below the level of spike production, may be important for communication.
But I'm still pretty old school in the sense that I think that the things that we're building
in AI research constitute reasonable models of how a brain would work.
Let me ask just for fun, a crazy question,
because I can.
Do you think it's possible we're completely wrong
about the way this basic mechanism of neuronal communication works, that the information is stored
that the information is stored
in some very different kind of way in the brain?
Oh, heck yes.
I mean, look, I wouldn't be a scientist if I didn't think there was any chance we were wrong.
But, I mean, if you look at the history of deep learning research as it's been applied to neuroscience,
of course, the vast majority of deep learning research these days isn't about neuroscience, but
you know, if you go back to the 1980s, there's sort of an unbroken chain of research in which a particular strategy is taken, which is: hey, let's train a deep learning system, let's train a multi-layer neural network, on this task that we trained our rat on, or our monkey on, or this human being on. And then let's look at what the units deep in the system are doing. And let's
ask whether what they're doing resembles what we know about what neurons deep in the brain
are doing. And over and over and over and over, that strategy works.
In the sense that the learning algorithms that we have access to, which typically center on
back propagation, they give rise to, you know, patterns of activity, patterns of response,
patterns of like neuronal behavior in these, in these artificial models that look hauntingly similar
to what you see in the brain.
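As an illustration of that comparison strategy, here's a hedged sketch using a representational-similarity-style measure, one common way such model-to-brain comparisons are done. The data below are synthetic placeholders, and this specific analysis is my assumption, not necessarily what any particular study in that chain of research used.

```python
# A sketch of the model-vs-brain comparison loop using a representational
# similarity analysis (RSA)-style measure. All data here are synthetic
# placeholders standing in for trained-network units and recorded neurons.
import numpy as np

rng = np.random.default_rng(1)
n_stimuli = 20

# Rows = stimuli, columns = units/neurons. The fake "recordings" share
# structure with the model by construction, plus noise.
model_acts = rng.normal(size=(n_stimuli, 64))
neural_acts = (model_acts @ rng.normal(size=(64, 50))
               + 0.5 * rng.normal(size=(n_stimuli, 50)))

def rdm(acts):
    """Representational dissimilarity matrix: 1 - correlation between rows."""
    return 1.0 - np.corrcoef(acts)

# Correlate the off-diagonal entries of the two dissimilarity matrices: a high
# value means the model and the "brain" organize the stimuli similarly.
iu = np.triu_indices(n_stimuli, k=1)
similarity = np.corrcoef(rdm(model_acts)[iu], rdm(neural_acts)[iu])[0, 1]
print(f"model-brain representational similarity: {similarity:.2f}")
```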
And is that a coincidence? At a certain point, it starts looking like such a coincidence is unlikely to not be deeply meaningful. Yeah, the circumstantial evidence is overwhelming. But we're always open to a total flipping of the table. Yeah, of course. So, you have coauthored several recent papers that sort of weave beautifully between the world of neuroscience and artificial intelligence. Maybe, if we could, let's try to dance around and talk about some of them, and maybe try
to pick out interesting ideas that jump to your mind from memory.
So maybe looking at, since we were talking about the prefrontal cortex, the 2018, I believe, paper called "Prefrontal cortex as a meta-reinforcement learning system." Is there a key idea that you can speak to from that paper? Yeah, I mean, the key idea is about meta-learning.
So what is meta-learning?
Meta-learning is, by definition, a situation in which you have a learning algorithm, and the learning algorithm operates in such a way that it gives
rise to another learning algorithm.
In the earliest applications of this idea, you had one learning algorithm sort of adjusting
the parameters on another learning algorithm.
But the case that we're interested in in this paper is one where you start with just one learning algorithm, and then another learning algorithm kind of emerges out of thin air. I can say more about what I mean by that, I don't mean to be mysterious. But that's the idea of meta-learning. It relates to the old idea in psychology of learning to learn: situations where you have experiences that make you better at learning something new. A familiar example would be learning
a foreign language. The first time you learn a foreign language it may be quite laborious and disorienting and novel.
But if you, let's say you've learned two foreign languages,
the third foreign language obviously
is gonna be much easier to pick up.
And why?
Because you've learned how to learn.
You know how this goes.
You know, okay, I'm gonna have to learn how to conjugate, I'm gonna have to... That's a simple form of meta-learning, in the sense that there's some slow learning mechanism that's helping you kind of update your fast learning mechanism.
That makes sense. So from our understanding in the psychology world and in neuroscience of how meta-learning might work in the human brain, what lessons can we draw from that that we can bring into the artificial intelligence world?
Well, yeah.
So the origin of that paper was in AI work that we were doing in my group.
We were looking at what happens when you train a recurrent neural network using standard
reinforcement learning algorithms
But you train that network not just on one task, but on a bunch of interrelated tasks. And then you ask what happens when you give it yet another task in that sort of line of interrelated tasks. And what we started to realize is that a form of meta-learning spontaneously happens in recurrent neural networks.
And the simplest way to explain it is to say a recurrent neural network has a kind of memory
in its activation patterns.
It's recurrent by definition in the sense that you have units that connect to other units,
that connect to other units,
so you have sort of loops of connectivity,
which allows activity to stick around
and be updated over time.
In psychology, and in neuroscience, we call this working memory.
It's like actively holding something in mind.
And so that memory gives the recurrent neural network a dynamics, right?
The way that the activity pattern evolves over time is inherent to the connectivity of
the recurrent neural network.
So that's idea number one.
Now, the dynamics of that network are shaped by the connectivity, by the synaptic weights, and those synaptic weights are being shaped by this reinforcement learning algorithm that you're training the network with. So the punchline is: if you train a recurrent neural network with a reinforcement learning algorithm that's adjusting its weights, and you do that for long enough, the activation dynamics will become very interesting.
So imagine I give you a task where you
have to press one button or another, left button or right button.
And there's some probability that I'm going to give you an M&M if you press the left button, and there's some probability I'll give you an M&M if you press the other button.
And you have to figure out what those probabilities are just by trying things out.
But as I said before, instead of just giving you one of these tasks, I give you a whole sequence.
You know, I give you two buttons and you figure out which one's best and I go,
good job. Here's a new box. Two new buttons, you have to figure out which one's best.
Good job. Here's a new box. And every box has its own probabilities, and you have to figure it out.
So if you train a recurrent neural network
on that kind of sequence of tasks,
what happens?
It seemed almost magical to us when we first started
kind of realizing what was going on.
The slow learning algorithm that's adjusting the synaptic weights. Those slow synaptic changes give rise to network dynamics that themselves turn into a learning algorithm. In other words, you can tell this
is happening by just freezing the synaptic weights saying, okay, no more learning, you're
done. Here's a new box. Figure out which button is best.
And the recurrent neural network will do this just fine. It figures out which button is best. It kind of transitions from exploring the two buttons to just pressing the one that it likes best, in a very rational way.
How is that happening? It's happening because the activity dynamics
of the network have been shaped by this slow learning process
that's occurred over many, many boxes.
And so what's happened is that this slow learning algorithm
that's slowly adjusting the weights
is changing the dynamics of the network,
the activity dynamics into its own learning algorithm.
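To make this two-timescale picture concrete, here's a minimal sketch of the setup as I understand it from the description above: a small LSTM trained with a simple policy-gradient rule across many two-armed bandits (the "boxes"), then tested with frozen weights. The hyperparameters, network size, and exact training rule are my simplifying assumptions, not the paper's.

```python
# A minimal meta-RL sketch: slow weight learning across many bandits gives
# rise to fast, activation-based learning within a single new bandit. This is
# a simplified illustration, not the paper's exact architecture or algorithm.
import torch
import torch.nn as nn

class MetaRLAgent(nn.Module):
    def __init__(self, hidden=48):
        super().__init__()
        # Input: one-hot of previous action (2 dims) + previous reward (1 dim).
        self.rnn = nn.LSTMCell(3, hidden)
        self.policy = nn.Linear(hidden, 2)

    def forward(self, x, state):
        h, c = self.rnn(x, state)
        return torch.distributions.Categorical(logits=self.policy(h)), (h, c)

def run_episode(agent, p_left, trials=50):
    """One 'box': fixed reward probabilities, a sequence of button presses."""
    state, x = None, torch.zeros(1, 3)
    log_probs, rewards = [], []
    for _ in range(trials):
        dist, state = agent(x, state)
        action = dist.sample()
        p = p_left if action.item() == 0 else 1.0 - p_left
        reward = float(torch.rand(1).item() < p)
        log_probs.append(dist.log_prob(action))
        rewards.append(reward)
        x = torch.zeros(1, 3)
        x[0, action.item()] = 1.0  # feed back the last action...
        x[0, 2] = reward           # ...and last reward: the working-memory loop
    return torch.stack(log_probs).squeeze(), torch.tensor(rewards)

agent = MetaRLAgent()
opt = torch.optim.Adam(agent.parameters(), lr=3e-3)

# Slow learning: adjust synaptic weights across many different boxes.
for episode in range(2000):
    p_left = torch.rand(1).item()                    # new box, new probabilities
    log_probs, rewards = run_episode(agent, p_left)
    returns = rewards.flip(0).cumsum(0).flip(0)      # simple reward-to-go
    loss = -(log_probs * (returns - returns.mean())).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# Fast learning: freeze the weights, hand the agent a brand-new box, and the
# recurrent dynamics alone explore, then exploit the better button.
with torch.no_grad():
    _, rewards = run_episode(agent, p_left=0.9)
    print("mean reward on a fresh bandit, weights frozen:", rewards.mean().item())
```

With the weights frozen, any within-episode improvement has to come from the recurrent activations alone, which is the sense in which a second learning algorithm "emerges out of thin air" from the slow one.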
And as we were kind of realizing that this is a thing, it just so happened that the group
that was working on this included a bunch of neuroscientists.
And it started kind of ringing a bell for us, which is to say that we thought, this sounds a lot like the distinction between synaptic memory and activity-based memory in the brain.
It also reminded us of recurrent connectivity that's very characteristic of prefrontal function.
So, this is kind of why it's good to have people working on AI that know a little bit about neuroscience and vice versa,
because we started thinking
about whether we could apply this principle to neuroscience.
And that's where the paper came from.
So from the kind of recurrence you can see in the prefrontal cortex, you start to realize that it's possible for something like learning to learn to emerge from this learning process,
as long as you keep varying the environment sufficiently.
Exactly.
So the kind of metaphorical transition we made to neuroscience was to think, okay, well,
we know that the prefrontal cortex is highly recurrent.
We know that it's an important locus for working memory for activation-based memory. So maybe the
prefrontal cortex supports reinforcement learning. In other words, what is reinforcement learning? You take an action, you see how much reward you got, you update your policy of behavior. Maybe the prefrontal cortex is doing that
sort of thing strictly in its activation patterns. It's keeping around a memory in its activity patterns of what you
did, how much reward you got, and it's using that activity-based memory as a basis for
updating behavior. But then the question is, well, how did the prefrontal cortex get so
smart? In other words, where did these activity dynamics come from?
How did that program that's implemented
in the recurrent dynamics of the prefrontal cortex
arise?
And one answer that became evident in this work
was, well, maybe the mechanisms that operate
on the synaptic level, which we believe are mediated
by dopamine, are responsible for shaping those dynamics.
So this may be a silly question, but because several temporal classes of learning are happening here and learning to learn emerges, can you keep building stacks of learning to learn to learn to learn? Basically, abstractions of more and more powerful abilities to generalize, to learn complex rules? Or is that overstretching this kind of mechanism?
Well, one of the people in AI who started thinking about meta-learning from very early on, Jürgen Schmidhuber, sort of cheekily suggested, I think it may have been in his PhD thesis, that we should think about meta-meta-meta-meta-meta-learning.
You know, that's really what's going to get us to true intelligence.
Certainly, there's a poetic aspect to it and it seems interesting and correct that that
kind of levels of abstraction would be powerful.
But is that something you see in the brain?
Is it useful to think of learning in this meta, meta, meta way, or is it just
meta-learning?
Well, one thing that really fascinated me about this mechanism that we were starting to
look at, and other groups started talking about very similar things at the same time,
and then a kind of explosion of interest in meta-learning happened in the
AI community shortly after that.
I don't know if we had anything to do with that, but I was gratified to see that a lot
of people started talking about meta-learning.
One of the things that I like about the kind of flavor of meta-learning that we were studying
was that it didn't require anything special.
It was just, if you took a system that had some form of memory,
the function of which could be shaped by, pick your RL algorithm,
then this would just happen.
Yes.
I mean, there are a lot of meta-learning algorithms that have been proposed since then
that are fascinating and effective in their
domains of application.
But they're engineered.
There are things where somebody had to say, well, gee, if we wanted meta-learning to happen,
how would we do that? Here's an algorithm that would do it.
But there's something about the kind of meta-learning
that we were studying that seemed to me special in the sense that it wasn't an algorithm.
It was just something that automatically happened
if you had a system that had memory
and it was trained with reinforcement learning algorithm.
And in that sense, it can be as meta as it wants to be, right?
There's no limit on how abstract the meta learning can get
because it's not reliant on a human engineering
a particular meta-learning algorithm to get there.
And, I don't know, I guess I hope that that's relevant in the brain.
I think there's a kind of beauty in the emergent aspect of it,
as opposed to it being engineered.
Exactly.
It's something that just happens in a sense.
In a sense, you can't avoid this happening.
If you have a system that has memory
and the function of that memory is shaped
by reinforcement learning,
and this system is trained in a series
of interrelated tasks, this is gonna happen.
You can't stop it.
As long as you have certain properties, maybe like a recurrent structure to...
You have to have memory.
It actually doesn't have to be a recurrent neural network.
One of the papers that I was honored to be involved with, even earlier, used a kind of slot-based memory.
Do you remember the title?
It was memory-augmented neural networks.
I think the title was Meta-Learning with Memory-Augmented Neural Networks.
It was the same exact story.
If you have a system with memory, here it was a different kind of memory, but the function
of that memory is shaped by reinforcement learning.
Here, it was the reads and writes that occurred on this slot-based memory.
This will just happen.
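For flavor, here is a bare-bones sketch of what "slot-based" memory can mean. It is far simpler than the memory-augmented networks in the actual paper, and all names here are illustrative: a fixed set of slots that a controller writes into, and reads from by content-based similarity, with RL shaping how those reads and writes get used.

```python
# A minimal, hypothetical slot memory: write into slots, read by similarity.
import numpy as np

class SlotMemory:
    def __init__(self, slots=8, width=16):
        self.M = np.zeros((slots, width))   # the memory slots
        self.next = 0

    def write(self, vec):
        self.M[self.next % len(self.M)] = vec   # write into the next slot
        self.next += 1

    def read(self, query):
        scores = self.M @ query                 # content-based addressing
        w = np.exp(scores - scores.max())
        w /= w.sum()
        return w @ self.M                       # soft read across all slots
```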
And so this brings us back to something I was saying earlier about the importance of the environment.
This will happen if the system is being trained in a setting
where there's like a sequence of tasks that all share some abstract structure. Sometimes
we talk about task distributions. That's something that's very obviously true of the world that humans inhabit. Like, if you just kind of think
about what you do every day,
you never do exactly the same thing
that you did the day before.
But everything that you do
has a family resemblance,
it shares structure with something that you did before.
And so, the real world is sort of, you know, saturated with this kind of
this property. It's endless variety with endless redundancy. And that's the setting in which
this kind of meta-learning happens.
And it does seem like we're just so good at finding, just like in this emergent phenomenon you describe, we're really good at finding that redundancy, finding those similarities,
the family resemblance.
Some people call it sort of, what is it?
Melanie Mitchell is talking about analogies.
So we're able to connect concepts together in this kind of way,
in this same kind of automated emergent way,
which there's so many echoes here of psychology and neuroscience
and obviously now with reinforcement learning
with recurrent neural networks at the core.
If we could talk a little bit about dopamine: you co-authored a really exciting recent paper, very recent in terms of release, on dopamine and temporal difference learning.
Can you describe the key ideas of that paper?
Sure, yeah.
One thing I want to pause to do is acknowledge my co-authors
on actually both of the papers we're talking about.
So this dopamine paper.
I'll certainly post all their names.
OK, wonderful.
Oh, yeah.
Because I'm sort of abashed to be the spokesperson for these papers when I had
such amazing collaborators on both.
So it's a comfort to me to know that you'll acknowledge them.
Yeah, this is an incredible team there, but yeah.
Oh, yeah, it's such a, it's so much fun.
And in the case of the dopamine paper, we also collaborated with Nao Uchida at Harvard,
without whom the paper simply wouldn't have happened.
But so you were asking for like a thumbnail sketch of?
Yes, a thumbnail sketch or the key ideas, the insights that continue our kind of discussion here
between neuroscience and AI.
Yeah, I mean, this was another, a lot of the work that we've done so far is taking ideas
that have bubbled up in AI and, you know, asking the question of whether the brain might be doing
something related, which I think on the surface sounds like something that's really mainly of use to neuroscience.
We see it also as a way of validating what we're doing
on the AI side.
If we can gain some evidence that the brain is using
some technique that we've been trying out in our AI work,
that gives us confidence that it may be a good idea, that it'll scale to rich, complex tasks, that it'll interface well with other mechanisms.
So you see it as a two-way road.
Yeah, for sure.
Just because a particular paper is a little bit focused on going in one direction, from AI, from neural networks, to neuroscience, ultimately the discussion, the thinking, the productive long-term aspect of it is the two-way road nature of the whole endeavor.
Yeah, I mean, we've talked about the notion of a virtuous circle between AI and neuroscience.
And, you know, the way I see it, that's always been there since the two fields jointly existed.
There have been some phases in that history when AI was sort of ahead.
There are some phases when neuroscience was sort of ahead.
I feel like given the burst of innovation that's happened recently on the AI side, AI is
kind of ahead in the sense that there are all of these ideas for which it's exciting to consider that there might be neural analogs. And neuroscience,
in a sense, has been focusing on approaches to studying behavior that are kind of derived from this earlier era of cognitive psychology, and so in some ways fails to connect with some of the issues that we're grappling with in AI, like how do we deal with large, complex environments.
But I think it's inevitable that this circle will keep turning, and there will be a moment in the not too distant future when neuroscience is pelting AI researchers with insights that may change the direction of our work.
Just a quick human question. You have parts of your brain, this is very meta, that are able to think about both neuroscience and AI. I don't often meet people like that. So let me ask a meta-plasticity question: do you think a human being can be good at both AI and neuroscience? On the team at DeepMind, what kind of human can occupy these two realms? Is that something you see everybody should be doing, can be doing, or is it a very special few who can make that jump? Just like we talked about with art history, I would think it's a special person who can major in art history and also consider being a surgeon.
Otherwise known as a dilettante.
A dilettante, yeah.
Easily distracted.
No.
I think it does take a special kind of person
to be truly world class at both AI and neuroscience.
And I am not on that list.
I happen to be someone whose interest in neuroscience and psychology involved using the kinds of modeling techniques that are now very central in AI, and that sort of, I guess, bought me a ticket to be involved in all of the amazing things that are going on in AI research right now.
I do know a few people who I would consider
pretty expert on both fronts
and I won't embarrass them by naming them,
but there are like exceptional people out there
who are like this.
The one thing that I find is a barrier to being truly world class on both fronts is the complexity of the technology that's involved in both disciplines now.
The engineering expertise that it takes to do truly frontline, hands-on AI research is really, really considerable.
The learning curve of the tools, the specifics of everything, whether it's programming or the other kinds of tools necessary to collect the data, to manage the data, to distribute the compute, all that kind of stuff.
And on the neuroscience side, I guess, there's a whole different set of tools.
Exactly, especially with the recent explosion
in neuroscience methods.
So, having said all that, I think the best scenario for both neuroscience and AI is to have people interacting who live at every point on this spectrum, from exclusively focused on neuroscience to exclusively focused on the engineering side of AI, but to have those people inhabiting a community where they're talking to people who live elsewhere on the spectrum.
And I may be someone who's very close to the center,
in the sense that I have one foot in the neuroscience world,
and one foot in the AI world.
That central position, I will admit, prevents me, or at least someone with my limited cognitive capacity, from having truly deep technical expertise in either domain. But at the same time,
I at least hope that it's worthwhile having people around who can kind of, you know, see the
connections.
Yeah, the emergent intelligence of the community. The distributed nature is useful.
Exactly. So hopefully, I mean, I've seen that work out well at DeepMind.
There are people, I mean, even if you just focus on the AI work that happens at DeepMind, it's been a good thing to have some people around doing that kind of work whose PhDs are in neuroscience or psychology. Every academic discipline has its kind of blind spots and kind of
unfortunate obsessions and its metaphors and its reference points. And having some intellectual
diversity is really healthy. People get each other unstuck, I think. I see it all the time at DeepMind. And I like to think that the people who bring some neuroscience
background to the table are helping with that.
So one of my deepest passions, what I would say, and maybe we kind of spoke off mic a little bit about it, but what I think is a blind spot for at least robotics and AI folks, is human-robot interaction, human-agent interaction. Maybe you have thoughts about how we reduce the size of that blind spot? Do you also share the feeling that not enough folks are studying this aspect of interaction?
Well, I'm actually pretty intensively interested in this issue now, and there are people in my group
who've actually pivoted pretty hard over the last few years from doing more traditional cognitive
psychology and cognitive neuroscience to doing experimental work on human
agent interaction. And there are a couple of reasons that I'm pretty passionately interested
in this. One is, it's kind of the outcome of having thought for a few years now about
what we're up to. Like, what are we doing? What is this AI research for?
So what does it mean to make the world a better place? I'm pretty sure that means making life
better for humans. And so how do you make life better for humans? That's a proposition that when you look at it
carefully and honestly, is rather horrendously complicated,
especially when the AI systems that you're building
are learning systems.
They're not, you're not programming something
that you then introduced to the world
and it just works as programmed,
like Google Maps or something.
We're building systems that learn from experience.
So that typically leads to AI safety questions.
How do we keep these things from getting out of control?
How do we keep them from doing things that harm humans? And I mean, I hasten to say, I consider those hugely important issues,
and there are large sectors of the research community at DeepMind, and of course elsewhere,
who are dedicated to thinking hard all day every day about that. But I guess I would say there's
a positive side to this too, which is to say,
well, what would it mean to make human life better?
And how can we imagine learning systems doing that?
And in talking to my colleagues about that,
we reached the initial conclusion that it's not sufficient to philosophize about that. You actually have to take into account how humans actually work and what humans want, and the difficulties of knowing what humans want, and the difficulties that arise when humans want different things. And so human-agent interaction has become a quite intensive focus of my group lately.
If for no other reason that, in order to really address that issue in an adequate way, you
have to, I mean, psychology becomes part of the picture.
Yeah. And so there's a few elements there. If you focus on the robotics problem, say AGI, without humans in the picture, you're missing fundamentally the final step. When you do want to help human civilization,
you eventually have to interact with humans. And when you create a learning system, just as you said, that will eventually have to interact with humans, the interaction itself has to become part of the learning process.
Right.
So, well, my sense is, and it sounds like your sense is, you can't just watch humans to learn about humans. You have to also be part of the human world. You have to interact with humans.
Yeah, exactly. And I mean, then questions arise that
start
imperceptibly but inevitably to slip beyond the realm of engineering. So, questions like: if you have an agent that can do something that you can't do, under what conditions do you want that agent to do it?
So if I have a robot that can play Beethoven sonatas better than any human, in the sense that the sensitivity, the expression, is just beyond what any human can do, do I want to listen to that?
Do I want to go to a concert and hear a robot play?
These aren't engineering questions.
These are questions about human preference and human culture and psychology, bordering
on philosophy.
Yeah.
And then you start asking, well, even if we knew the answer to that, is it our place as AI engineers to build that into these agents? Probably the agents should interact with humans beyond the population of AI engineers and figure out what those humans want.
Yeah. And then, I referred to this a moment ago, but even that becomes complicated. Because, what if two humans want different things?
And you have only one agent that's able to interact
with them and try to satisfy their preferences.
Then you're into the realm of economics and social choice
theory and even politics.
So there's a sense in which if you kind of follow what
we're doing to its logical conclusion,
then it goes beyond questions of engineering and technology
and starts to shade imperceptibly into questions
about what kind of society do you want?
And actually that, once that dawned on me,
I actually felt, I don't know what the right word is,
quite refreshed in my involvement in AI research.
It was almost like, building this kind of stuff is gonna lead us back to asking really fundamental questions about, what is this? Like, what's the good life? And who gets to decide? And bringing in viewpoints from multiple subcommunities to help us shape the way that we live. It started making me feel like doing AI research in a fully responsible way could potentially lead to a kind of cultural renewal.
Yeah, it's a way to understand human beings at the individual and the societal level, and maybe becomes a way to answer all the human questions, the meaning of life and all those kinds of things.
Even if it doesn't give us a way of answering those questions, it may force us back to thinking about them. And it might restore a certain, I don't know, a certain depth, or even, dare I say, spirituality, to the way that we relate to the world. I don't know. Maybe that's too grandiose.
Well, I'm with you. I think AI will be the philosophy of the 21st century, the way which will open the door.
I think a lot of AI researchers are afraid to open that door of exploring the beautiful
richness of the human agent interaction, human AI interaction.
I'm really happy that somebody like you has opened that door.
And one thing I often think about is, you know, the usual schema for thinking about human-agent interaction is this kind of dystopian, oh, our robot overlords. And again, I hasten to say AI safety is hugely important, and I'm not saying we shouldn't be thinking about those risks. Totally on board for that. But having said that, what often follows for me is the thought that there's another kind of narrative that might be relevant, which is, when we think of humans gaining more and more information about human life, the narrative there is usually that they gain more and more wisdom, they get closer to enlightenment, and they become more benevolent. The Buddha, like, that's a totally different narrative. So why isn't it the case that we imagine that the AI systems we're creating are just gonna figure out more and more about the way the world works and the way that humans interact, and they'll become beneficent?
I'm not saying that will happen. I don't honestly expect that to happen without setting things up very carefully, but it's another way things could go, right?
Yeah, and I would even push back on that. I personally believe that most trajectories, natural human trajectories, will lead us towards progress. There's a kind of sense that most trajectories in AI development will lead us into trouble, and to me, we over-focus on the worst case. It's like in computer science, in theoretical computer science, there's been this focus on worst-case analysis. There's something appealing about that to our human mind at some low level. I mean, we don't want to be eaten by the tiger, I guess.
So we want to do the worst-case analysis, but the reality is that shouldn't stop us from actually building out all the other trajectories, which are potentially leading to all the positive worlds, all the enlightenment, the book Enlightenment Now with Steven Pinker, and so on.
This is looking generally at human progress.
And there's so many ways that human progress can happen
with AI.
And I think you have to do that research.
You have to do that work.
You have to do not just the AI safety work of the worst-case analysis, how do we prevent that, but the actual tools and the glue and the mechanisms of human-AI interaction that would lead to all the positive directions things can go.
Yeah, super exciting area, right?
Yeah, we spend a lot of our time saying what can go wrong. I think it's harder to see that there's work to be done to bring into focus the question of what it would look like for things to go right. That's not obvious.
We wouldn't be doing this if we didn't have the sense there was huge potential.
We're not doing this for no reason. We have a sense that AGI would be a major boon to humanity.
But I think it's worth starting now, even when our technology is quite primitive,
asking, well, exactly what would that mean? We can start now with applications that are already going to make the world a better place, like, you know, solving protein folding. I think DeepMind has gotten heavily into science applications lately, which I think is a wonderful move for us to be making.
But when we think about AGI, when we think about building,
you know, fully intelligent agents that are going to be able
to, in a sense, do whatever they want, you know,
we should start thinking about what do we want them to want,
right? But what kind of world do we want to live in? That's not an easy question. And I think
we just need to start working on it.
And even on the path there, it doesn't have to be AGI, just intelligent agents that interact with us and help us enrich our own existence: on social networks, for example, on recommender systems and various intelligent systems.
There's so much interesting interaction that's yet to be understood and studied, and how do you create it?
I mean, Twitter is struggling with this very idea.
How do you create AI systems that increase the quality
and the health of a conversation?
For sure.
That's a beautiful, beautiful human psychology question.
And how do you do that without deception being involved,
without manipulation being involved,
maximizing human autonomy,
and how do you make these choices in a democratic way?
How do we face, again, I'm speaking for myself here, how do we face the fact that it's a small group of people who have the skill set to build these kinds of systems, but what it means to make the world a better place is something that we all have to be talking about?
Yeah, the world that we're trying to make a better place includes a huge variety of different kinds of people.
Yeah, how do we cope with that?
This is a problem that has been discussed in gory, extensive detail in social choice theory. One thing I'm really enjoying about the recent direction work has taken in some parts of my team is that, yeah, we're reading the AI literature, we're reading the neuroscience literature, but we've also started reading economics and, as I mentioned, social choice theory, even some political theory, because it turns out that it all becomes relevant.
But at the same time, we've been trying not to write philosophy papers. We've been trying not to write position papers. We're trying to figure out ways of doing actual empirical research that kind of takes the first small steps to thinking about what it really means for humans, with all of their complexity and contradiction and paradox, to be brought into contact with these AI systems in a way that really makes the world a better place.
And often the reinforcement learning framework, machine learning, actually kind of allows you to do that.
And so that's the exciting thing about AI
is it allows you to reduce the unsolvable problem,
philosophical problem into something more concrete
that you can get a hold of.
Yeah, and it allows you to kind of define the problem
in some way that allows for growth in the system that's sort
of, you know, you're not responsible for the details, right?
You say, this is generally what I want you to do, and then learning takes care of the
rest.
Of course, the safety issues arise in that context, but I think also some of these positive issues arise in that context.
What would it mean for an AI system to really come to understand what humans want?
And you know, with all of the subtleties of that, right?
Humans want help with certain things, but they don't want everything done for them. Part of the satisfaction that humans get from life is in accomplishing things. So if there were devices around that did everything for us, I often think of the movie WALL-E, that's dystopian in a totally different way. The machines are doing everything for us. That's not what we wanted.
Anyway, I find this opens up a whole landscape of research
that feels affirmative.
And exciting.
To me, it's one of the most exciting and it's wide open.
Yeah.
We have to, because it's a cool paper,
talk about dopamine.
Oh, yeah, okay, so let's see. I was gonna give you a quick summary.
Yeah, a quick summary of what's the title of the paper?
I think we called it A Distributional Code for Value in Dopamine-Based Reinforcement Learning.
Yes. That's another project that grew out of pure AI research. A number of people at DeepMind and a few other places had started working
on a new version of reinforcement learning, which was defined by taking something in traditional
reinforcement learning and just tweaking it. So the thing that they took from traditional reinforcement
learning was a value signal. So at the center of reinforcement learning, at least most algorithms,
is some representation of how well things are going.
You're expected cumulative future reward.
And that's usually represented as a single number.
So if you imagine a gambler in a casino, and the gambler is thinking, well, I have this probability of winning such-and-such an amount of money, and I have this probability of losing such-and-such an amount of money, that situation would be represented as a single number, which is like the expected, the weighted average, of all those outcomes.
And this new form of reinforcement learning said, well, what if we generalize that to
a distributional representation?
So now we think of the gambler as literally thinking, well, there's this probability that I'll win this amount of money, and there's this probability that I'll lose that amount of money, and we don't reduce that to a single number.
And it had been observed through experiments, through, you know, just trying this out, that that kind of distributional representation really accelerated reinforcement learning and led to better policies.
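As a minimal sketch of the change being described, in my own words rather than the paper's code: classical RL compresses the gambler's prospects into one expected value, while distributional RL keeps the whole set of outcomes and probabilities.

```python
# Scalar vs distributional value for the gambler example (illustrative numbers).
import numpy as np

outcomes = np.array([-10.0, 0.0, +20.0])   # possible wins and losses
probs    = np.array([0.3, 0.5, 0.2])

scalar_value = float(outcomes @ probs)              # classic RL: one expectation
distributional_value = list(zip(outcomes, probs))   # distributional RL: keep it all

print("scalar value:", scalar_value)                # 1.0
print("distributional value:", distributional_value)
```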
So we're talking about rewards. What's your intuition for why that is?
Well, it's kind of a surprising historical note,
at least it surprised me when I learned it, that this had been tried out in a kind of heuristic way. People thought, well, gee, what would happen if we tried it? And then, empirically, it had this striking effect, and it was only then that people started thinking, well, gee, wait, why? Why is this working?
And that's led to a series of studies just trying to figure out why it works, which
is ongoing.
But one thing that's already clear from that research is that one reason that it helps
is that it drives richer representation learning.
So if you imagine two situations that have the same expected value, the same kind of weighted average value, standard deep reinforcement learning algorithms are going to take those two situations and, in terms of the way they're represented internally, squeeze them together, because the thing that you're trying to represent, which is their expected value, is the same. So all the way through the system, things are going to be mushed together. But what if those two situations actually have different
value distributions? They have the same average value, but they have different distributions
of value. In that situation, distributional learning will maintain the distinction between these two things. So to make a long story short, distributional learning can keep things separate
in the internal representation
that might otherwise be conflated or squished together.
And maintaining those distinctions can be useful when the system is later faced with some other task where the distinction is important.
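Here is a tiny made-up numerical example of that point: two situations with identical expected value but different return distributions, which a scalar critic cannot tell apart, but whose quantiles, the kind of thing distributional RL learns, clearly differ.

```python
# Same mean, different distribution: what a scalar critic conflates.
import numpy as np

rng = np.random.default_rng(1)
returns_a = np.full(10_000, 0.5)                      # always exactly 0.5
returns_b = (rng.random(10_000) < 0.5).astype(float)  # 0 or 1, fifty-fifty

# A scalar (expected-value) critic sees them as identical...
print(returns_a.mean(), returns_b.mean())     # both ~0.5

# ...but their quantiles differ, so a distributional critic keeps them apart:
print(np.quantile(returns_a, [0.1, 0.9]))     # [0.5 0.5]
print(np.quantile(returns_b, [0.1, 0.9]))     # [0.  1. ]
```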
If we look at the optimistic and pessimistic dopamine neurons, so first of all, what is dopamine? Why is this at all useful to think about in an artificial intelligence sense?
But what do we know about dopamine in the human brain?
What is it?
Why is it useful?
Why is it interesting? What does it have to do with the prefrontal cortex
and learning in general?
Yeah, so, well, this is also a case where
there's a huge amount of detail and debate,
but one currently prevailing idea is that
the function of this neurotransmitter dopamine resembles
a particular component of standard reinforcement learning algorithms, which is called the
reward prediction error.
So I was talking a moment ago about these value representations.
How do you learn them?
How do you update them based on experience?
Well, if you made some prediction about a future reward and then you get more reward than
you were expecting, then probably retrospectively, you want to go back and increase the value
representation that you attached to that earlier situation.
If you got less reward than you were expecting, you should probably decrement that estimate.
And that's the process of temporal difference learning.
Exactly. This is the central mechanism of temporal difference learning, which is one of several algorithms that form the sort of backbone of our armamentarium in RL. And it was this connection between the reward prediction error and dopamine that was made in the 1990s.
And there's been a huge amount of research that seems to back it up.
Dopamine may be doing other things, but this is clearly, at least roughly,
one of the things that it's doing.
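For reference, the textbook temporal-difference update looks like the following. This is a generic TD(0) sketch, not code from the paper; the delta term is the reward prediction error that dopamine is thought to resemble.

```python
# Standard TD(0): nudge an earlier value estimate by the prediction error.
def td_update(V, s, r, s_next, alpha=0.1, gamma=0.99):
    delta = r + gamma * V[s_next] - V[s]   # reward prediction error:
                                           # more reward than expected -> delta > 0
    V[s] += alpha * delta                  # adjust the earlier estimate up or down
    return delta                           # the dopamine-like surprise signal

V = {"cue": 0.0, "outcome": 0.0}
for _ in range(100):
    td_update(V, "cue", r=1.0, s_next="outcome")
print(V["cue"])   # climbs toward the (discounted) expected reward of 1.0
```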
But the usual idea was that dopamine was representing these reward prediction errors, again, in this single-number way, representing your surprise with a single number. And in distributional reinforcement learning, this new elaboration of the standard approach, it's not only the value function that's no longer represented as a single number, it's also the reward prediction error.
And so what happened was that Will Dabney, one of my collaborators, who was one of the first people to work on distributional temporal difference learning, talked to a guy in my group, Zeb Kurth-Nelson, who's a computational neuroscientist, and said, gee, is it possible that dopamine might be doing something like this distributional coding thing?
And they started looking at what was in the literature, and then they brought me in.
We started talking to Nao Uchida, and we came up with some specific predictions about, you know, if the brain is using this kind of distributional coding, then in the tasks that Nao has studied, you should see this, this, this, and this. And that's where the paper came from.
We kind of enumerated a set of predictions, all of which ended up being fairly clearly
confirmed, and all of which leads to at least some initial indication that the brain might
be doing something like this distributional coding, that dopamine might be representing
surprise signals in a way that
is not just collapsing everything to a single number, but instead is kind of respecting the
variety of future outcomes, if that makes sense.
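A minimal sketch of the kind of mechanism the paper explores, in a deliberately simplified form with illustrative numbers and no temporal bootstrapping: a population of value predictors that scale positive and negative prediction errors asymmetrically, so that pessimistic cells settle low in the reward distribution, optimistic cells settle high, and the population as a whole encodes the distribution.

```python
# Asymmetric prediction-error scaling: each "cell" learns a different
# expectile of the reward distribution (a simplified illustration).
import numpy as np

rng = np.random.default_rng(0)
rewards = rng.choice([0.0, 1.0, 10.0], size=50_000, p=[0.5, 0.4, 0.1])

asymmetries = [0.1, 0.3, 0.5, 0.7, 0.9]  # tau: weight on positive vs negative errors
values = np.zeros(len(asymmetries))
lr = 0.01
for r in rewards:
    for i, tau in enumerate(asymmetries):
        delta = r - values[i]                        # per-cell prediction error
        scale = tau if delta > 0 else (1 - tau)      # optimistic or pessimistic
        values[i] += lr * scale * delta

print(values.round(2))   # fans out from pessimistic (low) to optimistic (high)
```

The tau = 0.5 cell recovers the ordinary mean; the spread of the others is what lets the population represent the variety of future outcomes rather than a single number.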
So, yeah, so that's suggesting possibly that dopamine has a really interesting representation scheme in the human brain for its reward signal.
Exactly.
That's fascinating.
That's another beautiful example of AI revealing something
nice about neuroscience.
But, essentially, suggesting possibilities.
Well, you never know.
So, the minute you publish a paper like that, the next thing you think is,
I hope that replicates.
I hope we see that same thing in other data sets.
But, of course, several labs now are doing the follow-up experiments.
So we'll know soon.
But it has been a lot of fun for us to take these ideas from AI and kind of bring them
into neuroscience and see how far we can get.
So we kind of talked about it a little bit, but where do you see the field of neuroscience
and artificial intelligence heading broadly? Like, what are the possible exciting areas where you can see breakthroughs in the next, let's get crazy, not just three or five years, but the next 10, 20, 30 years, that would make you excited, and perhaps that you'll be part of?
On the neuroscience side, there's a great deal of interest now in what's going on in AI.
And at the same time, I feel like, in neuroscience, especially the part of neuroscience that's focused on circuits and systems, the kind that's really mechanism-focused, there's been this explosion in new technology, and up until recently, the experiments that have exploited this technology have not involved a lot of interesting behavior.
And this is for a variety of reasons, one of which is in order to employ some of these
technologies, you actually have to, if you're studying a mouse, you have to head-fix the
mouse.
In other words, you have to immobilize the mouse.
And so it's been tricky to come up with ways of eliciting interesting behavior from a mouse that's restrained in this way, but people have begun to create very interesting solutions to this, like virtual reality environments where the animal can kind of move on a trackball.
And as people have kind of begun to explore what you can do with these technologies, I feel like more and more people are asking, well, let's try to bring behavior into the picture.
Let's try to, like, reintroduce behavior, which was supposed to be what this whole thing was about.
And I'm hoping that those two trends, the kind of growing interest in behavior and the widespread
interest in what's going on in AI, will come together to kind of open a new chapter in neuroscience
research where there's a kind of a rebirth of interest in the structure of behavior and its underlying substrates,
but that research is being informed by computational mechanisms
that we're coming to understand in AI.
If we can do that,
then we might be taking a step closer to this utopian future
that we were talking about earlier,
where there's really no distinction
between psychology and neuroscience.
Neuroscience is about studying the mechanisms that underlie whatever it is the brain is for, and what is the brain for? It's for behavior. I feel like we could maybe take a step toward that now
if people are motivated in the right way. You also asked about AI.
So that was the neuroscience question.
You said neuroscience and AI, that's right.
And especially at a place like DeepMind that's interested in both branches.
So what about the engineering of intelligence systems?
I think one of the key challenges
that a lot of people are seeing now in AI is to build systems that have the kind of flexibility that humans have, in two senses. One is that humans can be good at many things. They're not just expert at one thing. They're also flexible in the sense that they can switch between things very easily, and they can pick up new things very quickly, because they're very able to see what a new task has in common with other things that they've done.
And that's something that our AI systems blatantly do not have.
There are some people who like to argue that deep learning and deep RL are simply wrong
for getting that kind of flexibility.
I don't share that belief, but the simpler fact of the matter is we're not building things
yet that do have that kind of flexibility.
And I think the attention of a large part of the AI community is starting to pivot to
that question.
How do we get that?
That's going to lead to a focus on abstraction.
It's going to lead to a focus on what in psychology we call cognitive control, which is the ability
to switch between tasks, the ability to quickly put together
a program of behavior that you've never executed before, but you know makes sense for a particular
set of demands.
It's very closely related to what the prefrontal cortex does on the neuroscience side.
So I think it's going to be an interesting new chapter.
So that's the reasoning side and the cognition side. But let me ask the over-romanticized question: do you think we'll ever engineer an AGI system that we humans would be able to love, and that would love us back? So it would have that level and depth of connection?
I love that question.
And it relates closely to things that I've been thinking about a lot lately, you know,
in the context of this human AI research.
There's social psychology research, in particular by Susan Fiske at Princeton, in the department where I used to work, where she dissects human attitudes toward other humans into a two-dimensional scheme.
And one dimension is about ability.
How able, how capable is this other person.
And the other dimension is warmth.
So you can imagine another person who's very skilled and capable, but is very cold.
And you might have some reservations about that other person, right?
But there's also a kind of reservation that we might have about another person who elicits in us, or displays, a lot of human warmth, but is, you know, not good at getting things done, right?
We reserve our greatest esteem really for people who are both highly capable and also quite warm, right? That's like the best of the best. And this isn't a normative statement I'm making, it's just an empirical statement: these are the two dimensions along which people seem to size each other up. And in AI research, we really focus on this capability thing.
You're like, we want our agents to be able to do stuff.
This thing can play go at a superhuman level.
That's awesome.
But that's only one dimension.
What about the other dimension?
What would it mean for an AI system to be warm?
And I don't know.
Maybe there are easy solutions here
like we can put a
face on our AI systems. It's cute. It has big ears. I mean, that's probably part of it.
But I think it also has to do with a pattern of behavior. What would it mean for an AI system to display caring, compassionate behavior in a way that actually made us feel like it was for real? We didn't feel like it was simulated, we didn't feel like we were being duped.
To me, people talk about the Turing test
or some descendant of it.
I feel like that's the ultimate Turing test.
Is there an AI system that can not only convince us
that it knows how to reason,
and it knows how to interpret
language, but that we're comfortable saying, yeah, that AI system is a good guy.
You know, on the warm scale, whatever warmth is, we kind of intuitively understand it, but we don't understand it explicitly enough yet to be able to engineer it.
Exactly.
And that's an open scientific question.
You kind of alluded it several times in the human AI interaction.
That's a question that should be studied.
And probably one of the most important questions.
And human to agent.
We humans are so good at it.
Yeah.
You know, it's not just wired in. It's not just that we're born warm, you know?
Like I suppose some people are warmer than others
given whatever genes they manage to inherit.
But there are also learned skills involved, right?
I mean, there are ways of communicating to other people
that you care, that they matter to you,
that you're enjoying interacting with them, right?
And we learn these skills from one another,
and it's not out of the question
that we could build engineered systems.
I think it's hopeless, as you say, to think that we could somehow hand-design these sorts of behaviors, but it's not out of the question that we could build systems where we instill in them something that sets them off in the right direction, so that they end up learning what it is to interact with humans in a way that's gratifying to humans. I mean, honestly, if that's not where we're headed,
I want out.
I think it's exciting as a scientific problem, just as you described. I honestly don't see a better way to end it than talking about warmth and love. And Matt, I don't think I've ever had such a wonderful conversation where my questions were so bad and your answers were so beautiful. So I deeply appreciate it. I really enjoyed it.
Well, it's been very fun. As you can probably tell, there's something I like about kind of thinking outside the box. So it's been good having fun with that.
I do too. Awesome. Thanks so much for doing it.
Thanks for listening to this conversation with Matt Botvinick, and thank you to our sponsors, the Jordan Harbinger Show and Magic Spoon low-carb keto cereal.
Please consider supporting this podcast by going to JordanHarbinger.com slash Lex, and also
going to Magic Spoon.com slash Lex and using code Lex at checkout.
Click the links, buy all the stuff.
It's the best way to support this podcast
and the journey I'm on in my research and the startup.
If you enjoy this thing, subscribe on YouTube,
review it with five stars on Apple Podcasts,
support it on Patreon, follow on Spotify,
or connect with me on Twitter at Lex Fridman, again, spelled miraculously without the E,
just F-R-I-D-M-A-N.
And now, let me leave you with some words from neurologist V.S. Ramachandran.
How can a three-pound mass of jelly that you can hold in your palm imagine angels, contemplate the meaning of infinity, and even question its own place in the cosmos? Especially awe-inspiring is
the fact that any single brain, including yours, is made up of atoms that were
forged in the hearts of countless far-flung stars billions of years ago.
These particles drifted for eons and light-years until gravity and chance brought them together here and now. These atoms now form a conglomerate, your brain, that can not only ponder the very stars that gave it birth, but can also think about its own ability to think and wonder about its own ability to wonder. With the arrival of humans, it has
been said, the universe has suddenly become conscious of itself. This truly is the greatest
mystery of all. Thank you for listening and hope to see you next time.