Lex Fridman Podcast - #76 – John Hopfield: Physics View of the Mind and Neurobiology
Episode Date: February 29, 2020

John Hopfield is a professor at Princeton whose life's work weaved beautifully through biology, chemistry, neuroscience, and physics. Most crucially, he saw the messy world of biology through the piercing eyes of a physicist. He is perhaps best known for his work on associative neural networks, now known as Hopfield networks, which were one of the early ideas that catalyzed the development of the modern field of deep learning.

EPISODE LINKS:
Now What? article: http://bit.ly/3843LeU
John's Wikipedia: https://en.wikipedia.org/wiki/John_Hopfield
Books mentioned:
- Einstein's Dreams: https://amzn.to/2PBa96X
- The Mind is Flat: https://amzn.to/2I3YB84

This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast, go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube, where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon. This episode is presented by Cash App. Download it (App Store, Google Play), use code "LexPodcast".

Here's the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.

OUTLINE:
00:00 - Introduction
02:35 - Difference between biological and artificial neural networks
08:49 - Adaptation
13:45 - Physics view of the mind
23:03 - Hopfield networks and associative memory
35:22 - Boltzmann machines
37:29 - Learning
39:53 - Consciousness
48:45 - Attractor networks and dynamical systems
53:14 - How do we build intelligent systems?
57:11 - Deep thinking as the way to arrive at breakthroughs
59:12 - Brain-computer interfaces
1:06:10 - Mortality
1:08:12 - Meaning of life
Transcript
The following is a conversation with John Hopfield, a professor at Princeton, whose life's work
weaved beautifully through biology, chemistry, neuroscience, and physics.
Most crucially, he saw the messy world of biology through the piercing eyes of a physicist.
He's perhaps best known for his work on associative neural networks, now known as Hopfield
networks, that were one of the
early ideas that catalyzed the development of the modern field of deep learning.
As his 2019 Benjamin Franklin Medal in Physics citation states, he applied concepts of theoretical
physics to provide new insights on important biological questions in a variety of areas,
including genetics and neuroscience, with significant impact on machine learning.
And as John says in his 2018 article titled
Now What?, his accomplishments have often come about by asking that very question, now what,
and often responding by a major change of direction.
This is the Artificial Intelligence Podcast.
If you enjoy it, subscribe on YouTube, give it five stars on Apple Podcasts, support it on
Patreon, or simply connect with me on Twitter, at Lex Fridman, spelled F-R-I-D-M-A-N.
As usual, I'll do one or two minutes of ads now, and never any ads in the middle
that can break the flow of the conversation.
I hope that works for you and doesn't hurt the listening experience.
This show is presented by CashApp, the number one finance app in the App Store.
When you get it, use code Lex Podcast.
Cash App lets you send money to friends, buy Bitcoin, and invest in the stock market with
as little as one dollar.
Since CashApp does fractional share trading, let me mention that the order execution algorithm
that works behind the scenes to create the abstraction of fractional orders is to me an algorithmic
marvel.
So big props to the CashApp engineers for solving a hard problem that in the end provides
an easy interface that takes a step up the next layer of abstraction
over the stock market, making trading more accessible for new investors and diversification
much easier.
So again, if you get Cash App from the App Store or Google Play and use code LexPodcast,
you'll get $10, and Cash App will also donate $10 to FIRST, one of my favorite organizations
that is helping advance robotics and STEM education for young people around the world.
And now, here's my conversation with John Hopfield.

What difference between biological neural networks and artificial neural networks is most
captivating and profound to you?
At the higher philosophical level, let's not get technical just yet.
One of the things that very much intrigues me is the fact that neurons have all kinds of
components, properties to them.
And in evolutionary biology, if you have some little quirk in how a molecule works or how
a cell works, and it can be made use of, evolution will sharpen it up and make it into
a useful feature rather than a glitch.
And so you expect, in neurobiology, for evolution to have captured all kinds of possibilities
of how you get neurons to do things for you. And that aspect has been completely suppressed
in artificial neural networks.
So the glitches become features in the biological neural network. They can.
Let me take one of the things that I used to do research on.
If you take things which oscillate, they have rhythms, which are sort of close to each
other.
Under some circumstances, these things will have a phase transition, and suddenly
the rhythm, everybody will fall into step. There was a marvelous physical
example of that in the Millennium Bridge across the Thames River,
built about 2000. Pedestrians walking across... pedestrians don't walk
synchronized, they don't walk in lockstep. But the bridge was oscillating back and forth,
and the pedestrians were walking in step to it. You could see it in the movies made of the bridge.
And the engineers made a simple engineering mistake. When you walk, it's
step, step, step, step, and there's a back-and-forth motion. But when you walk, it's also right foot, left
foot, a side-to-side motion. And for the side-to-side motion, the bridge was strong enough,
but it wasn't stiff enough. And as a result, you would feel the
motion and you'd fall into step with it. And people were very uncomfortable with it. They
closed the bridge for two years while they totally rebuilt the stiffening for it.

Now, nerve
cells produce action potentials. A bunch of cells which are loosely coupled together, producing action potentials at the same rate, there'll be some circumstances under which these things
can lock together. Other circumstances in which they won't. Well, if they fire together,
you can be sure that other cells are going to notice it, so you could make a computational
feature out of this in an evolving brain.
Most artificial neural networks don't even have action potentials, let alone the
possibility of synchronizing them.
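A minimal sketch in Python of the phase-locking phenomenon described above, using the Kuramoto model of coupled oscillators as a stand-in for pedestrians or loosely coupled cells; the model choice, sizes, and coupling values are illustrative assumptions, not anything specified in the conversation:

```python
# Kuramoto model: oscillators with rhythms "sort of close to each other"
# either stay incoherent or fall into step, depending on coupling strength.
# All parameter values here are arbitrary illustrations.
import numpy as np

rng = np.random.default_rng(0)
n = 200
omega = rng.normal(0.0, 0.1, n)            # natural frequencies, close together
theta0 = rng.uniform(0, 2 * np.pi, n)      # random initial phases

def coherence(K, theta=theta0, steps=2000, dt=0.05):
    theta = theta.copy()
    for _ in range(steps):
        r = np.exp(1j * theta).mean()      # mean field felt by every oscillator
        theta += dt * (omega + K * np.abs(r) * np.sin(np.angle(r) - theta))
    return np.abs(np.exp(1j * theta).mean())  # 1.0 = everybody in step

for K in (0.05, 0.5):                      # below vs. above the locking threshold
    print(f"K={K}: coherence {coherence(K):.2f}")
```

Below the locking threshold the rhythms stay incoherent; above it, everybody falls into step, which is exactly the kind of collective feature an evolving brain could notice and use.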
You mentioned the evolutionary process that builds on top of biological systems and leverages
the weird mess of it somehow. So how do you make sense of that ability to leverage all the
different kinds of complexities in the biological brain?

Well, look, at the biological molecule level,
you'd have a piece of DNA which encodes
for a particular protein.
You could duplicate that piece of DNA.
And now, one part of it can code for that protein,
while the other one
could itself change a little bit
and thus start coding for a molecule
which is slightly different. Now, if that molecule, being just slightly different, had a
function which helped any old chemical reaction that was important to the cell,
you would go ahead and let that try, and evolution would slowly improve that function. And so you have the possibility of
duplicating and then having things drift apart, one of them retaining the old function, the other one doing something new for you.
And there's evolutionary pressure to improve.
Look, there is in computers too, where improvement has to do with closing some companies and opening
others. An evolutionary process that looks a little different. Yeah, similar timescale, perhaps?

Much shorter in timescale. Companies close, go bankrupt, and are born... Yeah, shorter, but not much shorter. Some companies last a century,
a couple, but yeah, you're right.
I mean, if you think of companies as single organisms that build, and, you know, it's a fascinating
dual
correspondence there between
the biological and...

And companies have difficulty
having a new product competing with an old product.
And when IBM built its first PC,
you probably read the book.
They made a little isolated internal unit
to make the PC.
And for the first time in IBM's history,
they didn't insist that you build it
out of IBM components.
But they understood that they could only get into this market, which is a very different thing,
by completely changing their culture.
And biology finds other markets in a more adaptive way.
It's better at it. It's better at that kind of
integration. So maybe you've already said it, but what to you is the most
beautiful aspect or mechanism of the human mind? Is it the adaptive, the ability to
adapt as you've described, or is there some other little quirk
that you particularly like?
Adaptation is everything when you get down to it, but there are differences
between adaptation where the learning goes on over generations, in evolutionary time, and adaptation where the learning
goes on at the time scale of one individual who must learn from the environment during
that individual's lifetime.
And biology has both kinds of learning in it. And the thing which makes neurobiology hard is that it's a mathematical system that's
built on this other kind of evolutionary system.
What do you mean by a mathematical system?
Where is the math in the biology?
Well, when you talk to a computer scientist about neural networks, it's all math.
The fact that biology actually came about from evolution and the fact that biology is
about a system which you can build in three dimensions.
If you look at computer chips, computer chips are basically two dimensional structures.
Maybe 2.1 dimensions, but they really have difficulty doing three dimensional wiring.
Biology's neocortex is actually also sheet-like.
And it sits on top of the white matter, which
is about ten times the volume of the gray matter and contains all what you might
call the wires.
But the effect of computer structure on what is easy and what is hard is immense.
And biology makes some things easy that are very difficult to understand how to do computationally. On the other
hand, it can't do simple floating point arithmetic; it's
awfully stupid at that. Yeah. And you're saying this kind of three-dimensional,
complicated structure... it's still math. It's still doing math. But the kind of math it's
doing enables you to solve problems of a very different kind. That's right. That's
right. So you mentioned two kinds of adaptation: the evolutionary adaptation
and the adaptation or learning at the scale
of a single human life. Which is particularly beautiful to you and interesting from a research
and from just a human perspective, and which is more powerful?
I find things most interesting that I begin to see how to get into the edges of them and
tease them apart a little bit to see how they work.
And since I can't see the evolutionary process going on, I am in awe of it, but I find it just a black hole as far as trying to understand what
to do.
And so in a certain sense, I'm in awe of it, but I couldn't be interested in working
on it.
The human lifetime scale is, however, something you can tease apart and study.
Yeah, you can do it.
There's developmental neurobiology,
which understands how the connections and the structure evolve from a combination of
what the genetics is like and the real fact that you're building a system in three dimensions.
In just days and months, those early days of a human life are really interesting.
They are, and of course, there are times of immense cell multiplication.
There are also times of the greatest cell death in the brain;
it is during infancy.
It's turnover.
So what is not effective, what is not wired well enough to be useful at the moment,
throw it out.
It's a mysterious process. Let me ask: from what field do you think the biggest breakthroughs in understanding the mind will come
in the next decades?
Is it neuroscience, computer science, neurobiology, psychology, physics, maybe math, maybe literature?

Well, of course, I see the world always through the lens of physics.
I grew up in physics.
And the way I pick problems is very characteristic of physics and of the intellectual background,
which is not psychology, which is not chemistry and so on and so on.
Both of your parents are physicists.
Both of my parents are physicists, and the real thing I got out of that was a feeling
that the world is an understandable place.
And if you do enough experiments, and think about what they mean, and structure things so that you
can do the mathematics relevant to the experiments,
you ought to be able to understand how things work.
But that was a few years ago.
Did you change your mind at all through many decades of trying to understand the mind,
of studying in different kinds of ways, not even the mind, just biological systems?
Do you still have hope, as a physicist, that you can understand?
There's a question of what do you mean by understand?
Of course, when I taught freshman physics, I used to say
I wanted to get the students to understand the subject, to understand Newton's laws.
I didn't want them simply to memorize a set of examples for which they knew the equations
to write down to generate the answers. I had this nebulous idea of understanding.
So if you looked at a situation, you could say, Oh, I expect the ball to make that trajectory.
There's some intuition of understanding there.
And I don't know how to express that very well.
I've never known how to express it well.
And you run, smack up against it,
when you do these simple neural nets, feed forward neural nets,
which do amazing things.
And yet, you know, they contain nothing of the essence of what I would have felt was understanding.
Understanding is more than just an enormous lookup table.
Let's linger on that.
How sure are you of that? What if the table gets really big?
I mean, asked another way, these feed-forward neural networks, do you think they'll ever understand?
Can I answer that in two ways?
If you look at real systems, feedback is an essential aspect of how these real systems compute. On the other hand, if I have a mathematical system with feedback, I know I can unlayer
this and do it feed-forward. But I have an exponential expansion in the amount of stuff I have to
build if I solve the problem
that way.
So feedback is essential.
So we can talk even about recurrence.
Yeah, absolutely.
So recurrence.
But do you think all the pieces are there to achieve understanding through these simple
mechanisms?
Like, back to our original question: is there a fundamental
difference between artificial neural networks and biological ones, or is it just a bunch of surface stuff?
Suppose you ask a neurosurgeon, when is somebody dead?
They'll probably go back to saying, well, I can look at the brain rhythms
and tell you this is a brain which is never going to function again. This other one is one which, if we treat it well, is still recoverable.
And they do that by, I mean, with electrodes looking at simple electrical patterns; they don't look in any detail at all at what individual neurons are doing.
These rhythms are already absent from anything which goes on at Google.
Yeah, but the rhythms... so what? That's like comparing, okay, I'll tell you, it's like you're comparing
the greatest classical musician in the world to a child first learning to play. But they're still both playing the piano. The question
I'm asking is,
will it ever go on at Google? Do you
have a hope? Because you're one of the seminal figures in both, in launching both disciplines,
both sides of the river.

I think it's going to go on generation after generation, the way it has, where what you might call the AI
computer science community says, let's take the following. This is our model of
neurobiology at the moment. Let's pretend it's good enough and do everything
we can with it. And it does interesting things, and after a while it sort of grinds into the sand,
and you say, ah, something else is needed from neurobiology, and some other grand thing
comes in and enables you to go a lot further.
But it will grind into the sand again? What other thing?

It could be generations of this evolution.
I don't know how many of them, and each one is going to get you further into what a brain does.
And in some sense, pass the Turing test
in longer and more broad aspects.
And how many of these there are going to have to be before you say, I've made
something, I've made a human, I don't know.

But your sense is it might be a couple.

My sense is it might be a couple more.
And going back to the brain waves, as it were.
Yes.
From the AI point of view, they would say, ah, maybe these are an epiphenomenon
and not important at all.
The first car I had, a real wreck of a 1936 Dodge: go above about 45 miles an hour and the wheels would shimmy.
Yeah.
A good speedometer, that.
Now, you wouldn't design a car that way.
The car is malfunctioning to have that. But in biology, if it were useful to know when
you are going more than 45 miles an hour, you
would just capture that, and you wouldn't worry about where it came from.
Yeah.
It's going to be a long time before that kind of thing, which can take place in large
complex networks of things, is actually used in the computation.
Look, how many transistors are there
in your laptop these days?
Actually, I don't know the number.
It's on the scale of 10 to the 10.
I can't remember the number either.
Yeah.
And all the transistors are somewhat similar.
And most physical systems with that many parts, all of which are similar, have collective
properties.
Yes.
Sound waves in air, earthquakes, what have you, have collective properties. Weather.
There are no collective properties used in artificial neural networks in AI.
Yes, very true.
If biology uses them, it's going to take us two more generations of things, for people
to actually dig in and see how they are used, what they mean.
So you're saying
we might have to return several times to neurobiology and try to make our transistors
more messy.
Yeah, yeah.
At the same time, the simple ones will conquer big aspects. And I think one of the biggest surprises to me was how well learning systems
that are manifestly non-biological, how important and how
useful they can actually be in AI.

So if we can just take a stroll through some of your work that is incredibly surprising,
that it works as well as it does, and that launched a lot of the recent work with neural networks:
if we go to what are now called Hopfield networks, can you tell me, what is associative memory in the mind, for the human
side? Let's explore memory for a bit.
Okay. What you mean by associative memory is, you have a memory of each of your friends.
Your friend has all kinds of properties, from what they look like, to what their voice sounds like, where they went to college, where you met them, go on and on, what science
papers they've written. And if I start talking about a five-foot-ten wiry cognitive
scientist who's got a very bad back, it doesn't take
very long for you to say, oh, he's talking about Geoff Hinton. I never mentioned the name
or anything very particular, but somehow a few facts that are associated with a particular
person enable you to get a hold of the rest of the facts,
or not the rest of them,
but another subset of them.
And it's this ability to link things together,
link experiences together,
which goes under the general name
of associative memory.
And a large part of intelligent behavior is actually just large
associative memories at work, as far as I can see.
What do you think is the mechanism of how it works in the mind? Is it a mystery to you
still? Do you have inklings of how this essential thing for cognition works?
What I made 35 years ago was, of course, a crude physics model, to show that actually,
to enable you to understand, with my old sense of understanding as a physicist, because you could say, ah, I understand why this goes to stable states. It's like things going downhill. Right. And
that gives you something with which to think in physical terms rather than only in mathematical
terms.
You've created these associative artificial networks.
That's right.
And now, if you look at what I did, I didn't at all describe a system which gracefully learns.
I described a system in which you could understand how learning could link things together,
how very crudely it might learn.
One of the things which intrigues me, as I reinvestigate this now to some extent, is,
look, I see you, I'll see you every second for the next hour or what have you. Each look at you is a little bit different. I
don't store all those second-by-second images. I don't store 3,000 images. I somehow compact
this information so I now have a view of you which I can use. It doesn't slavishly remember anything in particular, but it compacts the information
into useful chunks. And somehow these chunks, which are not just activities of neurons,
bigger things than that, are the real entities which are useful to you,
useful to you to describe, to compress this information.
And you have to compress it in such a way that if the information comes in just like this
again, I don't bother to rewrite it, or efforts to rewrite it simply do not yield anything, because those things are already written.
And that needs to be done not by looking it up: have I written this already? Have I stored it somewhere
already?
It has to be something which is much more automatic in the machine hardware.
Right.
So in the human mind, how complicated is that process?
So... you created... it
feels weird to be sitting with John Hopfield calling them Hopfield networks.

It is weird.

But nevertheless, that's what everyone calls them. So here we are.
So that's a simplification. That's what a physicist would do. You and Richard Feynman sat down
and talked about associative memory.
Now, if you look at the mind,
well, you can't quite simplify it so perfectly,
do you think that...

Well, let me backtrack a little bit.
Yeah.
Biology is about dynamical systems.
Computers are dynamical systems. You can ask, if you
want to model biology, if you just want to model neurobiology, what is the time scale?
There is a fairly fast time scale on which you could say the synapses don't change much
during this computation,
so I'll think of the synapses as fixed and just do the dynamics of the activity.
Or you can say the synapses are changing fast enough that I have to have the synaptic
dynamics working at the same time as the system dynamics in order to understand the biology.
If you look at feed-forward artificial neural nets, they're all done with the
learning separate: first I spend some time learning, not performing, and then I turn off learning and I perform. Right. That's not biology.
And so as I look more deeply at neurobiology, even at
associative memory, I've got to face the fact that the
dynamics of the synapse changes are going on all the time,
and I can't just get by by saying, I'll do the dynamics of activity with fixed synapses.
So the dynamics of the synapses is actually fundamental to the whole system?
Yeah, yeah. And there's nothing necessarily separating the time scales. When the time
scales can be separated, it's neat from the physicist's
or the mathematician's point of view, but it's not necessarily true in neurobiology.
So you're kind of dancing beautifully between showing a lot of respect to physics and then
also saying that physics cannot quite reach the complexity of biology. So where do you
land? Or do you continuously dance between the two places? I continuously dance
between them because my whole notion of understanding is that you can
describe to somebody else how something works in ways which are honest and believable and still
not describe all the nuts and bolts in detail.
Weather.
I can describe weather as 10 to the 32 molecules colliding in the atmosphere.
I can simulate weather that way, and if I have a big enough machine, I'll simulate it accurately.
But it's no good for understanding.
I want to understand things. I want to understand things in terms of wind patterns, hurricanes,
pressure differences, and so on, all things that are collective.
And the physicist in me
always hopes that biology will have some things
that can be said about it
which are both true and for which you don't need
all the molecular details of the molecules colliding.
That's what I mean, from the roots of physics, by understanding.
So what did, again, sorry, Hopfield networks help you understand? What insight did they give
us about memory, about learning?

They didn't give insights about learning. They gave insights about how things, having been learned, could be expressed.
How, having learned a picture of you, reminds me of your name.
It didn't describe a reasonable way of actually doing the learning. It only said that if you had previously learned the connections, this kind of pattern
would now be able to behave in a physical way where, if I put part of the pattern in here,
the other part of the pattern will complete over here.
I could understand that physics, if
the right learning stuff had already been put in, and you could understand why then
putting in a picture of somebody else would generate something else over here. But it
did not have a reasonable description of the learning process.
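A minimal sketch in Python of the pattern completion described here, assuming the Hebbian connections have already been "put in"; the sizes, patterns, and random seed are arbitrary toy choices, not anything from the conversation:

```python
# Binary Hopfield network: Hebbian weights store a few patterns; the
# dynamics then complete a stored pattern from a corrupted partial cue.
# Sizes, patterns, and the seed are arbitrary toy choices.
import numpy as np

rng = np.random.default_rng(1)
N, P = 100, 5
patterns = rng.choice([-1, 1], size=(P, N))   # the "previously learned" memories

W = (patterns.T @ patterns) / N               # Hebbian, symmetric connections
np.fill_diagonal(W, 0)                        # no self-connections

def recall(s, sweeps=10):
    s = s.copy()
    for _ in range(sweeps):
        for i in rng.permutation(N):          # asynchronous threshold updates
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

cue = patterns[0].copy()
cue[: N // 2] = rng.choice([-1, 1], N // 2)   # put in only part of the pattern
out = recall(cue)
print("overlap with stored memory:", (out @ patterns[0]) / N)  # close to 1.0
```

Because the weights are symmetric with no self-connections, each update can only move the state downhill on an energy function, which is why the partial pattern settles into the stored one instead of wandering.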
But even so, forget learning. I mean, that's just a powerful concept, that sort of forming of representations that are useful, to be robust, you
know, for error correction, that kind of thing. So this is kind of what the biology
does that we're talking about.

Yeah, and what my paper did was simply enable you... look, there are lots of ways of being robust.
If you think of a dynamical system, you think of a system where a path is going on in time.
And if you think of a computer, there's a computational path which is going on in a huge
dimensional space of ones and zeros.
And an error-correcting system is a system which, if you get a little bit off that trajectory,
will push you back onto that trajectory again,
so you get the same answer in spite of the fact that there were things, that the computation
wasn't being ideally done all the way along the line.
And there are lots of models for error correction, but one of the models for error correction is to say
there's a valley that you're following, flowing down, and if you push it a little bit off the valley,
just like water being pushed a little bit by a rock, it gets back and
follows the course of the river. And that's basically the analog in the physical system
which enables you to say, oh yes, error-free computation and associative memory are very much
like things that I can understand from the point of view of a physical system.
The physical system can be, under some circumstances, an accurate metaphor.
It's not the only metaphor. There are other error-correction schemes which don't have a valley and an energy behind them, but those error-correction
schemes are methods
that don't give me
a view of understanding in the same way.
So there's the physical metaphor
that seems to work here.
That's right, that's right.
So these kinds of networks actually
led to a lot of the work that is going on now in
artificial neural networks.
The follow-on work with restricted Boltzmann machines and deep belief nets followed
on from these ideas of the Hopfield network.
So what do you think about this continued progress of that work toward the now
reinvigorated exploration of feed-forward neural networks and recurrent neural networks and
convolutional neural networks, the kinds of networks that are helping solve image recognition,
natural language processing, all that kind of stuff?

It always intrigued me that one of the most long-lived of the learning systems is the Boltzmann
machine, which is intrinsically a feedback network. And it took the brilliance of Hinton
and Sejnowski to understand how to do learning in that.
And it's still a useful way to understand learning,
and the learning that you understand in that has something to do with the way that feed-forward systems work.
But it's not always exactly simple to express that intuition. It always amuses me to see Hinton going back to the
well yet again on a form of the Boltzmann machine, because really that, which
has feedback and interesting probabilities in it, is a lovely encapsulation of
something computational.
Something computational?
Something both computational and physical. Computational in that it's very much related
to feed-forward networks, physical in that
Boltzmann machine learning is really learning a set of parameters
for a physics Hamiltonian, or energy function.
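A minimal sketch in Python of that last point, treating learning as fitting the parameters of an energy function. This is a toy, fully visible Boltzmann machine; the hidden units of Hinton and Sejnowski's version are omitted, and the data, sizes, and learning rate are arbitrary assumptions:

```python
# Toy, fully visible Boltzmann machine: learning = fitting the couplings
# of an energy function E(s) = -1/2 s^T W s (a "Hamiltonian").
import numpy as np
from itertools import product

n = 4
W = np.zeros((n, n))                                 # symmetric couplings to be learned
data = np.array([[1, 1, -1, -1],
                 [-1, -1, 1, 1]])                    # toy binary "observations"
states = np.array(list(product([-1, 1], repeat=n)))  # small enough to enumerate exactly

def model_correlations(W):
    E = -0.5 * np.einsum('si,ij,sj->s', states, W, states)
    p = np.exp(-E)
    p /= p.sum()                                     # Boltzmann distribution over all states
    return np.einsum('s,si,sj->ij', p, states, states)

data_corr = (data.T @ data) / len(data)
for _ in range(500):
    # Classic rule: nudge weights by <s_i s_j>_data - <s_i s_j>_model.
    W += 0.1 * (data_corr - model_correlations(W))
    np.fill_diagonal(W, 0)

print(np.round(W, 2))  # learned couplings make the two toy patterns most probable
```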
What do you think about learning in this whole domain?
Do you think the aforementioned guy, Geoff Hinton, all the work there with back propagation,
all the kinds of learning that go on in these networks...
how do you, if we compare it to learning in the brain, for example, are there echoes of
the same kind of power that back propagation reveals about these kinds of networks?
Or is there something fundamentally different going on in the brain?
I don't think the brain is as deep as the deepest networks go,
the deepest computer science networks.
And I do wonder whether part of that depth of the computer science networks is necessitated
by the fact that the only learning that is easily done on a machine is feed-forward.
And so there's the question of, to what extent has the biology, which has some feed-forward
and some feedback, been captured by something which has got many more neurons,
much more depth, but no feedback?
So part of you wonders if the feedback is actually more essential than the number of neurons
or the depth? The dynamics of the feedback?

Look, if you don't have feedback, it's a little bit like building a big computer and
having it run through one clock cycle,
and then you can't do anything until you reload something coming in.
How do you use the fact that there are multiple clock cycles? How do I use
the fact that you can close your eyes, stop listening to me, and think about a chessboard
for two minutes without any input whatsoever?
Yeah, that memory thing, that's fundamentally a feedback kind of mechanism. You're going back to something.

Yes. It's hard. It's hard to understand.

Let alone consciousness.

Let alone consciousness, yes, yes, because that's tied up in there too. You can't just put that on another shelf.
Every once in a while I get
interested in consciousness, and then I go and, as I've done for years, ask
one of my betters, as it were, their view on consciousness. It's been
interesting collecting them.

What is consciousness? Let's try to take a brief step into that room.
Well, I asked Marvin Minsky his view on consciousness.
And Marvin said, consciousness is basically overrated.
It may be an epiphenomenon.
After all, all the things your brain does that are actually hard computations, you do
non-consciously.
And there's so much evidence that even the simple things you do, you can make decisions,
you can make committed decisions about them,
and the neurobiologist can say, he's now committed,
he's going to move the hand left, before you know it.

So his view is that consciousness,
that's just a little icing on the cake.
The real cake is in the subconscious.
Yeah, yeah.
Subconscious, non-conscious.
Non-conscious.
That is the better word.
It's only that Freud captured the other word.
Yeah, it's a confusing word, subconscious.
Nicholas Chater wrote an interesting book,
I think the title of it is The Mind is Flat.
And flat, in a neural net sense, would be something which is a very broad neural
net without really any layers in depth, whereas a deep brain would be many layers and not
so broad.
In the same sense that, if you pushed Minsky hard enough, he would probably have said, consciousness is your effort to explain to yourself
that which you have already done.
Yeah
It's the weaving of the narrative around the things that have already been computed for you.

That's right. And so much of what we do
for our memories of events, for example:
if there's some traumatic event you witness, you will have a few facts about it correctly done.
If somebody asks you about it, you will weave a narrative which is actually much more rich in detail than that, based on some anchor points you have of correct things
and pulling together general knowledge on the other hand. But you will have a narrative.
And once you generate that narrative, you are very likely to repeat that narrative and
claim that all the things you have in it are actually the correct things. There was a marvelous example of that in the
Watergate slash impeachment era, in John Dean. John Dean, you're too young to know,
had been the personal lawyer of Nixon. And so John Dean was involved in the cover-up, and John Dean ultimately realized the only way to keep himself out of jail for a long time was actually to tell some of the
truths about Nixon. And afterward, some of the tapes,
the secret tapes from which Dean had been recalling these conversations,
were published. And one found out that John Dean had a good but not exceptional memory. What he had was
an ability to paint vividly, and in some sense accurately, the tone of what was going
on.
By the way, that's a beautiful description of consciousness. Where do you stand today?
Perhaps it changes day to day, but where do you stand on the importance of consciousness
in our whole big mess of cognition?
Is it just a little narrative maker or is it actually fundamental to intelligence?
That's a very hard one.
When I asked Francis Crick about consciousness, he launched forward in a long monologue
about Mendel and the peas, and how Mendel knew that there was something,
and how biologists understood that there was
something in inheritance which was just
very, very different.
And the fact that inherited traits
didn't just wash out into a gray,
but were this or that, and propagated,
that was absolutely fundamental to the biology. And it took generations of biologists to understand that there was genetics, and it took another
generation or two to understand that genetics came from DNA.
But very shortly after Mendel, thinking biologists did realize that there was a deep problem
about inheritance.
And Francis would have liked to have said, and that's why I'm working on consciousness.
But of course, he didn't have any smoking gun in the sense of Mendel.
And that's the weakness of his position.
And if you read his book, the one he wrote with Koch, I think...
Yeah, Christof Koch, yeah.
I find it unconvincing for the smoking gun reason.
So I go on collecting views without actually having taken a very strong one myself,
because I haven't seen the entry point, not seeing the smoking gun.
From the point of view of physics, I don't see the entry point.
Whereas in neurobiology, once I understood the idea of a collective,
of an evolution of dynamics which could
be described as a collective phenomenon, I thought, ah, there's a point where what I know
about physics is so different from any neurobiologist's that I have something I might be able
to contribute.
And right now there's no way to grasp consciousness from a physics perspective?

From my point of view, that's correct.
And of course, people, everybody else, can think very broadly about things.
There's the closely related question about free will. Do you believe you have free will?
Physicists will give an offhand answer, and then backtrack, backtrack, backtrack, when
they realize that the answer they gave must fundamentally contradict the laws of physics.
Naturally, answering questions of free will and consciousness leads to contradictions
from a physics perspective.
Because it eventually ends up with quantum mechanics,
and then you get into that whole mess of trying to understand
how much, from a physics perspective,
how much is determined, already predetermined,
how much is already deterministic about our universe.
And there's lots of different...

And if you don't push quite that far, you can say
essentially all of neurobiology, which is relevant,
can be captured by classical equations of motion.
Right.
Because in my view, the mysteries of the brain
are not the mysteries of quantum mechanics,
but the mysteries of what can happen
when you have a dynamical system, a driven system, with 10
to the 14 parts. The complexity is something which... the physics of complex systems
is at least as badly understood as the physics of phase coherence in quantum mechanics.

Can we go there for a second?
You've talked about attractor networks and just maybe you could say what are attractor
networks and more broadly what are interesting network dynamics that emerge in these or other
complex systems?
You have to be willing to think in a huge number of dimensions, because in a huge
number of dimensions, the behavior of a system can be thought of as just the motion of a point
over time in this huge number of dimensions. And an attractor network is simply a network
where there is a line, and other lines converge on it in time. That's the essence of an
attractor network.

That's how, you mean, in a highly, highly dimensional space.

And the easiest way to get that is to do it in a high dimensional space, where some
of the dimensions provide the dissipation, which... in a decent physical system,
trajectories can't contract everywhere.
They have to contract in some places and expand in others.
There's a fundamental classical theorem of statistical mechanics
which goes under the name of Liouville's theorem,
which says you can't contract everywhere.
You have to contract somewhere and expand somewhere else.
And in an interesting physical system,
you get driven systems where you have a small subsystem
which is the interesting part, and the rest of the contraction and expansion,
the physicists would say, is entropy flow in the other part of the system. But basically, attractor networks are dynamics funneling
down, so that you can't be just anywhere: if you start somewhere in the dynamical system,
you will soon find yourself on a pretty well determined pathway which goes somewhere.
If you start somewhere else, you'll wind up on a different pathway.
But you don't have just all possible things.
You have some defined pathways which are allowed, and onto which you will converge.
And that's the way you make a stable computer, and that's the way you make stable behavior.

So in general, looking at the physics of the emergent stability in these networks, what
are some interesting characteristics, what are some interesting insights from studying
the dynamics of such high dimensional systems?
Most dynamical systems... most driven dynamical systems, and they're driven, they're coupled somehow
to an energy source, and their dynamics keeps going because of the coupling to the energy
source... most of them, it's very difficult to understand at all what the dynamical behavior is going
to be.

You have to run it.

You have to run it.
There's a subset of systems which has
what is known to the mathematicians as a Lyapunov function.
And for those systems, you can understand the convergent dynamics
by saying, you're going downhill on something or
other. And that's what I found, without ever knowing what Lyapunov functions
were, in the simple model I made in the early 80s: an energy function, so that
you could understand how you could get this channeling onto pathways without having to follow the dynamics in infinite detail.
If you start a ball rolling at the top of a mountain, it's going to wind up at the
bottom of a valley.
You know that's true without actually watching the ball roll down.
So there are certain properties of the system for which you can know that.
That's right.
And not all systems behave that way.
Most don't, probably.
Most don't, but it provides you with a metaphor for thinking about systems which are
stable and do have these attractors, even if you can't find the Lyapunov
function behind them or the energy function
behind them. It gives you a metaphor for thought.
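A minimal sketch in Python of the Lyapunov-function point: for a symmetric network with no self-connections, the energy never increases under asynchronous updates, so you know every trajectory funnels into some attractor without following the dynamics in infinite detail. Sizes and the seed are arbitrary toy choices:

```python
# Lyapunov-function check: with symmetric weights and no self-connections,
# the energy E(s) = -0.5 * s @ W @ s never increases under asynchronous
# updates, so trajectories must funnel into attractors. Toy sizes only.
import numpy as np

rng = np.random.default_rng(3)
N = 50
memories = rng.choice([-1, 1], size=(3, N))
W = (memories.T @ memories) / N
np.fill_diagonal(W, 0)

def energy(s):
    return -0.5 * s @ W @ s

s = rng.choice([-1, 1], N)                 # arbitrary starting point in state space
energies = [energy(s)]
for _ in range(5):                         # a few asynchronous sweeps
    for i in rng.permutation(N):
        s[i] = 1 if W[i] @ s >= 0 else -1  # each flip goes downhill (or stays level)
        energies.append(energy(s))

assert all(b <= a + 1e-9 for a, b in zip(energies, energies[1:]))
print(f"energy funneled from {energies[0]:.2f} down to {energies[-1]:.2f}")
```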
Speaking of thought, if I had a glint in my eye with excitement and said, you know, I'm
really excited about this something called deep learning and neural networks, and I would
like to create an intelligent system and come to you as an advisor.
What would you recommend?
Is it a hopeless pursuit to use neural networks to achieve thought?
What kind of mechanisms should we explore? What kind of ideas should
we explore?
Well, if you look at the simple networks, the one-pass networks, they don't support multiple
hypotheses very well.
I've tried to work with very simple systems, which do something which you might consider to be thinking.
Thought has to do with the ability to do mental exploration before you take a physical action.
Almost like we were mentioning, playing chess, visualizing,
simulating inside your head different outcomes.

Yeah. And
you could do that with a feed-forward network, because you've pre-calculated all kinds of things.
But I think the way neurobiology does it, it hasn't pre-calculated everything.
It actually has parts of a dynamical system in which you're doing exploration in a way which is...
There's a creative element.
Like there's a creative element.
There's a creative element. And in a simple-minded neural net, if a new question is a question within this space, you
can actually rely on it pretty well to come up with a good suggestion for
what to do.
If, on the other hand, the query comes from outside the space, you have no way of knowing how the system is going
to behave. There are no limitations on what could happen. And so the artificial
neural net world is always very much: I have a population of examples, and the test set must
be drawn from the equivalent population. If the test set has examples which are from
a population which is completely different, there's no way that you should expect to get
the answer right.
Yeah, and so, what they call outside the distribution.

That's right. That's right. And so if you see a ball rolling across the street at dusk, if that wasn't in your
training set, the idea that a child may be coming close behind that is not going
to occur to the neural net.

And it does to ours. There's something in the neurobiology that allows that.

Yeah. There's something in the way of what it means to be outside of the population of the training set.
The population of the training set isn't just the set of examples.
There's more to it than that.
It gets back to my question of,
what is it to understand something?
Yeah.
You know, on a small tangent,
you've talked about the value of thinking,
of deductive reasoning in science,
versus large data collection.
So, sort of thinking about the problem.
I suppose it's the physics side of you, of going back to first principles and thinking.
But what do you think is the value of deductive reasoning in the scientific process?
Well, look, there are obviously scientific questions in which the route to the answer
comes through the analysis of one hell of a lot of data.

Right. Cosmology, that kind of stuff.

And that's never been the kind of problem in which I've had any particular insight.
Though I must say, if you look at cosmology, it's one of those.
If you look at the actual things that Jim Peebles, one of this year's
Nobel Prizes in physics, who is from the local physics department, the kinds of things
he's done... he's never crunched large data, never, never, never. He's used the encapsulation
of the work of others in that regard.

Right.
But ultimately it boils down to thinking through the problem.
Like, what are the principles under which a particular phenomenon operates?

Yeah.
And look, physics is always going to look for ways in which you can describe the system
in ways which rise above the details. And to the dyed-in-the-wool biologist, biology
works because of the details.
In physics, for the physicists, we want an explanation which is right in spite of
the details.
And there will be questions which we cannot answer as physicists, because the answer cannot be found that way.
I'm not sure if you're familiar with the entire field of brain-computer interfaces,
which has become more and more intensely researched and developed recently, especially with companies like Neuralink with Elon Musk.

Yeah, I know. There has always been the interest, both in things like getting the eyes to be
able to control things, or getting the thought patterns to be able to move what had been a connected limb, which is now connected only through a computer.
That's right. So in the case of Neuralink, they're doing a thousand-plus connections,
where they're able to do two-way communication, to activate and read the neural spikes. Do you have
hope for that kind of computer-brain interaction in the near, or maybe far, future of being
able to expand the ability of the mind, of cognition, or to understand the mind?
It's interesting watching things go. When I first became interested in neurobiology, most of the practitioners thought you would
be able to understand neurobiology by techniques which allowed you to record only one cell
at a time.

One cell.

People like David Hubel very strongly reflected that point of view. And that's been taken over, a
couple of generations later, by a set of people who say, not until we can record from 10 to
the 4 or 10 to the 5 cells at a time will we actually be able to understand how the brain actually works.
And in a general sense, I think that's right.
You have to begin to be able to look for the collective modes, the collective
operations of things.
It doesn't rely on this action potential or that cell.
It relies on the collective properties of this set of cells, connected with this kind
of pattern and so on. And you're not going
to succeed in seeing what those collective activities are without recording many cells at once.

The question is how many at once? What's the threshold?

And that's the question. Yeah, and it's being pursued hard in the motor cortex. The motor cortex does something
which is complex, and yet the problem you're trying to address is fairly simple.
Neurobiology does it in ways that are different from the way an engineer would do it. An engineer would put in six highly accurate
stepping motors controlling a limb, rather than 100,000 muscle fibers, each of which
has to be individually controlled. And so understanding how to do things in a way which is
much more forgiving, and much more neural, I think would benefit the engineering
world. The engineering world's touch is, let's put in a pressure sensor or two, rather than an array of
a gazillion pressure sensors, none of which are accurate, all of which are perpetually recalibrating themselves.
So you're saying your advice for the engineers of the future is to
embrace the large chaos of a messy, error-prone system like those of the biological systems.
That's probably the way to build some of these things.

I think you'll be able to make
better computations, better robotics, that way than by trying to force things into a
robotics where joint motors are powerful and stepping motors are accurate.
But then the physicist, the physicist in you, will be lost forever in such systems, because there's no simple fundamentals to explore in systems that are so large and...

Well, you say that.
And yet there's a lot of physics in the Navier-Stokes equations, the equations of nonlinear hydrodynamics: a huge amount of physics
in them. All the physics of atoms and molecules has been lost, but it's been replaced by this other
set of equations, which is just as true as the equations at the bottom. Now, those equations
are going to be harder to find in neurobiology.
But the physicist in me says there are probably some equations of that sort.
They're out there.
They're out there.
And if physics is going to contribute anything,
it may contribute to trying to find out what those equations are and how to capture them from the biology.

Would you say that's one of the main open problems of our age, to discover those equations?
Yeah. If you look at... there are molecules, and there's psychological behavior, and these two
are somehow related. There are layers of detail, there are layers of collectiveness,
and to capture that in some vague way,
at several stages on the way up, to see how these things
can actually be linked together.
So it seems in our universe
there are a lot of elegant equations that can describe
the fundamental way that things behave, which is a surprise. I mean, that it's compressible into equations,
that it's simple and beautiful. But it's still an open question whether that link
between molecules and the brain is equally
compressible into elegant equations.
But your sense is... you're both a physicist and a dreamer.
You have a sense that...

Yeah, I can only dream physics
dreams, physics dreams.
There was an interesting book called Einstein's Dreams,
which alternates between chapters on his life and descriptions of the way time might have
been but isn't. The linking between these being of course ideas that Einstein might have
had to think about the essence of time
as he was thinking about time.
So, speaking of the essence of time and your biology: you're one human, a famous, impactful
human, but just one human with a brain, living the human condition. But you're ultimately
mortal, like all of us. Has studying the mind as a mechanism
changed the way you think about your own mortality?
It has, really, because particularly as you get older, the
body comes apart in various ways. I became much more aware of the fact that what is somebody is contained in the brain, and not in the body that you worry about burying. And it is to a certain extent true that for people who write things down, equations, dreams,
notepads, diaries, fractions of their thought do continue to live after they're dead
and gone, after their body is dead and gone. And there's a sea change in that going on in my lifetime. When my father died,
except for the things that were actually written by him, there were very few facts about
him that had been recorded.
There are a number of facts recorded about each and every one of us,
forever now, as far as I can see, in the digital world. And so the whole question of
what is death may be different for people a generation ago and a generation
ahead.

Maybe we have become immortal under some definitions.
Yeah.
Yeah.
Last easy question.
What is the meaning of life?
Looking back, you've studied the mind, the weird descendants of apes. What's the meaning of our existence on this little earth?

Interconnected somehow, perhaps.

It's a slippery one, but is there
something that you, despite it being slippery, can hold long enough to express?

I've been amazed at how hard it is to define the things in a living system, in the sense
that one hydrogen atom is pretty much like another.
But one bacterium is not so much like another bacterium, even of the same nominal species.
In fact, the whole notion of what is the species gets a little bit fuzzy.
And a species exists in the absence of certain classes of environments.
And pretty soon one winds up with a biology where the whole thing is living, but
whether there's actually any element of it which by itself would be said
to be living becomes a little bit vague in my mind.
So, in a sense, the idea of meaning is something that's possessed by an individual, like a conscious
creature. And you're saying that it's all interconnected in some kind of way, that
there might not even be an individual. We're all kind of this complicated mess of biological
systems at all different levels, where the human starts and where the human ends is unclear.
Yeah, yeah. And we're in neurobiology, where, oh, you say the neocortex does the thinking,
but there are a lot of things
that are done in the spinal cord.
And so what is the essence of thought?
Is it just going to be the neocortex? It can't be, it can't be.

Yeah, maybe to understand and to build thought,
you have to build the universe
along with the neocortex.
It's all interlinked through the spinal cord.
John, it's been a huge honor talking today.
Thank you so much for your time.
I really appreciate it.
Well, thank you for the challenge of talking with you.
And it will be interesting to see whether you can get five minutes out of this that
hangs together at all.
Beautiful. Thanks for listening to this conversation with
John Hopfield, and thank you to our presenting sponsor, Cash App.
Download it, use code LexPodcast, and you'll get $10, and $10 will go
to FIRST, an organization that inspires and educates young minds to
become science and technology innovators of tomorrow. If you
enjoy this podcast, subscribe on YouTube, give it five stars on Apple Podcasts, support
it on Patreon, or simply connect with me on Twitter at Lex Fridman.
And now let me leave you with some words of wisdom from John Hopfield in his article titled,
Now What?
Choosing problems is the primary determinant of what one accomplishes in science.
I have generally had a relatively short attention span in science problems. Thus, I have always
been on the lookout for more interesting questions, either as my present ones get worked out,
or as they get classified by me as intractable, given my particular talents. He then goes on to say,
what I have done in science relies entirely on experimental and theoretical studies by experts.
I have a great respect for them, especially for those who are willing to attempt
communication with someone who is not an expert in the field. I would only add that experts are good at answering questions.
If you're brash enough, ask your own. Don't worry too much about how you found them.
Thank you.