Lex Fridman Podcast - Daniel Kahneman: Thinking Fast and Slow, Deep Learning, and AI
Episode Date: January 14, 2020. Daniel Kahneman is the winner of the Nobel Prize in economics for his integration of economic science with the psychology of human behavior, judgment and decision-making. He is the author of the popular book "Thinking, Fast and Slow" that summarizes in an accessible way his research of several decades, often in collaboration with Amos Tversky, on cognitive biases, prospect theory, and happiness. The central thesis of this work is a dichotomy between two modes of thought: "System 1" is fast, instinctive and emotional; "System 2" is slower, more deliberative, and more logical. The book delineates cognitive biases associated with each type of thinking. This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon. This episode is presented by Cash App. Download it (App Store, Google Play), use code "LexPodcast". Here's the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time. 00:00 - Introduction 02:36 - Lessons about human behavior from WWII 08:19 - System 1 and system 2: thinking fast and slow 15:17 - Deep learning 30:01 - How hard is autonomous driving? 35:59 - Explainability in AI and humans 40:08 - Experiencing self and the remembering self 51:58 - Man's Search for Meaning by Viktor Frankl 54:46 - How much of human behavior can we study in the lab? 57:57 - Collaboration 1:01:09 - Replication crisis in psychology 1:09:28 - Disagreements and controversies in psychology 1:13:01 - Test for AGI 1:16:17 - Meaning of life
Transcript
The following is a conversation with Daniel Kahneman, winner of the Nobel Prize in Economics
for his integration of economic science with the psychology of human behavior,
judgment, and decision making. He's the author of the popular book Thinking Fast and Slow
that summarizes in an accessible way his research of several decades,
often in collaboration with Amos Tversky, on cognitive biases, prospect theory,
and happiness.
The central thesis of this work is the dichotomy between two modes of thought, what he
calls system one is fast, instinctive, and emotional, system two is slower, more deliberative,
and more logical.
The book delineates cognitive biases associated with each of these two types of thinking.
His study of the human mind and its peculiar and fascinating limitations is both instructive
and inspiring for those of us seeking to engineer intelligent systems.
This is the Artificial Intelligence Podcast.
If you enjoy it, subscribe on YouTube, give it five stars on Apple Podcasts, follow on Spotify,
support it on Patreon, or simply connect with me on Twitter
at Lex Fridman, spelled F-R-I-D-M-A-N.
I recently started doing ads at the end of the introduction.
I'll do one or two minutes after introducing the episode and never any ads in the middle
that can break the flow of the conversation.
I hope that works for you and doesn't hurt the listening experience.
This show is presented by CashApp.
The number one finance app in the App Store.
I personally use CashApp to send money to friends, but you can also use it to buy, sell,
and deposit Bitcoin in just seconds.
CashApp also has a new investing feature.
You can buy fractions of a stock, say $1 worth, no matter what the stock price is.
Brokerage services are provided by Cash App Investing, a subsidiary of Square, and member SIPC.
I'm excited to be working with CashApp to support one of my favorite organizations called
FIRST, best known for their FIRST Robotics and LEGO competitions.
They educate and inspire hundreds of thousands
of students in over 110 countries and have a perfect rating on Charity Navigator, which
means that donated money is used to maximum effectiveness. When you get Cash App from the
App Store or Google Play and use code LexPodcast, you'll get $10, and Cash App will also
donate $10 to FIRST, which again is an
organization that I've personally seen inspire girls and boys to dream of
engineering a better world. And now, here's my conversation with Daniel Kahneman.
You tell a story of an SS soldier early in the war, World War II, in Nazi-occupied France
and Paris where you grew up.
He picked you up and hugged you and showed you a picture of a boy, maybe not realizing
that you were Jewish.
Not maybe, certainly not.
So, I told you, I'm from the Soviet Union, which was significantly impacted by the war as
well, and I'm Jewish as well.
What do you think World War II taught us about human psychology broadly?
Well, I think the only big surprise is the extermination policy, the genocide, by the German
people.
That, when you look back on it, I think, is a major surprise.
It's a surprise because...
It's a surprise that they could do it.
It's a surprise that enough people
willingly participated in that.
This is a surprise.
Now it's no longer a surprise,
but it changed
many people's views, I think, about human beings.
Certainly for me, the Eichmann trial teaches you something, because it's very clear
that if it could happen in Germany, it could happen anywhere.
It's not that the Germans were special. This could happen anywhere.
So what do you think that is? Do you think we're all capable of evil? We're all capable of cruelty?
I don't think in those terms, I think that what is certainly possible is you can dehumanize
people so that you treat them not as people anymore, but as animals.
And the same way that you can slaughter animals without feeling much of anything,
it can be the same.
And when you have that, I think, the combination of dehumanizing the other side
and having uncontrolled power over other people.
I think that doesn't bring out the most generous aspect of human nature.
So that Nazi soldier, you know, he was a good man. I mean, you know, he was
perfectly capable of killing a lot of people, and I'm sure he
did. But what did the Jewish people mean to the Nazis? The dismissal of Jews as
worthy of... again, it's surprising that it was so extreme, but it's not just one thing in human nature. I don't want
to call it evil, but the distinction between the in-group and the out-group, that is very
basic. So that's what I think. We probably didn't need the Holocaust to teach us that, but the Holocaust is a very sharp lesson
of what can happen to people, what people can do. So the effect of the in-group and the out-group?
You know, it's clear, that those were people, you know, you could shoot them, you could,
you know, they were not human, they were not, there was no empathy or very, very little
empathy left.
So occasionally, you know, there might have been some, and very quickly, by the way, the empathy
disappeared, if there was any initially.
And the fact that everybody around you was doing it, the whole group doing
it, everybody shooting Jews, I think that makes it permissible.
Now, how much, you know, whether it could happen in every culture or whether the Germans
were just particularly efficient and disciplined so they could get away with it. It's an interesting question.
Are these artifacts of history or is it human nature?
I think that's really human nature. You know, you put some people
in a position of power relative to other people
and then, as humans, they become different.
But in general, in war outside of concentration camps
in World War II, it seems that war brings out
darker sides of human nature,
but also the beautiful things about human nature.
Well, I mean, what it brings out is the loyalty among soldiers.
I mean, it brings out the bonding, male bonding, I think, is a very real thing that happens.
And there is a certain thrill to friendship, and there is certainly a thrill
to friendship in the face of risk, to shared risk. And so people have very profound emotions,
up to the point where it gets so traumatic that little is left.
So let's talk about psychology a little bit. In your book, Thinking Fast and Slow,
you describe two modes of thought: System 1, the fast,
instinctive, and emotional one; System 2, the slower, more deliberative, logical one. At the
risk of asking Darwin to discuss the theory of evolution,
can you describe the distinguishing characteristics, for people who have not read your book, of the
two systems?
Well, I mean, the word system is a bit misleading, but at the same time, it's
also very useful.
But what I call system one, it's easier to think of it as a family of activities. And primarily the way I describe it is there are different
ways for ideas to come to mind. And some ideas come to mind automatically. And the example
is two plus two, and then something happens to you. And in other cases you've
got to do something, you've got to work in order to
produce the idea. And my example, I always give the same pair of numbers, is 27 times 14, I think.
You have to perform some algorithm in your head, in steps. And it takes time. It's very
different. Nothing comes to mind, except something comes to mind, which is the algorithm
that you've got to perform.
And then it's work, and it engages short-term memory and engages executive function.
And it makes you incapable of doing other things at the same time.
So the main characteristic of system 2 is that there is mental effort involved,
and there is a limited capacity for mental effort,
whereas system one is effortless, essentially.
That's the major distinction.
So, you talk about there, you know,
it's really convenient to talk about two systems,
but you also mentioned just now
that there are no two distinct systems in the brain, from a neurobiological,
even from a psychology perspective. But why does it seem, from the experiments you've conducted,
that there are two kinds of emergent modes of thinking? So at some point, did these kinds of systems come into a brain architecture,
maybe mammals share it, or do you not think of it in those terms at all, that it's all a
mush and these two things just emerge?
You know, evolutionary theorizing about this is cheap and easy. So it's the way I think about it,
is that it's very clear that animals have a perceptual system and that includes an ability to
understand the world, at least to the extent that they can predict, they can't explain anything, but they can anticipate
what's going to happen. And that's a key form of understanding the world. And my crude idea is
that, what I call System Two, well, System Two grew out of this. And, you know, there is language,
and there is the capacity of manipulating ideas
and the capacity of imagining futures
and of imagining counterfactuals, things that haven't happened,
and to do conditional thinking.
And there are really a lot of abilities
that without language and without the very large brain that we have compared to
others would be impossible. Now, system one is more like what the animals have, but system
one also can talk. I mean, it has language, it understands language. Indeed, it speaks
for us. I mean, you know, I'm not choosing every word
as a deliberate process, the words,
I have some idea and then the words come out
and that's automatic and effortless.
And many of the experiments you've done
is to show that, listen, system one exists
and it does speak for us and we should be careful
about the voice it provides because it's...
Well, you know, we have to trust it
because of the speed at which it acts. System Two, if we depended on System Two for survival,
we wouldn't survive very long, because it's very slow.
Yeah, crossing the street.
Crossing the street. I mean, many things depend on their being
automatic. One very important aspect of system one is that it's
not instinctive, you used the word instinctive, it contains
skills that clearly have been learned. So that skilled behavior
like driving a car or speaking, in fact, skilled behavior has to be
learned.
And so it doesn't, you know, you don't come equipped with driving.
You have to learn how to drive.
And you have to go through a period where driving is not automatic before it becomes automatic.
So yeah, you construct, I mean, this is where you talk about heuristics and
biases: to make it automatic, you create a pattern, and then
System 1 essentially matches a new experience against a previously seen pattern.
And when that match is not a good one, that's when the cognitive biases, all the
rest, happen, but most of the time it works. And so it's
pretty good. Most of the time, the anticipation of what's going to
happen is correct. And most of the time, the plan
about what you have to do is correct. And so most of the time
everything works just fine. What's interesting, actually, is
that in some sense,
system one is much better at what it does than system two is at what it does.
That is, there is that quality of effortlessly solving, enormously complicated
problems, which clearly exists, so that a chess player, a very good chess player, all the moves that come to their
mind are strong moves. So all the selection of strong moves happens unconsciously and
automatically and very, very fast. And all that is in System One. System Two verifies.
So along this line of thinking, really what we are is machines that
construct pretty effective system one.
You could think of it that way. So we're not talking about humans, but if you
think about building artificial intelligence systems, robots,
do you think all the features and bugs that you have highlighted in human
beings are useful for constructing AI systems? So both systems are useful for perhaps instilling
in robots? What is happening these days is that actually what is happening in deep learning
is more like a system one product than like a system two
product.
I mean, deep learning matches patterns
and anticipates what's going to happen.
So it's highly predictive.
What deep learning doesn't have, and many people think that this is critical, is
the ability to reason, so there is no System Two there.
But I think, very importantly, it doesn't have causality or any way to represent
meaning and to represent real interactions. So until that is solved, you know, what can be accomplished
is marvelous and very exciting, but limited.
That's actually really nice to think of current advances in machine learning as essentially
System One advances. So how far can we get with just System One, if we think of deep learning and current artificial intelligence systems?
It's very clear that DeepMind has already gone way beyond what people thought was possible.
I think the thing that has impressed me most about the developments in AI is the speed.
It's that things, at least in the context of deep learning, and maybe this is
about to slow down, but things moved a lot faster than anticipated. The transition from
solving chess to solving Go, I mean, it's bewildering how quickly it went, and the move from AlphaGo to AlphaZero is sort of bewildering,
the speed at which they accomplished that. Now clearly, there are many problems that
you can solve that way, but there are some problems for which you need something else.
Something like reasoning. Well, reasoning and also, you know, one of the real mysteries,
the psychologist Gary Marcus, who is also a critic of AI, I mean, what he points out, and I think he has a point, is that humans learn quickly.
Children don't need a million examples, they need two or three examples. So clearly there is a fundamental difference.
And what enables a machine to learn quickly?
What do you have to build into the machine? Because it's clear that you have
to build some expectations or something into the machine to make it ready to learn quickly.
That, at the moment, seems to be unsolved. I'm pretty sure that DeepMind is working on it,
but if they have solved it, I haven't heard yet.
They're trying to, actually, them and OpenAI,
they're trying to start to get neural networks
to reason,
to assemble knowledge. Of course, causality,
temporal causality, is out of reach for most everybody.
You mentioned that the benefit of System One is essentially that it's fast, it allows us to function in the world.
Fast and skilled, you know, it's skilled.
And it has a model of the world that's mostly successful. But, you know, reasoning
by itself doesn't get you much.
Deep learning has been much more successful in terms of, you know, what they can do.
But now, it's an interesting question, whether it's approaching its limits.
What do you think?
I think absolutely.
I think absolutely. So I just talked to
Yann LeCun, you know him. He thinks that
we're not going to hit the limits with neural networks,
that ultimately this kind of System One pattern matching will
start to look like System Two without significant transformation of the architecture. So I'm
more with the majority of the people who think that, yes, neural networks will hit a limit in their
capability.
On the one hand, I have heard him say, basically, essentially, that,
you know, what they have accomplished is not a big deal, that they have just touched the surface,
that basically, you know, they can't do unsupervised learning in an effective way.
But you're telling me that he thinks that within the current
architecture, you can do causality and reasoning.
So he's very much a pragmatist, in a sense, saying that we're very far away,
that there's still... yeah, there's this idea that he has, that we can only see one or two mountain
peaks ahead, and there might be either a few more after, or thousands more after.
Yeah.
So that kind of idea.
I heard that metaphor.
Right.
Right.
But nevertheless, he doesn't see the final answer as fundamentally
different from the one that we currently have, so neural networks being a huge part of that.
Yeah. I mean, that's very likely, because pattern matching is so much of what's
going on. And you can think of neural networks
as processing information sequentially.
Yeah, I mean, there is an important aspect to,
for example, you get systems that translate
and they do a very good job,
but they really don't know what they're talking about.
And for that, I'm really quite surprised.
For that, you would need an AI that has sensation,
an AI that is in touch with the world.
Yeah.
And so for that, maybe even something
resembling consciousness, kind of, ideas.
This is the awareness of what's going on, so that the words have meaning or can get in touch
with some perception or some action.
Yeah, so that's a big thing for Yann, and what he refers to as grounding to the physical space.
So that's, we're talking about the same thing.
Yeah, so how do you ground?
I mean, the grounding, without grounding, then you get, you get a machine that doesn't
know what it's talking about, because it is talking about the world ultimately.
The question, the open question, is what it means to ground. I mean, we're very human-centric
in our thinking, but what does it mean for a machine to understand what it means to be in this world?
Does it need to have a body?
Does it need to have a finiteness like we humans have?
All of these elements, it's a very...
You know, I'm not sure about having a body, but having a perceptual system, having a body would be very helpful too.
I mean, if you think about a human mimicking a humanoid... but having perception, that seems to
be essential, so that you can build, you can accumulate knowledge about the world.
So you can imagine a human completely paralyzed, and there's a lot that the human brain could learn, you know, with a paralyzed body.
So if we got a machine that could do that, that would be a big deal.
And then the flip side of that, something you see in children, and something that in the machine learning world is called active learning,
maybe it is also,
is being able to play with the world. How important, for developing System
One or System Two, do you think it is to play with the world, to be able to interact with it?
There's really a lot. A lot of what you learn is learning to anticipate the outcomes of your actions. I mean, you can see
that, how babies learn it, you know, with their hands, how they learn, you know, to connect,
you know, the movements of their hands with something that clearly is something that happens in the
brain, and the ability of the brain to learn new patterns.
So, you know, it's the kind of thing that you get with artificial limbs,
that you connect them, and then people learn to operate the artificial limb,
you know, really impressively quickly, at least from what I hear.
So we have a system that is ready to learn the world through action.
At the risk of going into way too mysterious a land, what do
you think it takes to build a system like that? Obviously, we're
very far from understanding how the brain works, but how
difficult is it to build this kind of system?
I mean, I think that Yann LeCun's answer, that we don't know how many mountains there are,
I think that's a very good answer.
I think that, if you look at what Ray Kurzweil is saying, that strikes me as off the wall. But I think people are much more realistic than that.
Actually, Demis Hassabis is, and Yann is, so the people who are actually doing the work are fairly realistic,
I think.
To maybe phrase it another way, from a perspective not of building it, but from understanding
it, how complicated are human beings in the following
sense.
I work with autonomous vehicles and pedestrians, so we tried to model pedestrians.
How difficult is it to model a human being, their perception of the world, the two systems
they operate under, sufficiently to be able to predict whether
the pedestrian is going to cross the road or not.
I'm fairly optimistic about that actually because what we're talking about is a huge amount
of information that every vehicle has and that feeds into one system, into one gigantic system. And so anything that any vehicle learns becomes
part of what the whole system knows. And with a system
multiplier like that, there is a lot that you can do. So human
beings are very complicated, but, you know, the system
is going to make mistakes, but humans make mistakes too.
I think that they'll be able to, I think they are able to, anticipate pedestrians,
otherwise a lot of accidents would happen. They're able to, you know, get into a roundabout
and into traffic, so they must be able to anticipate how people will react when they are sneaking in. And there's a lot of learning that's involved
in that. Currently, the pedestrians are treated as things that cannot be hit and they're not treated as agents with whom you interact in a game
theoretic way. So I mean, it's not, it's a totally open problem and every time somebody tries
to solve it, it seems to be harder than we think. And nobody's really tried to seriously
solve the problem of that dance because I'm not sure if you've thought about the problem of pedestrians
but you're really putting your life in the hands of the driver.
You know, there is a dance, there's part of the dance that would be quite complicated
but for example, when I cross the street and there is vehicle approaching
I look the driver in the eye and I think many
people do that. And you know, that's a signal that I'm sending. And I would be sending that
signal to an autonomous vehicle, and it had better understand it, because it means I'm
crossing.
So, and there's another thing you do that actually, so I'll tell you what you do, because I've watched
hundreds of hours of video on this, is when you step in the street, you do that before
you step in the street, and when you step in the street, you actually look away.
Look away.
Yeah.
Now, what does that say? I mean, you're trusting that the car, which hasn't
slowed down yet, will slow down.
Yeah. And you're telling it, I'm committed.
I mean, this is like in a game of chicken, so I'm committed.
And if I'm committed, I'm looking away.
So you just have to stop.
So the question is whether a machine that observes that needs to understand mortality. Here I'm not sure that it's got to understand so much,
it's got to anticipate.
And here, you know, you're surprising me, because here I would
think that maybe you can anticipate without understanding,
because I think this is
clearly what's happening in playing Go, playing chess. There's a lot of anticipation and there is
zero understanding. So I thought that you didn't need a model of the human and a model of the human mind to avoid hitting pedestrians.
But you are suggesting that actually you do.
And then it's a lot harder.
So this is, and I have a follow-up question to see where your intuition lies.
It seems that almost every robot-human collaboration system is a lot harder than people realize.
So do you think it's possible for robots and humans to collaborate successfully?
We talked a little bit about semi-autonomous vehicles, like the Tesla Autopilot, but just
in tasks in general.
So if we think, we talked about current neural networks being kind of System
One, do you think those same systems can borrow humans for System Two type tasks and collaborate
successfully?
Well, I think that in any system where humans and the machine interact,
the human will be superfluous within a fairly short time.
That is, if the machine is advanced enough,
so that it can really help the human,
then it may not need the human for a long time.
Now, it would be very interesting if there are problems
that for some reason the machine cannot
solve, but that people could solve. Then you would have to build into the machine an
ability to recognize that it is in that kind of problematic situation and to call the
human. That cannot be easy without understanding. It must be very difficult to program a recognition
that you are in a problematic situation without understanding the problem.
That's very true. In order to understand the full scope of situations that are problematic, you almost need to be smart
enough to solve all those problems.
It's not clear to me how much the machine will need the human.
I think the example of chess is very instructive.
I mean, there was a time at which Kasparov was saying that human-machine combinations will
beat everybody.
Even stockfish doesn't need people.
And alpha zero certainly doesn't need people.
The question is just like you said,
how many problems are like chess
and how many problems are the ones that
are not like chess, where,
well, every problem probably in the end is like chess.
The question is
how long is that transition period? I mean, you know, that's a question I would ask you: in terms of,
you know, autonomous vehicles, just driving is probably a lot more complicated than Go to solve.
Yes. And that's surprising, because it's open... No, I mean, you know, that's not surprising to me, because there is a hierarchical aspect to this, which is recognizing a situation.
And then within the situation bringing up the relevant knowledge.
And for that hierarchical type of system to work, you need a more complicated system than we currently have.
A lot of people think, because as human beings, this is probably one of the cognitive biases, they think driving is pretty simple, because they think of their own experience.
This is actually a big problem for AI researchers or people thinking about AI because they evaluate
how hard a particular problem is based on very limited knowledge, basically on how hard
it is for them to do the task.
And then they take for granted,
maybe you can speak to that,
because most people tell me driving is trivial,
and humans in fact are terrible at driving
is what people tell me.
And I see humans,
and humans are actually incredible at driving,
and driving is really terribly difficult.
Yeah.
So is that just another element of the effects that you've described
in your work on the psychology side?
No, I mean, I haven't really, you know, I would say that my research has contributed
nothing to understanding the ecology and to understanding the structure of situations and the complexity of problems.
So all we know, it's very clear that Go is endlessly complicated, but it's very constrained.
And in the real world there are far fewer constraints and many more potential surprises.
So that's obvious... well, it's not always obvious to people.
So when you think about, well, I mean, you know,
people thought that reasoning was hard and perceiving was easy,
but they quickly learned that actually modeling vision was tremendously complicated, whereas
proving theorems was relatively straightforward.
To push back on that a little bit on the quickly part.
Well, it took several decades to learn that, and most people still haven't learned it.
I mean, our intuition, of course AI researchers have,
but if you drift a little bit outside the specific AI field,
the intuition is still there.
Oh yeah, that's right.
That's true.
The intuitions of the public
haven't changed radically.
And they are, as you said, they're evaluating
the complexity of problems by how difficult
it is for them to solve the problems.
And that has very little to do with the complexity of solving them in AI.
How do you think, from the perspective of AI researcher, do we deal with the intuitions
of the public?
So, trying to think through this,
arguably the combination of hype, investment,
and public intuition is what led to the AI winters.
I'm sure the same can be applied to tech in general:
the intuition of the public leads to media hype,
leads to companies investing in the tech, and then the tech doesn't make the companies money, and then there's a crash.
Is there a way to educate people?
Is there a way to fight the,
I guess it's called, System One thinking?
In general, no.
I think that's the simple answer.
And it's going to take a long time before the understanding of
what those systems can do becomes, you know, part of public knowledge.
And then the expectations... you know, there are several aspects that are
going to be very complex. And the fact that you have a device that cannot explain
itself is a major, major difficulty.
And we're already seeing that.
I mean, this is really something that is happening.
So it's happening in the judicial system.
So you have systems that are clearly better at predicting
parole violations than judges,
but they can't explain their reasoning.
And so people don't want to trust them.
We seem to, in System One even, use cues to make judgments about our environment.
So this explainability point, do you think humans can explain stuff?
No, but...
I mean, there is a very interesting aspect of that. Humans think they can explain themselves.
Right.
So when you say something, and I ask you, why do you believe that? Then reasons will occur to you.
But actually, my own belief is that in most cases, the reasons have very little to do with
why you believe what you believe.
So that the reasons are a story that comes to your mind when you need to explain yourself. But people traffic in those explanations.
I mean, the human interaction depends on those shared
fictions and the stories that people tell themselves.
You just made me actually realize,
and we'll talk about stories in a second,
that not to be cynical about it,
but perhaps there's a whole movement of people trying to do explainable AI.
And really, we don't necessarily need to explain. AI doesn't need to explain itself. It just needs to tell a convincing story.
Yeah, absolutely.
The story doesn't necessarily need to reflect the truth.
It just needs to be convincing.
There's something to that.
You can say exactly the same thing in a way
that sounds cynical, or doesn't sound cynical.
But the objective of having an explanation
is to tell a story that will be acceptable to people.
And for it to be acceptable and to be robustly acceptable, it has to have some elements of truth.
But the objective is for people to accept it.
It's quite brilliant, actually. But so, on the stories that we tell,
sorry to ask you the question that most people know the answer to, but you talk about two selves in terms of how life is lived, the experiencing self and the remembering self.
Can you describe the distinction between the two? Well, sure.
I mean, there is an aspect of life that occasionally,
most of the time we just live,
and we have experiences, and they're better,
and they are worse, and it goes on over time.
And mostly we forget everything.
That happens, or we forget most of what happens.
Then occasionally, when something ends, or
at different points, you evaluate the past and you form a memory, and the memory is schematic.
It's not that you can roll a film of an interaction; you construct, in effect, the elements of a story about an episode.
So there is the experience and there is a story that is created about the experience.
And that's what I call the remembering self.
So I had the image of two selves.
So there is a self that lives and there is a self that evaluates life.
Now the paradox and the deep paradox in that is that we have one system, one self that does the living,
but the other system, the remembering self, is all we get to keep. And basically, decision making and everything that we do is governed by our memories,
not by what actually happened. It's governed by the story that we told ourselves or by the
story that we're keeping. So that's the distinction. I mean, there's a lot of brilliant ideas about
the pursuit of happiness that come out of that.
What are the properties of happiness which emerge from the remembering self?
There are properties of how we construct stories that are really important.
I've studied a few,
but a couple are really very striking. And one is that in stories, time doesn't matter.
There's a sequence of events, or there are highlights.
And how long it took... they lived happily ever after,
or three years later something happened. Time really doesn't matter. In stories, events
matter, but time doesn't. That leads to a very interesting set of problems because time
is all we got to live. Time is the currency of life. And yet time is not represented
basically in evaluated memories. So that creates a lot of paradoxes that I've thought about.
Yeah, they're fascinating. But if you were to give advice on how one lives a happy life
based on such properties, what's the optimal?
You know, I gave up, I abandoned happiness research because I couldn't solve that problem. I couldn't see...
In the first place, it's very clear that if you do talk in terms of those two selves, then
what makes the remembering self happy and what makes the experiencing self happy are
different things. And I asked the question of, suppose you're planning a vacation and
you're just told that at the end of the vacation you'll get an amnesic drug, so you'll remember nothing, and they'll
also destroy all your photos.
So there'll be nothing.
Would you still go to the same vacation?
And it turns out we go on vacations, in large part, to construct memories,
not to have experiences, but to construct memories.
And it turns out that the vacation that you would want for yourself, if you knew you will not remember, is probably not the same vacation that you will want for yourself, if you will remember.
So, I have no solution to these problems, but clearly those are big issues.
And you've talked about it.
Well, I've talked about the issues.
You've talked about sort of how many minutes or hours you spend thinking about the vacation.
It's an interesting way to think about it, because that's how you really experience the
vacation, outside of being in it.
But there's also a modern, I don't know if you think about this or interact with it, there's a modern way to
magnify the remembering self, which is by posting on Instagram, on Twitter, on social networks.
A lot of people live life for the picture that they take, that they post somewhere, and now thousands
of people share it, potentially up to a few million. And then you can relive it even much more than just those minutes. Do you think about that magnification?
You know, I'm too old for social networks. I've never seen Instagram. So I cannot really
speak intelligently about those things. I'm just too old. But it's interesting to watch the exact effects you describe.
I think it will make a very big difference.
I mean, it will make a difference.
And that, I don't know, but it's clear that in some ways the devices that serve us supplant functions.
So you don't have to remember phone numbers.
You don't have, you really don't have to know facts.
I mean, the number of conversations I'm involved with,
where somebody says, well, let's look it up.
So, in a way,
it's changed conversations.
Well, it means that it's much less important to know things.
Now, it used to be very important to know things.
This is changing.
So the requirements that we have for ourselves and for other people are changing because of all those supports. And I have no idea what Instagram does.
Well, I'll tell you. I mean, I wish I could just have
my remembering self enjoy this conversation, but I'll get to enjoy it even more by watching it and then
talking to others. It'll be about
a hundred thousand people, scary as it is to say, who will listen or watch this, right?
It changes things.
It changes the experience of the world.
You seek out experiences which could be shared in that way.
It's the same effects that you describe, and I don't think the psychology of that
magnification has been described yet, because it's a new world. You know, the sharing...
There was a time when people read books, and you could assume that your friends had read the same books that you read.
So there was a kind of invisible sharing there.
There was a lot of sharing going on, and there was a lot of assumed common knowledge.
And, you know, that was built in.
I mean, it was obvious that you had read the New York Times, it was obvious that you had read the reviews.
I mean, so a lot was taken for granted that was shared.
And, you know, when there were three television channels,
it was obvious that you'd seen one of them,
probably the same.
So sharing it was always there. It was just different.
At the risk of inviting mockery from you, let me say that I'm also a fan of
Sartre and Camus and existentialist philosophers. And I'm joking, of course, about mockery. But
from the perspective of the two selves, what do you think of the existentialist philosophy of life? So trying to really emphasize
the experiencing self as the proper way to, or the best way to live life. I don't know enough philosophy to answer that, but it's not...
you know, the emphasis on experience is also the emphasis in Buddhism.
Right, right.
So, you just have got to experience things and not to evaluate,
not to pass judgment, not to score, not to keep score.
So when you look at the grand picture of experience, do you think there's something to
that, that one of the ways to achieve contentment and maybe even happiness is letting go of the procedures of the remembering self?
Well, yeah, I mean, I think, you know, if one could imagine a life in which people don't score
themselves, it feels as if that would be a better life, as if the self-scoring and, you know, the how-am-I-doing kind of question,
is not a very happy thing to have. But I got out of that field because I couldn't solve
that problem. And that was because my intuition was that the experiencing self, that's reality.
But then it turns out that what people want for themselves is not experiences.
They want memories and they want a good story about their life.
And so you cannot have a theory of happiness that doesn't correspond to what people want
for themselves. And when I realized that this was where things were going,
I really sort of left the field of research.
Do you think there's something instructive about
this emphasis of reliving memories in building AI systems?
So currently artificial intelligence systems
are more like experiencing self.
In that they react to the environment, there's some pattern formation, like learning, and so on.
so on.
But they really don't construct memories, except in reinforcement learning every once in
a while, where you replay over and over.
Yeah, but, you know, in principle that would not be a problem.
Do you think that's useful?
Do you think it's a feature or a bug of human beings that we look back?
Oh, I think that's definitely a feature.
That's not a bug.
I mean, you have to look back in order to look forward.
So without looking back, you couldn't
really intelligently look forward.
You're looking for the echoes of the same kind of experience in order to predict what the
future holds.
Yeah.
Though Viktor Frankl, in his book Man's Search for Meaning, I'm not sure if you've read it,
describes his experience at the concentration camps during World War II as a way to describe
that finding, identifying a purpose in life, a positive purpose in life can save one from
suffering.
First of all, do you connect with the philosophy that he describes there? Not really. I can see that somebody who has that
feeling of purpose and meaning and so on, that that could sustain you. I in general don't have
that feeling and I'm pretty sure that if I were in a concentration camp,
I'd give up and die.
You know, so he talks, he's a survivor.
Yeah.
And, you know, he survived with that.
And I'm not sure how essential to survival that sense is.
But I do know when I think about myself that I would have given up.
Oh, this isn't going anywhere. And there is a sort of character that manages to survive in
conditions like that. And then because they survive, they tell stories and it sounds as if they survive because of what they were doing.
We have no idea. They survived because of the kind of people that they are, and the kind of people who survive will tell themselves stories of a particular kind.
So I'm not...
So you don't think seeking purpose is a significant driver in our being?
Oh, I mean, it's a very interesting question, because when you ask people whether it's very important
to have meaning in their life, they say it's the most important thing.
But when you ask people what kind of a day did you have?
And what were the experiences that you remember?
You don't get much meaning.
You get social experiences.
Then some people say that, for example,
in childcare, in taking care of children,
the fact that they are your children
and you're taking care of them, makes a very big
difference. I think that's entirely true, but it's more because of a story that we're telling
ourselves, which is a very different story when we're taking care of our children or when we're
taking care of other things. Jumping around a little bit, on doing a lot of experiments, let me ask a question.
Most of the work I do, for example, is in the real world,
but most of the clean, good science that you can do is in the lab.
So that distinction, do you think we can understand
the fundamentals of human behavior through controlled experiments in the lab?
If we talk about pupil diameter, for example,
it's much easier to do when you can control lighting conditions.
Yeah, right.
So when we look at driving, lighting variation destroys
almost completely your ability to
use pupil diameter.
But in the lab, for, as I mentioned, semi-autonomous or autonomous vehicles, in driving simulation,
we don't capture true honest human behavior in that particular domain.
So what's your intuition?
How much of human behavior can we study in this controlled environment of the lab?
A lot, but you'd have to verify it.
Your conclusions are basically limited to the situation, to the experimental situation.
Then you have to jump that big inductive leap to the real world.
So, and that's the flair, that's where the difference, I think, between the good psychologists and
the mediocre ones is:
whether your experiment captures something that's important and something that's real, or you're just running experiments.
So what is that?
The birth of an idea, to its development in your mind, to something that leads to an experiment.
Is that similar to maybe what Einstein or a good physicist does?
You basically use your intuition to build up?
Yeah, but I mean, you know, it's very skilled intuition.
Right.
I mean, I just had that experience, actually.
I had an idea that turned out to be a very good idea a couple of days ago.
And you have a sense of that building up?
So I'm working with a collaborator. And he essentially
was saying, you know, what are you doing? What's going on? And I really,
I couldn't exactly explain it, but I knew this is going somewhere, but you know, I've been around
that game for a very long time. And so I can feel that anticipation that, yes, this is worth following,
there's something here.
That's part of the skill.
Is that something you can reduce to words, in describing a process, in the form of advice
to others?
No.
Follow your heart, essentially.
I mean, you know, it's like trying to explain what it's like to drive. It's not...
You've got to break it apart, and it's not...
And then you lose...
And then you lose the experience.
You mentioned collaboration.
You've written about your collaboration with Amos Tversky that this is you writing, the
12 or 13 years in which most of our work was joint
were years of interpersonal and intellectual bliss. Everything was interesting, almost everything
was funny, and there was the recurrent joy of seeing an idea take shape. So many times in those years
we shared the magical experience of one of us saying something which the other one would understand more deeply than the speaker had done. Contrary to the old laws of information
theory, it was common for us to find that more information was received than had been sent.
I have almost never had the experience with anyone else. If you have not had it, you
don't know how marvelous collaboration can be.
So, let me ask you perhaps a silly question.
How does one find and create such a collaboration?
That may be asking, like, how does one find love?
Yeah, you have to be lucky. And I think you have to have the character for that because I've had many collaborations,
I mean, none as exciting as with Amos.
But I've had, and I'm having, it's very...
So, it's a skill. I think I'm good at it.
Not everybody's good at it. And then it's the luck of finding people who are also good
at it.
Is there advice for young scientists who also seek to violate this law of information
theory? I really think so much luck is involved.
And those really serious collaborations, at least in my experience, are a very personal
experience.
And I have to like the person I'm working with. Otherwise, you know, I mean,
there is that kind of collaboration, which is like an exchange, a commercial exchange
of, I give you this, you give me that. But the real ones are interpersonal. They're between
people who like each other, and who like making each other think, and
who like the way that the other person responds to your thoughts. You have to be lucky.
Yeah, I mean, but I already noticed that even just me showing up here, you've quickly
started digging into a particular problem I'm working on, and already new information started to emerge. Is that a process, just a process of curiosity, of
talking to people about problems and seeing?
I'm curious about anything to do with
AI and robotics, and I knew you were dealing with that, so I was curious.
Just follow your curiosity.
Jumping around on the psychology front,
the dramatic-sounding terminology of the replication crisis,
but really just
the effect that, at times, studies do not,
are not fully generalizable. They don't...
You are being polite. It's worse than that.
Is it? So I'm actually not fully familiar with how bad it is. Right. So
what do you think is the source? Where do you think...
I think I know what's going on, actually. I mean, I have a theory about what's going on. And what's going on is that there is, first of all, a very important distinction between two types of experiments.
And one type is within subject. So the same person has two experimental conditions. And
the other type is between subjects,
where some people are this condition,
other people are that condition.
They are different worlds,
and between subject experiments
are much harder to predict,
and much harder to anticipate,
and the reason,
and they're also more expensive
because you need more people.
So, between subject experiments is where the problem is.
It's not so much in within-subject experiments, it's really between.
And there is a very good reason why the intuitions of researchers about between-subject experiments are wrong.
And that's because when you are a researcher, you are in a within subject situation.
That is, you are imagining the two conditions and you see the causality and you feel it.
But in the between-subject conditions, the subjects don't. They live in one condition and the
other one is just nowhere. So our intuitions are very weak about between-subject experiments.
And that, I think, is something that people haven't realized. And in addition, because of that, we have no idea about the power
of manipulations, of experimental manipulations, because the same manipulation is much more powerful
when you are in the two conditions than when you live in only one condition. And so the experimenters have very poor intuitions about between subject experiments.
And there is something else, which is very important, I think.
Which is that almost all psychological hypotheses are true, that is, in the sense that, you know, directionally, if you have a hypothesis that
A really causes B, it's not true that A causes the opposite of B. Maybe A just has
very little effect, but hypotheses are mostly true, except mostly they're very weak. They're much weaker than you think when
you are imagining them. So the reason I'm excited about that is that I recently heard
about some friends of mine who essentially funded 53 studies of behavioral change by 20 different
teams of people, with the very precise objective of changing the number of times that people go to the gym. And the success rate was zero.
Not one of the 53 studies worked.
Now what's interesting about that is those are the best people in the field
and they have no idea what's going on.
So they're not calibrated.
They think that it's going to be powerful because they can imagine it, but actually it's just weak because you're focusing on your manipulation and it feels powerful to you.
There's a thing that I've written about called the focusing illusion, that is, when you think about something, it looks very important, more important than it really is.
But if you don't see the effect,
as in the 53 studies, doesn't that mean
you just report that?
So what's, I guess, a solution to that?
Well, I mean, the solution is for people
to trust their intuitions less,
or to try out their intuitions before, I mean,
experiments have to be pre-registered and by the time you run an experiment, you have
to be committed to it, and you have to run the experiment seriously enough and in public. And so this is happening. And the interesting thing is what happens before,
and how people prepare themselves, and how they run pilot experiments. It's going to change
the way psychology is done, and it's already happening.
Do you have a hope for, this might connect to
the study sample size.
Do you have a hope for the internet?
For the internet? This is really happening: MTurk.
Everybody is running experiments on MTurk,
and it's very cheap and very effective.
Do you think that changes psychology, essentially,
because you can
run a 10,000-subject study?
Eventually it will. I mean, you know, I can't put my finger on how
exactly, but that's been true in psychology: whenever an important new method came in,
it changed the field. So MTurk is really a method, because it makes
it very much easier to do some things.
There are undergrad students who will ask me, you know, how big a neural network should
be for a particular problem. So let me ask you an equivalent question: how many subjects does a study need to have a conclusive result?
Well, it depends on the strength of the effect.
So if you're studying visual perception or the perception of color, many of the classic
results in visual color perception were done on three or four people, and I think
one of them was colorblind, or partly colorblind. But on vision, you know, it's highly reliable.
So you don't need a lot of replications for some types of neurological experiments.
When you're studying weaker phenomena, and especially when you're studying them between
subjects, then you need a lot more subjects than people have been running.
And that is one of the things that are happening in psychology now,
is that the power, the statistical power of experiments is increasing rapidly.
Does the between-subject approach, as the number of subjects goes to infinity, become reliable?
Well, I mean, you know, goes to infinity is exaggerated, but the standard number of subjects in an experiment in psychology
was 30 or 40, and for a weak effect, that's simply not enough.
You may need a couple of hundred, I mean, it's that sort of order of magnitude.
What are the major disagreements in theories and effects that you've observed
throughout your career that still stand today? You've worked in several fields, but what
still is out there as a major disagreement that pops into your mind?
I've had one extreme experience of, you know, controversy with somebody who really doesn't like
the work that Amos Tversky and I did, and he's been after us for 30 years, oh, more, at least.
Do you want to talk about it?
Well, I mean, his name is Gerd Gigerenzer. He's a well-known German psychologist.
And that's the one controversy I have, which I, it's been unpleasant and no, I don't particularly
want to talk about it. But are there open questions, even in your own mind, every once
in a while? You know, we talked about semi-autonomous vehicles;
in my own mind, I see what the data says, but I'm also constantly torn. Do you have things where
your studies have found something, but you're also intellectually torn about what it means,
and there are maybe disagreements within your own mind about particular things?
One of the things that are interesting is how difficult it is for people to change their mind.
Essentially, once they are committed, people just don't change their mind about anything that matters.
That is surprising, but it's true even about scientists.
So the controversy that I described, and other things, have
been going on for like 30 years, and it's never going to be resolved.
And you build a system and you live within that system, and other systems of ideas
look foreign to you, and there is very little contact and very little mutual influence,
that happens a fair amount. Do you have hopeful advice or a message on that? Thinking about science,
thinking about politics, thinking about things that have impact on this world. How can we change our mind?
I think that, I mean, on things that matter, which are political or religious, people
just don't change their minds.
And by and large, there is very little that you can do about it.
What does happen is that leaders change their minds.
So, for example, the American public doesn't really believe in climate change,
doesn't take it very seriously.
But if some religious leaders decided this is a major threat to humanity, that would have
a big effect.
So that we have the opinions that we have, not because we know why we have them, but because
we trust some people and we don't trust other people.
So it's much less about evidence than it is about stories.
So one way to change your mind isn't at the individual level; it's that the leaders
of the communities you look up to change the stories, and therefore your mind changes
with them.
So, there's a guy named Alan Turing who came up with the Turing test.
What do you think is a good test of intelligence?
Perhaps we're drifting in a topic that we're maybe philosophizing about, but what do you
think is a good test for intelligence, for an artificial intelligence system?
Well, the standard definition of, you know, artificial general intelligence is that it
can do anything that people can do, and it can do it better.
Yes.
And what we are seeing is that in many domains you have domain-specific systems, and they beat people easily in a specified way.
But what we are very far from is the general ability, general-purpose intelligence.
So, in machine learning, people are approaching something more general. I mean, AlphaZero was much more general than AlphaGo.
But it's still extraordinarily narrow and specific in what it can do.
So we're quite far from something that can, in every domain, think like a human, except better.
Some aspects of the Turing test have been criticized: natural language conversation
is too simplistic, it's easy to, quote-unquote, pass under the constraints specified.
What aspect of conversation would impress you if you heard it?
Is it humor?
Is it, what would impress the heck out of you if you saw it in conversation?
Yeah, I mean, certainly humor would be impressive.
Humor would be more impressive than just factful conversation, which I think is easy.
And allusions would be interesting, and metaphors would be sort of impressive, if it's completely natural
in conversation, but that you really wouldn't expect.
Does the possibility of creating a human level intelligence or superhuman level intelligence
system excite you, scare you?
Well, I mean, I'm... How does it make you feel? I find the whole thing fascinating.
Absolutely fascinating.
So exciting.
I think.
And exciting.
It's also terrifying, you know, but I'm not going to be around to see it.
And so I'm curious about what is happening now, but also know that predictions about it are silly.
We really have no idea what it will look like 30 years from now, no idea.
Speaking of silly, bordering on the profound, let me ask the question of, in your view,
what is the meaning of it all? The meaning of life. These descendants of great apes that we are... what drives us as a civilization, as a human
being, as a force behind everything that you've observed and studied? Is there any answer or is it all just a beautiful mess?
There is no answer that I can understand and I'm not actively looking for one.
Do you think an answer exists?
No.
There is no answer that we can understand.
I'm not qualified to speak about what we cannot understand, but
I know that we cannot understand reality.
I mean, there's a lot of things that we can do.
I mean, you know, gravity waves, that's a big moment for humanity.
When you imagine that ape, you know, being able to go back
to the Big Bang, that's... but the why, yeah, the why is bigger than us. The why
is hopeless, really.
Danny, thank you so much. It was an honor. Thank you for speaking today. Thank you.
Thanks for listening to this conversation, and thank you to our presenting sponsor, Cash
App.
Download it, use code LexPodcast, you'll get $10, and $10 will go to FIRST,
A STEM education nonprofit that inspires hundreds of thousands of young minds to become future
leaders and innovators.
If you enjoy this podcast, subscribe on YouTube, give it five stars on Apple Podcasts, follow
on Spotify, support it on Patreon, or simply connect with me on Twitter.
And now let me leave you with some words of wisdom from Daniel Kahneman.
Intelligence is not only the ability to reason, it is also the ability to find relevant material
in memory and to deploy attention when needed.
Thank you for listening and hope to see you next time.
Thank you.