Making Sense with Sam Harris - #119 — Hidden Motives
Episode Date: March 12, 2018
Sam Harris speaks with Robin Hanson about our hidden motives in everyday life. They discuss selfishness, hypocrisy, norms and meta-norms, cheating, deception, self-deception, education, the evolutionary logic of conversation, social status, signaling and counter-signaling, common knowledge, AI, and many other topics. If the Making Sense podcast logo in your player is BLACK, you can SUBSCRIBE to gain access to all full-length episodes at samharris.org/subscribe.
Transcript
Today I am presenting the audio from the event I did in Denver with Robin Hanson.
Robin's a professor of economics at George Mason University,
and he's the author with Kevin Simler of a very interesting book titled The Elephant in the Brain,
Hidden Motives in Everyday Life. I give more of his bio from the stage, but I really enjoyed this
conversation with Robin. We spoke about all the related issues here of selfishness and hypocrisy and norms and norm
violations, cheating, deception, self-deception, the evolutionary logic of conversation, social
status, signaling and counter-signaling, common knowledge. There's many interesting topics here.
I enjoyed the event. Unfortunately, the audio is a little wonky. We are completely at the mercy of whatever recording we get from these venues.
And there are a few moments where things cut out.
It's a little echoey.
It's not that bad.
Once you start listening, you will acclimate to it.
But it was a good conversation.
And so now I bring you Robin Hanson.
Ladies and gentlemen, please welcome Sam Harris.
Thank you all.
Well, thank you all for coming out.
Really, it's amazing to see you all, or see some fraction of you.
I'm going to jump right into this.
We have a very interesting conversation ahead of us, because I have a great guest.
He is a professor of economics at George Mason University.
He is also a research associate
with the Future of Humanity Institute,
which you might know focuses on existential risk
and other big topics of ethical importance.
He has a PhD in social science from Caltech,
a master's in physics and the philosophy of science.
He did nine years of research with Lockheed and NASA,
studying mostly artificial intelligence and also Bayesian statistics.
And he's recognized for his contributions in economics
and especially in prediction markets,
but he's made contributions in many other fields.
And he has written a fascinating book,
which unfortunately
is not for sale here today, but you should all buy this book because it's amazingly
accessible and he just touches so many interesting topics. That book is The Elephant in the Brain,
Hidden Motives in Everyday Life. Please welcome Robin Hanson.
Thanks for coming.
We're here, of course.
So your reputation for being interesting precedes you.
I deny it all.
So I want to...
There are many things we can talk about, as you know.
But I want to focus on your book
and I want to move in kind of a linear way through your book because your book is just
so rich.
And I don't think we will do the book justice, but we will try.
The book is really kind of a sustained meditation on selfishness and hypocrisy.
We have these ideas about why we do things, and then we have all of
the evidence accruing for the real reasons why we do these things. And the mismatch there is rather
harrowing to consider, and your book is just an unvarnished look at that. So I want to tour
through this, but perhaps before we get to some of these specific topics, how do you view the
project of your book?
What were you up to in writing?
I should say that you have a co-author on the book, Kevin Simler,
who's not here tonight.
But what were you doing writing this book?
This is what I wish I had known
when I started my social science career many years ago.
I started out in physics and then went into computer science.
In those areas, I noticed that people were really eager for innovation.
And then I seemed to see that in social science, there were even bigger innovations possible.
And so I moved there, and then I was puzzled to find that people were not very interested in innovations compared to the other areas.
And I kept also finding other puzzles in social science, ways in which our usual theories don't make sense of what's going on.
And our book is an attempt to explain a lot of the major puzzles
in social science and the lack of interest in innovation.
And one of the conclusions is that we're just doing policy wrong,
policy analysis wrong.
But first we have to get into the basics here.
And there's really two levels of it. There's how you as a person might think
about these things, but it's the level of personal hypocrisy and the mismatch between
what your motives actually are and what you may think they are. And then there's this,
the fact that institutions have this kind of structure or this blindness where the institutions
think they're about something and they seem not to be upon analysis, an institution like medicine or a
university. So what's the basic problem? Why is there this mismatch between what we think we're
doing and what we're actually doing? So if you've read many psychology books, you're familiar with
the idea that people are not always honest about what they're doing and why. And you might find that trite and kind of boring by now, because of course
we all know that. But so far people haven't taken that insight to our major social institutions.
And that's what we think is new and original about our book. We say that not only are you not always
honest about whether you like to go to the opera with your spouse or whether you enjoy playing with and cleaning up after your kids, you're also
not honest with yourself about why you go to school, why you go to the doctor,
why you vote, and why you do art.
That is, these deviations between what we think we're doing and our actual motives infect
many major social institutions,
and they therefore should make us reconsider the basics
of what these institutions are for
and therefore why we support them
and whether we should subsidize them
and how we should structure them and everything.
Right.
So unlike many conversations I have here,
I have a list of nouns that are kind of a ladder through which we could walk.
Let's start with norms and what you call meta-norms.
What is a norm and why do we have them and what does it mean to protect them or to fail to protect them?
So, animals like chimpanzees and most other social animals, they have a complicated social world.
And they pay attention to each other and they reward others for helping them and punish others if they hurt them.
So they have many regular behaviors.
But humans uniquely have norms in the sense that we have a rule of what you're supposed to do or not supposed to do.
And if somebody else sees you breaking the norm,
it's a rule that they're supposed to do something about it.
They're supposed to tell other people that you've broken the norm
and then try to find a way to make you stop breaking the norms.
And so humans have these norms about what we're supposed to do
or not supposed to do.
And many of these norms are quite common around the world.
We're supposed to help each other.
We're supposed to not brag, not
be violent to each other. We're supposed to make group decisions together by consensus. And we're not
even supposed to have subgroup coalitions, people who are aligned against the others. These are just
common human norms. And many of these norms are expressed in terms of motives. So there's a rule
that we're not supposed to hit each other on purpose. It's okay to hit accidentally, but not on purpose.
And so we are, because our ancestors had these norms and they were so important,
their social world was their main world, we developed these big brains that we have,
mainly, apparently, for social reasons.
We developed these big brains to deal with our social world.
And we have the biggest brains of all, so our social world must have been the most complicated. But norms were a big part of this world. And so we have this
part of our brain that's all the time thinking about what we're doing and trying to explain
why we're following good motives. And that's, in a sense, the conscious part of your mind.
You are the conscious part of your mind, and you aren't necessarily the one in
charge of your mind. There's this idea that instead of, say, being the president or the king,
you're the press secretary. You don't actually know why you do things, but you're supposed to
make up a good excuse. And you do that. You're constantly looking at what you're doing and
asking yourself, what would be a good explanation for this thing I'm doing? And you're good at that. You're good at coming up with excuses for what
you're doing. But you don't actually know what you're actually doing. You don't realize that
you don't know. Yeah. And this is a very robust but not really celebrated neurological finding.
And it becomes horribly elaborated in people who have had
what's called a split-brain procedure, where, as a remedy for grand mal seizures, you can cut
the corpus callosum, which connects the two hemispheres of the brain, and that prevents the
seizure activity from moving from one hemisphere to the other. And what people have found, going back
now many decades, is that in most people the left, linguistically agile hemisphere
confabulates reasons for doing things,
and when those reasons are brought out in an experimental paradigm,
those reasons are just manifestly not so.
So you can present the right hemisphere of the brain with a demand,
like, you know, get up and walk toward the door,
and then you can ask the linguistically competent left hemisphere, why are you walking toward the
door, and it will confabulate a reason, like, I want to get a Coke.
This is a result from a classic experiment, which I think you cite in your book.
These experiments were done by Roger Sperry, Michael Gazzaniga, and Eran Zaidel. And the left hemisphere just continually completes the picture linguistically
without any apparent awareness that those claims are out of register.
They're based on nothing.
This is what the word confabulate means,
just to make up this reason out of whole cloth.
And it seems that, I mean, though most of
us have not had our brains split, we have an ability to give a post-hoc rationalization for
why we did things, which, certainly in an experimental paradigm, can be shown to really
have no relationship to the proximate cause of our actions. And it is embarrassing if caught on video.
So we're living with this fact that we are our own press secretary
giving the, at minimum, the most benign,
but often just the most grandiose and apparently noble rationale
for why we're doing what we're doing,
and yet evolution and other modes of logic
suggest that that isn't the reason for why we do much of what we do. Well, since you are in the
habit of just making up excuses, that means you could be wrong a lot, but doesn't mean you are
wrong a lot. Maybe you are mostly right, even though you would be wrong if you didn't know.
So we have to go further than just the possibility of your being wrong to decide you're wrong.
So we have to wonder, well, how sure can you be about most of your activity, whether it's the real reason you have?
Now, one thing to be clear about is that almost any area of life, like going to school or going to the doctor, is big and complicated.
The world's complicated.
So a great many motives are relevant.
And if we average over people, surely thousands of different motives are relevant for almost everything we're doing.
And so what we have to be asking here is what is the main motive?
What's the most dominant motive?
Not what's the only motive.
So, I mean, just as an example, if you say the dog ate my homework as an excuse, that only works because sometimes dogs eat homework.
If dogs didn't exist, it wouldn't make much sense.
Dragon ate my homework, doesn't work.
So these things that we come up with as excuses for our behavior,
they only work as excuses because sometimes they're true.
They have an element of truth. So we're not going to say that your usual motive isn't at all applicable.
The claim is just it's not as true as you think.
And you're not saying that no one has noble motives.
Exactly.
So there is real altruism, there's real nobility.
There's real all of these things, exactly.
Sometimes people get up to get a Coke.
Yes.
But in addition, there are evolutionary reasons
why we would be self-deceived about our motives.
We're actually, and this is based often on the work of Robert Trivers,
who's done a lot of work on self-deception
and the evolutionary logic there,
we are better at deceiving others.
We're better at getting away with norm violations
if we, in fact, are not aware
that our press secretary is not telling the truth,
which is to say that if we, in fact, are self-deceived,
we are better deceivers. So if we want to lie, it's better not to know we're lying, because then we seem sincere.
Right. Well, you can be sincere. Right. The easy way to seem sincere is to be sincere,
even if you're wrong. It's a famous Seinfeld episode, I believe.
You're not lying if you believe it.
I should say that basically,
this is something you and I should probably talk about,
but the jury's out as to whether or not
knowing any of what we're about to talk about is good for you.
Sorry.
There's a psychological experiment being performed on you
and you have not consented.
Memory-wipe pills will be available after the session.
So how do you think about, let's take cheating as, cheating is a classic norm violation.
There's reason to think that our brains have evolved in large measure both to cheat and
to detect cheating in others.
How do you
think about cheating in your line of work?
Well, cheating is again violating norms and so we want to live in a community where
the norms are enforced and we also want ourselves to be exceptions to the rules. So, for example,
most criminals actually think crime is a bad thing.
They just think that their particular acts don't quite count as crimes.
So we all basically would like to make exceptions for ourselves.
So the question is how, and one of the ways we can do it is to not be very honest about what we're doing with ourselves.
This may not be relevant, but it just put me in mind of it. I've never understood why no one remarks on the fact that... When we think of just reducing our speed limits,
what that would do in terms of saving lives.
We could save tens of thousands of lives a year.
But if we made cars that could not exceed the speed limit,
that would guarantee that no one would exceed the speed limit.
But no one would
want that. No one who thinks that we should have speed limits would want a car that would
slavishly follow the speed limit. Is that just synonymous with wanting the right to cheat on the
speed limit? Are we all imagining some emergency where you have to speed past the speed limit?
So the whole theme here is that in your head you think you want things.
So in your head you think you want to enforce speed limits.
With your actual actions, you don't.
You want to speed.
And there's a contradiction there.
And you don't want to look at that contradiction.
So you look away.
And that's the elephant in your brain.
As you know, the elephant in the room is the thing we all know is there that you don't
want to look at.
And the elephant in your brain is this contradiction between what you say you want and what you
actually do.
So let's actually raise this issue now, whether this line of thinking or this analysis
has a downside.
So if in fact it's true that we are better fit to our social environment
with a certain amount of ignorance with respect to our own motives, that it's optimal,
there's like an attractor of optimal fitness which entails some measure of self-deception.
And we are in the process, you in the process of writing this book, all of us in the process of talking about it,
are to some degree undeceiving ourselves about these things,
why isn't that bad for us?
And is it worth worrying whether it's bad for us?
So apparently evolution constructed you to be ignorant about why you do things.
It thought, yes, it might be useful if you know why you do things, but that's to be traded off against all these other benefits of not knowing why you do things. So you were constructed not to
know. If the situation you're in in the modern world is much like the situation evolution
anticipated for you, that's probably better in your personal interest, not knowing. You're probably
better off going on with the usual sort of ignorance that the rest of us have had and acting
that way because you'll get along that way and that's what evolution anticipated for you. Now evolution couldn't think
of everything, so you could be in an environment today which is not something evolution
anticipated, or you might be in an unusual situation. For example, you might be a salesperson
or a manager, the sort of person for whom it's really important to understand people's motives
and to be able to read them and understand what's going on. You also might be a nerd, like myself. Whereas most people can
just intuitively read the social world around them and do the right thing, some of us can't,
and some of us need more conscious analysis in the world to figure out what's going on. And so
you may appreciate this more cynical conscious analysis, even if it has some disadvantages.
But most importantly...
As a self-help seminar, I think that's not going to sell a lot of tickets.
Not that you'd be nerds or anything, but some of us are. But I also just think if you're
going to be a specialist in policy analysis, if you're going to stand up and say, I have
studied education or medicine and I have thought about what changes would be better, it's more your responsibility to know what's actually going on in those worlds,
even if it costs you some degree of social awkwardness to know that.
I think at least social analysts and policy analysts should understand these things.
Yeah, so let's take an institutional example.
Take education.
What is it that we're deceived about with respect to education?
So again, just to be clear, just because you might be deceived about many things doesn't mean you
are. So I need to walk you through arguments to convince you that in fact, in each area,
your motive isn't what you think it is. Now, my beloved colleague, Bryan Caplan,
has a book just out called The Case Against Education. And he goes through a whole
book-length treatment of this. Our chapter is just a
summary of that, but a summary is sufficient. The summary is: when you ask people, why do you go to
school? If they are answering in public, say in a speech or in a letter of application, they
will tell you the usual story: to learn the material so that you can become a more useful
person later. That's our standard story about school. And there are a number of puzzles in
education that just don't make sense of that theory. And I'm going to offer another theory
that makes more sense of it. Some of these puzzles are you don't actually learn very much at school.
Most of the stuff you learn isn't very useful. Yet people who don't learn useful things are
paid more. So bartenders who go to college make more than bartenders who go to high school.
You do make more for more years of school in terms of your wages,
but the last year of high school and the last year of college
is worth as much as the other three years combined.
But you don't learn more in the last year of high school or college.
I went to Stanford for a while for free without registering or applying
simply by walking in and sitting in on classes.
One of the professors gave me a letter of recommendation based on my performance.
Nobody tries to stop you from doing that.
Why?
You can get the very best education for free if you don't want a credential.
That calls into question the idea that you're there for the learning as opposed to something else.
So the alternative
theory is that you're there to show off and to gain a credential that shows that you are
worthy of showing off, that is, you are smart, conscientious, conformist, you're willing to do
the sorts of things that they ask you to do, you take ambiguous instructions with long deadlines,
and consistently, over several years, complete
mildly boring assignments. Great preparation for future workplaces. And by the end, you
have shown that, and that's something employers value. And that's a standard plausible explanation
for education. Most of you will find that plausible unless you are an education policy expert, in which case you will be offended and search for another explanation.
So in most of these areas, most of you will nod and say, yeah, that makes sense,
unless this is your precious area.
For all of us, there is something precious in our lives, something sacred,
and for that we will be more reluctant to accept one of these more cynical explanations
of what's going on there.
But as long as education isn't sacred for you, you'll probably nod and say, yeah, you don't learn much in school.
But so now what is signal and what is noise there? Are employers wrong to value those things?
Should people, what should people do differently as a result of understanding this about
the status quo? Individually, you shouldn't do anything different. If you want to convince
an employer in our world that you have what it takes,
you do need to go to school, jump through the hoops, and perform well.
And in fact, you might do that better if you aren't aware that you're just doing arbitrary things
to show off to an employer.
That may be demotivating for you.
You might be better off pretending to yourself and believing that you're learning something that will be useful.
But the point is you are showing that you have a characteristic, not creating a characteristic.
The school isn't changing you.
It's distinguishing you.
It's like certifying you as different.
So now what's the role of common knowledge in some of these situations?
You should define what common knowledge is.
It's not common knowledge what common knowledge is.
So think about cheating.
He asked about cheating.
And think of the rule that you're not supposed to drink alcohol in public.
This is a rule, and there are people who are supposed to enforce this rule, the police.
And you might think, this of course is relatively easy to enforce,
but think of the example of people putting an alcoholic beverage inside a paper bag and drinking it outside.
This happens.
Now, ask yourself, how hard could it be for the police to know that you're drinking alcohol if you're drinking some bottle out of a paper bag outside the store? Of course they know. But you're
giving them an excuse to look the other way. That is, that's not common knowledge. We don't know
that we all know that we all know that it's alcohol.
Somebody could be fooled, and that's enough to pretend that you don't know.
So this is why it's actually much easier to cheat in many ways than you might have thought.
We have all these rules, and we're supposed to enforce them, but we're not very eager to enforce them.
We'd rather go about our business and ignore the rule violations.
And so a rule
violation needs to be kind of blatant. And other people need to see us see the rule violation,
and then we kind of feel forced to do something about it. But if it's not blatant, it's not
something we all can see and know that we know, then you might prefer to pretend you didn't see.
And many of you probably have seen things that are not supposed to happen as you walk by the
street, and you just keep walking, hoping that nobody saw you saw it, because then you could
pretend you didn't see it and go about your business, because it would be a pain and trouble
to stop and try to enforce the rules. Yeah. Well, also, there's so much about our social lives
where we know there's a subtext to what's going on, but if that subtext ever became explicit,
it would destroy the basis of trust or good feeling.
If you said to someone, I'm only inviting you over to dinner tonight because you invited
me last time and I needed to reciprocate.
That's why we're having this dinner. That, on some level, we all know that's going on,
but to make it explicit is sort of antithetical to being friends with people.
Right.
So there's often many levels of what's going on,
and in fact, we expect to see that in movies and stories.
So if somebody, as an actor, was given a script,
and the script said you're at a romantic dinner with somebody else,
and the two of you are there talking to each other,
and what you're saying to each other is I love you, I love you too,
this is great, we're having a wonderful relationship,
this is a wonderful restaurant, isn't this a great night?
The actor will tell you I can't act that.
Because there's just one level there,
and that doesn't seem plausible at all. We expect in a scene like that to be multiple levels.
That is, there's the surface level of I love you, isn't this great, and something else must be going
on. And the actor will actually look for another level so they can act the scene. I'm afraid you'll
leave me, so I'm trying to make sure you don't. Or I'm thinking of leaving you, and so I'm trying
to let you off nice.
Something to make there be two levels of motives,
because that's what we expect to see out of actors and scenes.
So we are really, at some level, we kind of know that people are quite often
pretending one motive and really acting another motive.
There's one thing, one chapter in your book on conversation,
which I found fascinating, because conversation is fairly mysterious in terms of the mismatch between what we think is going on and what
is actually going on and why it would be valued in an evolutionary sense. So let's talk about
what most people think is going on during a conversation and what seems to actually
be going on.
So we're going meta here because, of course, this is a conversation. And we will
try to pretend that this isn't true about our conversation, because that's the central subject of the interview.
The jig is up.
Exactly. So the usual story, if you ask, why are you talking to your friend? Why did you
spend an hour talking? Why didn't you do the dishes or something useful? You might say,
well, we're exchanging information. We each have information the other person doesn't,
and by talking and exchanging information, we can all know more.
And this is the standard rationale for most of our conversations.
What I'm about to tell you applies not just to personal conversation,
but also applies to our news media conversations,
to academic conversations in journals.
In all of them, the standard rationale is information.
That's why you read the newspaper, of course, right, to get more information. Now, there are many features
of our conversations that don't fit very well with this explanation. That's, again, my main
argument here is to show you the detailed puzzles that don't fit with the explanation, then offer
you another explanation that fits better. So some of the puzzles here are, if it was about exchanging
information, we would
keep track of debts. I might say, well, I've told you three useful things so far. You haven't
told me any useful thing. It's your turn. We would be more eager to listen than to talk.
It would be our turn to talk and we would sigh, okay, fine, I'll tell you something. We
would be searching for the most valuable things to tell each other, the things that mattered
most to each other. And we would talk about important things instead of the trivialities
that we usually fill our conversations with.
And it would be fine to jump from topic to topic
as long as we were saying something valuable and important
because the point is to communicate information.
But as you know, the usual norm of conversation
is to slowly drift from topic to topic,
none of which need to be very important,
but each time we should say something relevant to that topic.
Now, an alternative explanation to this information-sharing theory is that we are showing off our backpack of tools and resources that we can bring to
bear on any topic you dare to offer. So it's important that the conversation meander in a
way no one of us can control, so that we are each challenged to come up with something relevant to whatever that is.
And by impressing you with knowing something, having a friend or a resource, having a tip,
having some experience that's relevant to whatever you bring up, I show you that if you and I stay
allies and associates in the future, whatever problems you have, I'll have something relevant.
I'm ready for you with resources that would be useful to you because look what I can do no matter
what conversation topic comes up. Yeah, well, the mismatch between desire to listen and desire to
talk is pretty, I mean, I think that's the one that people will find very salient because if it
was really about just getting information, we would be massively
biased toward listening.
We would be stingy with, I mean, we would be pricing out all of our disclosures.
We'd have much bigger ears and smaller mouths.
So then how do you think about gossip and reputation management and what's happening
in that space?
We do, in fact, exchange information. So again, it works as an excuse because it's partly true.
We do exchange information and it is somewhat valuable. It's just not the main thing we're doing.
But often, well, the information we're exchanging is meta to the actual apparent topic. As you may
know, indirectly through what people say, they tell you other things like bragging
about themselves indirectly by telling you about their great vacation in some expensive
rare place. And they talk about each other often in the guise of saying what's been happening,
but we are very interested in knowing about each other and evaluating each other.
And so part of what's going on when we're impressing people is we're not only impressing the people who
immediately see us, we're impressing the other people who will hear about it indirectly.
And so it's important that we impress other people in ways that can transfer through that
gossip to the other people who will hear about it. And we are trying to avoid negative gossip
or negative reputation of things that would make us look bad.
And this is a basic explanation for why a lot of decisions in the world are really quite shallow.
So, for example, as an employer, you might look at an employee and say, this potential employee looks really good.
Yes, they don't have a college degree, but they don't need a college degree for this.
And I can tell they could do the job.
But then you might think to yourself, yes, but other people will hear that I hired this person,
and they will notice that this person doesn't have a college degree, and they will gossip about it.
And then I might look bad for having hired someone without a college degree, and maybe I just don't
want to take that chance. So even if I know that this person could do the job, I still, because
I'm trying to impress this wider audience who will gossip about it, I am pushed to make shallower choices based on less than I know.
Is there anything that you do differently in this area based on having thought about this? I mean,
do you view gossip as a negative character trait that should be discouraged in yourself,
or do you just see it as inevitable or socially useful as a way of
correcting misaligned reputations?
I understand and appreciate that gossip has an important human role. As a natural nerd, I'm not as inclined or interested in it personally, but that's just me.
So is social status the main metric to which all of this is pegged?
Is that what we're concerned about as subtext virtually all the time?
It's one of the things, but it's actually less than people might think.
So if you're forced to admit you're showing off, often the thing you want to admit
to showing off is how great you are.
That is, how smart or conscientious or careful, how knowledgeable you are.
But plausibly at least half of what you're doing in showing off is showing off loyalty,
not ability. And so perhaps we push medicine
in order to show that we care about people. We participate in politics to show that we're
loyal to our side. We do a lot of things to show loyalty, and that is not something we're
as eager to admit. Because, of course, by trying to be loyal, we are showing some degree
of submission to those we are trying to be loyal to.
Yeah, so that is a somewhat craven motive to sign on to.
I'm being loyal, right?
Right.
But why is that?
In fact, loyalty is a virtue that we acknowledge.
So humans actually have two different kinds of status,
and it's suspicious and noticeable that we don't make the distinction very often
and we merge them together.
There's dominance and prestige.
Dominance is more having power over someone,
and prestige is earning respect.
And the difference of these actually show up in where your eyes go and how you look.
When somebody has dominance over you, you are not supposed to look them in the eye.
Looking them in the eye shows defiance.
When somebody has prestige, you are supposed to look at them. Like us, presumably, up here: you are looking at us.
We are claiming we have prestige, and you're not supposed to look away.
I wish you wouldn't put it that way.
Yes, well.
How embarrassing.
And so people want to get prestige, and they don't want to admit to accepting dominance
or submitting
to dominance, but of course we do.
And so...
But do people admit to wanting prestige?
More so.
They might admit to accepting prestige, although not to seeking it, of course.
Now in ancient history, most societies had kings, and they'd say their neighbors had tyrants. Those bad guys over there had dominance, and those people were submitting to dominance, and what a terrible thing they had to suffer. But we have a king who has prestige, and it's okay for us to look up to and obey our king
because he's worthy of the status. And so this is often how people come to terms with their bosses.
So from a distance, people say how terrible it is
that we all obey our bosses at work. But each of us at work often makes peace with that by saying,
well, my boss is okay. He's earned that right to be in that role. And I'm okay with doing what he
says. Right. So now, I don't want to spend a lot of time on politics, but obviously,
everything you've written about
is relevant to politics.
And as I was reading the book,
it seemed somewhat mysterious to me
that in the current moment,
someone like Trump seems to violate
more or less every rule you mention in your book.
I mean, the things we've evolved not to do
or not to do brazenly, like brag, right?
Or lie without any hope of being believed.
Yeah.
Or advertise our most crass motives in place of more noble ones that could have been plausible.
Right, right.
He seems to get away with all of this. So how do you explain the success of what's essentially the anti-evolutionary algorithm?
Sure. So let's start with something called
counter-signaling. So ordinarily, if you have an acquaintance and you are trying to show that you
like an acquaintance, you will do things like smiling at them, being polite, flattering them, opening the door for them, offering them some
food. Those are ways we ordinarily show someone that we are trying to be friendly.
When you have really close friends, however, often you go out of your way to do the opposite.
You insult them. You trip them. You don't show up for some meeting.
Why do you do the opposite for a close friend?
So that's in part to show that you are more than an acquaintance.
Once it's clear that you are at least an acquaintance, people might wonder, how close are we?
And doing the opposite can show that you are more than an acquaintance; you are a close friend. A paper that discussed this was called Too Cool for School.
As you know, many students try to show how studious, how good they are at school by studying hard and doing well.
And then some students try to show that they can ace everything without trying.
And that's, again, countersignaling.
So he's managed to convince half the country that he's their best friend by revealing all of these.
But remember, politics is about loyalty signaling. And at one level,
we might all want politicians who are high status. We might all want politicians who are articulate
and tall and went to a good school and smart and say all the right polite things and have
stamina, et cetera. And so in general,
we would all want the same thing there. But if you want to show that your side is different
and you are being especially loyal to your side, you may have to go against the norm.
So as you may know, at the time of the election of Trump, there was a subset of our society who felt neglected, who felt that their voice was not being heard and that the political establishment was not catering to them. And so Trump stood up
and said, I will cater to you. And he went out of his way to show loyalty to that group by
countersignaling in many ways, by doing the opposite of what an ordinary politician might do to appeal to everyone, to show: I really am appealing to you in particular, and I'm going out of my way to raise the cost of appealing to other people, to show that I really am loyal to you.
And he did convince that group that he was unusually loyal to them and they voted for
him on that basis and he has successfully countersignaled his way
into the presidency. The rest of the world and other people are saying, but this is not the
usual leader, and of course the people who voted for him said, yes, that's exactly how I knew he
was trustworthy on my side, is that he countersignaled the usual signals of overall political
competence.
But we often do that to signal loyalty.
We often go out of our way to pay costs to signal loyalty.
So one of our chapters is on religion,
a topic that I know my host up here has written a lot about.
And one of the standard stories about religion
is you may agree to unusual rituals
and to believe strange things
in order to convince people that the people you share those rituals
and strange beliefs with, that you are tied to them,
and that it will be expensive for you to leave them,
and that they can, therefore, rely on you.
Actually, coming back to Trump just for a second, viewing a lot of this through the lens of loyalty explains a few other things.
Because when you look at how people in his inner circle,
or people who have to function like his press secretary,
try, with as brave a face as possible, to put some positive construal on his lying or his mistakes
or his misrepresentations of fact,
that does function as a kind of loyalty test.
I mean, whenever people with real reputations have to get out there and put both feet in their mouths
so as to pretend that the president didn't do likewise, it is a kind of, I mean,
it looks like a bizarre hazing ritual from here, but it does signal loyalty.
But again, those people across the border, they have tyrants and we have kings.
It's easy to criticize the other side for being excessively loyal and submissive,
but this happens all across the political spectrum.
It's not just on the Trump side.
Yeah.
I don't know what that yeah meant.
I wasn't accepting that at all.
That was confabulation, in case you were wondering.
So, you're an economist by day.
Let's spend a few minutes on incentives,
because it seems to me that many of the problems in our lives
are the result not of bad people doing bad things because they're bad,
but all of us good people or more or less good people struggling to function in systems where
the incentives are not aligned so as to get us to do the right thing most of the time or make it
easy enough to do the right thing. I think you have a line in your book about incentives being like the wind. I mean, you can decide to row into it, or you can tack against
it, but it's better to have the wind at your back. And so how do you think about incentives, and
what's the low-hanging fruit here? But what is it that we could be doing differently in any area
of consequence?
So our book is about motives, and money and power and respect are things we have as motives, and incentives are often aligned with those things. So we often do the things that give us more of these things we want, but we'd rather not admit that those are our highest priorities. And so we're usually reluctant to overtly just do what it takes to get the money and respect.
So in most areas of life, we have to put some sort of gloss of some higher motive that we have to pretend we're trying to do.
And that means often very direct, simple incentive schemes don't work. They're too obvious.
Just like your incentive to reciprocate the dinner,
that's an incentive you have,
but you have also an incentive to not admit that too directly because otherwise you would force them to admit
that they mainly wanted the reciprocation.
And so this is an issue with incentives: many of the problems we have in the world happen because we have insufficient incentives to do the right thing. But often that's because we don't want to admit how important incentives are, so we don't restructure things to produce the incentives we need, because we want to pretend we don't need them.
So, for example, your doctor's incentive to give you the best treatment can often be compromised by the fact that they might, under one incentive system, want to treat you more just because they get paid every time they treat you, or, under another incentive system, want to treat you less because they have to pay out of their pocket every time they treat you.
Under either case, their incentives might not be well aligned with you, but you could have set up
some sort of more direct incentive system where they had a stake in your health, but you might
not be comfortable with asking for that because that might show that you didn't trust your doctor.
You might rather on the surface pretend like you trust your doctor and they trust you and you have a nice, comfortable relationship.
This is also a problem in financial investment, actually.
An awful lot of people invest an awful lot in intermediaries who take a lot of money but don't offer that much in return.
And people often just like the relationship they have with the intermediary and they don't want a distrusting relationship that would have some explicit stronger financial incentives, so they accept
a weak relationship. People often want to feel like you had a relationship, and that relationship
is degraded by the idea that you might not have trusted them. I'm a researcher in academia,
and most money comes in the form of grants where they say
apply for the grant and then they might give you the grant.
We've long known that prizes are often more effective.
A prize is where they say if you do the following thing, then we'll give you this much money.
And a prize can give stronger incentives for people to do things, but a prize is less trusting.
And you as the granting agency often want to just form a relationship with someone
and then take credit for them as if we were buddies. And this prize sort of makes an arms-length
distance where clearly I don't trust you if I'm going to only pay you if you do this measurable
thing. And so we'd rather have this closer relationship than to have a stronger incentive.
Is there a meta level to many of these considerations where it can be
reasonable to not follow the purely rational line through all of these
problems? It sounds like what would happen is if we took all of this to
heart, we would try to bootstrap ourselves to some new norms that paid better dividends, or seemed more rational economically or otherwise, or in terms of health outcomes. And yet, given human nature, we might find the price of anchoring
ourselves to those new norms to be unacceptable for one reason or another.
So the way I would summarize this is to say our usual institutions let us pretend to be trying to get the thing we pretend to want
while actually under the surface giving us the things we actually want. Policy analysts typically
try to analyze policy reforms that would give us more of the things we pretend to want. And we're usually uninterested in that, because we know we don't actually want more of the things we pretend to want. If you could design a policy reform that
let us continue to pretend to get the things we pretend to want while actually getting more of
what we actually want, we'd like that. But we can't admit it. If we stumble into it, we'll stay there.
But if the policy analysts were just to say out loud, well, this is a system that will give you more of this thing that you actually want, but you'd have to admit that you want it, then we don't want to admit it, and we won't want to embrace that.
So yes, what we want to do is pay for the appearance
of the thing we're pretending to want,
and we're often paying a lot for that appearance.
I would love to see a transcript of what you just said there.
So I'm going to ask you some rapid-fire kind of bonus questions here.
I want to leave a lot of time for Q&A
because though conversation isn't about just exchanging information,
you have a lot of information to exchange,
and I want to get the audience involved.
But if you had one piece of advice
for a person who wanted to succeed
in your area of work, what would that be?
I am an intellectual,
and my measure of success would be insight.
There are other measures of success.
You could have a prestigious position, you could make a lot of money, you could get a lot of friends. But if the measure
of success is insight, then there are a number of strategies, one of which is just to look for neglected areas. So as we talked about, there's a strong norm in ordinary conversation
to follow the conversation, to talk about what everybody else is talking about. And academics do that, news media does that, and we do that in ordinary
conversations in a group of people. But for intellectual contribution, if you jump right
in on what everybody else is talking about, your chances of making a large impact are pretty small. You're adding a small amount to what everybody else is talking about. If you go
talk about what somebody else isn't talking about, find something important but neglected, your contribution can be quite large, even if you're not especially
brilliant or well-tooled. And so one very simple heuristic if you want to produce intellectual
insight is just to look at what other people aren't looking at that seems important. And
hope that later on they'll come around to your topic and realize that you did make a
contribution. But how long would you stay in that important area?
Waiting for people to come around?
You don't have to stay.
You have to stay long enough to make a contribution and then you can go off looking for another
area to make a contribution to.
What if anything do you wish you had done differently in your 20s, 30s or 40s?
You can pick the relevant decade.
Well I wandered around a bit, much like Sam, in that I started my PhD program at the age
of 34 with two kids, ages zero and two.
It's a relatively late start.
That was, in some sense, the price for continuing to switch because other areas seem to actually
be more objectively important and have more promise.
But as I said before, this book that I'm out with here is summarizing the thing I wish I would have known at the beginning of that social science career,
which is that we are just often not honest with ourselves about our motives. So the thing I'm
most known for actually is something called prediction markets, betting markets on important
topics. And they do work well. And they give people something they say they want, which is more accurate estimates and
information on important topics. And it turns out people are usually not very interested in them.
Even though you can show over and over again in many ways that they work,
and they're cheap, et cetera. Part of why I didn't realize that that would happen is I
took people at their word for what they want.
So you wish you hadn't spent so much time on prediction markets?
Well, I wish I would have understood the constraint that people are not honest about
what they want and thought about that constraint when I was initially trying to design institutions.
So I've read many other ideas and worked on ideas for reforming politics and medical purchasing
and information aggregation, et cetera. And in each
case, I assumed the usual story about what we're trying to do and worked out a better answer. And
we actually can not only work out better answers, we can show them and not only in math, but lab
experiments and field experiments. We do actually know many ways to make the world better substantially
and the world's not interested in most of them, because we know how to make the world better according to the thing that people say they want: to learn more at school, to get healthier at the hospital, to get better policy in politics. But in fact, emotionally, at people's
heart, they kind of know that's not what they want. And so they're not interested. So I wish
I would have known that 20 years ago. And this book, hopefully, is something somebody earlier in their career can pick up. You might know a 20-year-old who's been saying for a while, everybody's bullshitting. Nobody's telling the truth.
Where can I find out what's really going on? I'm hoping our book can be that book.
So 10 years from now, what do you think you'll regret doing too much of or too little of at
this point in your life? I mean, if I knew that, I would presumably be doing something different.
Do you actually think that's true?
Isn't that just one of the problems?
For instance, you know you want to lose weight,
you know how to lose weight,
but you still can't get the ding-dong out of your head?
The major issue would be if I'm neglecting the long run
for the short run, right?
I don't know if I am, but yes,
if I am neglecting the long run, then I would regret not investing more in the long run. But I am primarily investing in this
long run effort to produce intellectual insight. And I actually think there are scale economies
in that. So the more fields you learn, the more mental models and tools you have to learn new fields. So you can actually learn new fields faster, the more fields you know. So if your intellectual project is to learn many fields
and then find ways to combine the insights from them together, that's something you continue to
do more and better as you get older. And so I'm enjoying that wave and not thinking I'm over the hill at all.
What negative experience, one that you would not wish to repeat, has been most valuable to you?
Most valuable to me? Negative experience?
Or changed you for the better?
But it's got to be negative. You wouldn't want to repeat it.
Well, so early in my academic career, I sort of really just failed to do the simple standard thing of
applying to the best colleges. I'm not sure what went wrong, but my family and I somehow just did not go through the process of applying to good colleges far away. We just sent me to the
local college, which was easy for me. Okay. Too easy compared to my colleagues. So I had lots of
free time. So perhaps I might have
thought I should have gone to a more challenging college, and then people would have challenged me,
but that made me who I am in the sense that with all that free time, I just started studying stuff
on my own. I sort of made up my own topics and made up my own questions and just started going
in and working on things. And so actually, I was a physics undergraduate major, and the first two years of physics classes go over all the major topics, and then the last two years go over all the major topics
again with more math. And I had all these questions that the math was not
answering and so what I did in the last two years of college was to just play
with the equations. Just rearrange them, try different ways. And by spending the semester rearranging the equations, I could ace the exams. But I didn't do any of the homework.
And so professors who had a formula, like so much percentage homework, so much percentage exams,
they didn't know what to do with me exactly. And so I got low grades in some classes,
although people were willing to give me letters of recommendation. But basically, that formed me.
That is, I became the person who didn't, like, do what I was told.
I wasn't following a path that people had led for me.
And I wasn't going down learning the things I was supposed to learn.
I was just making up my own problems and my own questions and working them out for myself.
And in the end, that has some advantages.
But I'm not sure that was best overall.
I'm going to put that in the bragging category.
What worries you most about our collective future?
We are collectively ignorant compared to what we could be. We are a vast population, a vast world, a lot of smart people, very capable people. We have many great tools,
and we just don't pull that together into a consensus that we can use very well.
We fail to do something we could do quite easily. My work on prediction markets was one attempt
to try to create an
institution which would allow us to collect what we know together effectively and efficiently.
And it would work if anybody was interested. But we're not very interested. And so part of
my intellectual work is just to try to diagnose why we aren't interested, as part of understanding how we could do better. And I think this fact that we're all trying to show off to each other
is part of it. And if I ask, well, what's going wrong with our showing off? I would say the problem is we are showing off to audiences that are too
ignorant. That is, if we focused on a really smart audience, a really knowledgeable audience,
we're trying to show off to them, then we would be forced to show off in better ways.
So for example, we haven't talked much about it, but basically I've said medicine is mostly about
showing that we care rather than helping people to get healthy.
So when grandma's sick, you make sure she gets expensive medical treatment, the sort that everybody would say is the reasonable thing to do, even if it's not actually very effective.
But as long as your audience doesn't know it's not very effective, they will still give you credit for being caring about grandma.
If your audience knew that the medicine you were pushing hurt her instead of helping her, they
would not consider you as such a caring person. So the more that our audience knows about
what actually works and has what effects, the more we would all be pushed to do things
that actually had good effects as part of the process of trying to show off and show
that we care. Similarly in politics.
Actually, before we move on to that,
say more about the mismatch in medicine.
How is it that we know,
or how is it that you think you know that it's more about caring than about results?
So again, the structure is a set of puzzles
that don't make sense from the usual point of view.
So it turns out we have data on variations in health
and variations in medicine,
and there's almost no relationship.
That is, geographic areas that spend more on medicine
or have people do more doctor visits, those areas are not healthier.
We also even have randomized experiments
where some people have been given randomly a low price of medicine,
and they consume more, and other people have a high price,
and they consume less, and then there's no difference in health
between these groups.
So at a very basic level, there's very little, if any, correlation between health and medicine. Not only that,
there are other things that correlate strongly with health that people show very little interest
in. Well, there must be a lower bound to that, though, because some medicine is life-saving,
clearly, right? Where are you putting the line between... Well, it's not a line that is...
There's a whole mix of medicine,
and some of the stuff helps,
and that means other stuff hurts.
So if you could just get the stuff that helps
and avoid the stuff that hurts,
why, then you could do better. But people show relatively little interest in doing that.
And so some medicine hurts.
It doesn't just do zero good on average; some of it actively hurts. And we are also not very interested in things like exercise and air quality, which do correlate with health.
But what's the measure of people not being interested in the information that would allow
them to get better medicine? We have experiments and studies where people
have been given access to information and asked if they would be willing to pay much
for it and even just given it and seen if it affects their behavior. And consistently, if you give people privately information about
the quality of medicine, they just aren't interested and don't act on it.
And they won't pay for it, right? They certainly won't pay for it, exactly.
So there was a study of people about to undergo heart surgery where a few percent of people
undergoing heart surgery die. So that means you face a few percent risk of death.
That should be a serious situation.
They said, we have statistics on the local surgeons
and the local hospitals in terms of what the percentage is
of those patients dying there, and it varies by quite a bit.
Twice as much in some places as in others.
Would you like this information?
Only 8% were willing to pay 50 bucks.
And those who were just given the information didn't act on it.
Why is it that I think everyone I know is in the 8%?
Well, that's what they're pretending.
A way to understand this is to think about Valentine's, which happened recently.
On Valentine's, it's traditional to try to show that you care about someone by, say,
buying them a box of chocolates.
Now, when you do this, do you ask how hungry they are when you think about how large a box to buy?
No.
Plausibly, you need to buy as much chocolate as it takes to show you care more than somebody else, regardless of how hungry they are, which is like medicine.
We just give people a lot of medicine, even though the extra medicine isn't very useful.
And if you ask, well, how do I know which quality of chocolate to get? You know that you need to give a quality of chocolate that's a common signal of quality. If you happen to privately know that this is a great kind
of chocolate or they happen to privately know a certain kind of thing is a great kind of chocolate,
that won't actually affect whether you interpret this as a generous act. The interpretation of
generosity is based on a common signal of quality.
So if medicine is a way to show that we care, then similarly what we want is common signals
of quality. We aren't actually very interested in private signals of quality of medicine,
which is what we actually see. All right, back to rapid fire questions.
I'm taking too long. No, no, I've been asking follow-ups as well.
If you could solve just one mystery or problem... If you'd like to continue listening to this podcast,
you'll need to subscribe at samharris.org.
You'll get access to all full-length episodes
of the Making Sense podcast
and to other subscriber-only content,
including bonus episodes and AMAs
and the conversations I've been having
on the Waking Up app.
The Making Sense podcast is ad-free
and relies entirely on listener support.
And you can subscribe now at samharris.org. Thank you.