Making Sense with Sam Harris - #317 — What Do We Know About Our Minds?
Episode Date: April 20, 2023
Sam Harris speaks with Paul Bloom about the state of psychological science. They discuss fiction as a window onto the mind, recent developments in AI, the tension between misinformation and free speech, bullshitting vs lying, truth vs belonging, reliance on scientific authority, the limits of reductionism, consciousness vs intelligence, Freud, behaviorism, the unconscious origins of behavior, confabulation, the limitations of debate, language, Koko the gorilla, mental health, happiness, behavioral genetics, birth-order effects, living a good life, the remembered and experiencing selves, and other topics. If the Making Sense podcast logo in your player is BLACK, you can SUBSCRIBE to gain access to all full-length episodes at samharris.org/subscribe. Learning how to train your mind is the single greatest investment you can make in life. That’s why Sam Harris created the Waking Up app. From rational mindfulness practice to lessons on some of life’s most important topics, join Sam as he demystifies the practice of meditation and explores the theory behind it.
Transcript
To access full episodes of the Making Sense podcast, you'll need to subscribe at samharris.org. There you'll find our private RSS feed
to add to your favorite podcatcher,
along with other subscriber-only content.
We don't run ads on the podcast,
and therefore it's made possible entirely
through the support of our subscribers.
So if you enjoy what we're doing here,
please consider becoming one.
Well, recent developments in AI have been interesting.
I am sure I will do many more podcasts on this topic, but for the moment, some people have asked whether GPT-4 and its rapid adoption
have changed my views at all about AI and AI risk.
As some of you know, I did a TED Talk on the topic of artificial general intelligence in 2016,
and that's available on YouTube and elsewhere, presumably.
And nothing has really changed about my concern for AGI and alignment,
artificial general intelligence, and the problem of creating it such that it is aligned with our
interests. It's probably a worse problem now than I thought it was, because the main change here is the suddenness with which AI has improved, and the way in which we have blown past all of the landmarks that AI safety people have carefully erected. That has alarmed me and many other people. Because in all my conversations with people like Nick Bostrom and
Max Tegmark and Eliezer Yudkowsky and Stuart Russell, it was more or less an explicit
expectation that as we cross the final yards into the end zone of human-level intelligence,
even under conditions of an arms race, which are not at all ideal for solving the alignment problem, there would be a degree of caution that would sober everyone up. And so,
for instance, the most powerful AI models wouldn't be connected to the internet, or so it was thought,
and they obviously wouldn't have APIs. They wouldn't be put into the hands of millions of
people at the outset. But with GPT-4, we've blown past all of that. And so now it's pretty
clear that we're developing our most powerful AI more or less in the wild, without fully
understanding the implications. So in my view, this does nothing to suggest that we're better
placed to solve the alignment problem. And that problem seems to me to be as big as ever. And it
has also magnified the near-term risk of things going haywire due to unintended consequences
and potential malicious uses of narrow AI. With GPT-4, it's almost like we've done our first
above-ground nuclear test, and we've seen the flash of very impressive AI.
And now many of us are just waiting for the blast wave of hoaxes and lies to knock everything over.
Now, I hope I'm wrong about this,
but I'm half expecting the Internet to be eventually inundated
by fake information, by lies and half-truths, to a degree that could render it
totally unusable. I mean, just imagine not being able to trust the authenticity
of most photos and videos and audio and text. I mean, imagine what the internet becomes when AI-generated fan fiction crowds out everything else.
Then imagine the cultic entanglement with all this misinformation on the part of billions of people globally. It seems like it could be ivermectin and adrenochrome and dogecoin and catfishing scams
and ransomware and who knows what else for as far as the eye can see.
And even the best-case scenario could still look totally uncanny.
I mean, let's say we solve the misinformation problem, though how we're going to do that is anybody's guess.
But even if we did, what will people want when all valid information can be produced by a machine?
All art and science and philosophy, when even the smartest and most creative people can
be taken out of the loop. What will we want then? And for some things, I think we just
want results. I don't care where the cure for cancer comes from.
I just want it.
So there's no future in artisanal oncology.
Just give us the winning algorithm.
But what about nonfiction writing?
If you just want the answer to a specific question,
I think AI will be fine.
If you ask ChatGPT to tell you the causes of World War II, it does a pretty good job.
But this will never substitute for reading Churchill, provided you care to know how the
world looked to Churchill himself, and not to some credible simulacrum of Churchill.
So I don't think anyone knows how all of this is going to transform our relationship to information.
But what I'm experiencing personally now is a greater desire to make contact with the real world.
To see my friends in person, to travel, to be out in nature, to just take a walk.
And it may sound self-serving to say this, but podcasts and audiobooks are becoming more and more important for this.
I still spend a tremendous amount of time in front of a screen and reading physical books,
but I now spend almost as much time listening to audio, because it's the difference between
being stuck at my desk and taking a three-hour walk or a hike. And being able to do that and still call it work
is just such an amazing have-your-cake-and-eat-it-too experience.
And while all of this is still being enabled by a smartphone,
the effect on my life is quite different from being married to one's phone for other reasons.
Listening to audio really is different than
endlessly checking email or Slack or Twitter or something else that is fragmenting your attention.
Anyway, it's pretty clear we're witnessing an AI arms race and gold rush and that things are about
to get very interesting. And it seems quite reasonable to worry that the landscape of incentives
is such that we might wind up someplace
truly undesirable.
In fact, someplace that actually no one wants to be.
And we might arrive there
despite everyone wanting to avoid such an outcome.
So there's a lot to figure out,
and I am sure I will do a few more podcasts on this topic
before I'm replaced by a bot that does a far better job of it.
And now for today's podcast.
Today I'm speaking with Paul Bloom.
Paul is a professor of psychology at the University of Toronto,
and also a professor emeritus of psychology at Yale.
His research explores the psychology of morality, identity,
and pleasure, and he is the recipient of many awards and honors, including most recently the
million-dollar Klaus J. Jacobs Research Prize. He's written for many scientific journals,
such as Nature and Science, and for The New York Times, The New Yorker, The Atlantic Monthly, and elsewhere.
He is the author of eight books, including Against Empathy, Just Babies, How Pleasure Works, The Sweet Spot, and his new book is Psych, the Story of the Human Mind, which
we discuss in this conversation.
We cover many topics here, including fiction as a window onto psychology,
recent developments in AI,
the tension between misinformation and free speech,
the difference between bullshitting and lying,
truth versus belonging,
reliance on scientific authority,
the limits of reductionism,
consciousness versus intelligence,
Freud, behaviorism, the unconscious origins of behavior, confabulation, the limitations of debate, language, Koko the gorilla, mental health, happiness, behavioral genetics, birth-order effects, living a good life, the remembered and experiencing selves, and other
topics. Anyway, it's always great to talk to Paul. And now I bring you Paul Bloom.
I am here with Paul Bloom. Paul, thanks for joining me again.
Great to talk to you again, Sam. I've lost count, but I am confident that you are my returning champion and most frequent guest.
So congratulations if you need yet another honor to add to the trophy that you keep on the mantle.
Yeah, it's a funny thing to put in your CV.
I'd like to see that.
Please put it in your CV.
I would like to see the reactions to that.
Yeah, some dean's going to be scratching his head, but I do take it as an honor. I like talking with
you. Well, people love hearing from you. So this is not altruism directed in your direction. This
is pure wise selfishness on my part. But you have a new book, which is the nominal occasion for this conversation, and that
book is Psych, the Story of the Human Mind, which we'll talk about. You have produced essentially a
Psych 101 course in super accessible, non-boring format for the general public, so that's great,
and people will enjoy it.
That's exactly, that's a nice way of putting it. I aspire to do exactly that, which is present the whole story of psychology. But, you know, I hate reading textbooks. I couldn't bear
to write one. And I tried to put it in a way that people could enjoy it. And also, textbooks have a
sort of neutrality and objectivity. And, you know, I aspire towards that.
I try to tell the story in kind of a straightforward way.
But I also often give myself the luxury to weigh in on different debates.
You can't do that in a textbook.
No, this is not at all textbook-like, but it does cover the full sweep of what we know or what we think we know or what we are embarrassed not yet to know about
the human mind. Yeah. And there's a lot we don't know. I know there's some other topics we might
want to touch before we jump into the book, but how do you feel about the state of our understanding
of the human mind at this point? I guess you and I have spoken about this before, I think specifically with
respect to parenthood and how surprised we were to realize, even you being a developmental
psychologist, how little science informed our day-to-day experience of parenting. How do you
feel about the relevance of science to living a good life altogether at this point?
Guardedly positive.
I wouldn't have written a book if I didn't feel like psychology had interesting things to tell us
about questions that matter a lot.
Like, you know, how to live a life of happiness.
How much can we trust our memories?
How does language work?
Even questions which have become quite urgent these days with the dawn of AI and whatever revolution we're now going through.
I think psychology has a lot to say about it. On the other hand, I try to be honest in the book.
A lot of our findings are not as robust as we thought they were. And I still believe,
and I don't know who said it,
maybe Chomsky said this, which is that you could learn a lot more from a good novel or a good TV series or a good movie than from a psychology textbook. If somebody was going to ask, what's a marriage
like? What's it like to raise teenagers? What's it like to grow old? I wouldn't point them to a
psychology textbook. I'd point them to some
good novels. Yeah, that's interesting. I used to be a big reader of fiction, and then at some point
things flipped, and now I've got to think I'm 20 to 1 nonfiction to fiction, or probably worse than
that. It could be 50 to 1. But in recent years, I have arrived at that epiphany myself.
It's just there's so much to learn about human life through fiction.
And it seems strange to say that because it is fiction.
But what you're seeing are the best attempts of some of the smartest
and most creative people to capture the substance of human
experience. And some of the most compelling attempts at that are, by definition, what we
have singled out as the most valuable forms of literature. And I guess we could add film and
television here as well. But it seems strange to say it, but it is,
in some cases, our most accurate window on to, at minimum, the lives of others.
Yeah. And I think a good writer, a good filmmaker, has insights into the lives of others,
often from their own experience. And there's something about it which is often more powerful
and more transcendent than what you get through psychological research.
You know, you see a movie like Tar, and you learn about artistic enterprise and about cancellation, about good and evil.
The Banshees movie, a lovely meditation on friendship.
And, you know, I don't know whether things will ever be different, whether there'll be a point where I'll say, no, no, check out the research, it'll tell
you more. There's certainly things the research could tell you that the novelists never could.
And so maybe it's a matter of staying in our lane.
Well, what do you, this is going to be a disconcertingly large question, but what do
we know about the human mind at this point? The year is 2023. If
you had to distill what we know or what we think we know at this point to somebody who really knew
nothing about the last 150 years of mind science, what do you think we know?
We don't have a theory of the human mind, and I don't think we ever will.
Not because of our inadequacies, but because the mind is many things.
And so in some way, if you ask, what do we know about a human body?
I have a feeling that an anatomist or a physiologist would say, well, you know, let me tell you
about the heart.
Let me tell you about the spleen.
Let me tell you about the ankle bones.
And so we know a fair amount about the different components of the mind. We know
some surprising things about memory, surprising about personality, language, motivation, sex.
And generally, so, maybe stalling for time here, I'll try to answer your question.
We know the mind is the brain. We don't exactly know how the brain gives rise to consciousness,
but we know how the brain gives rise to intelligence. It's not so dissimilar to any other intelligent machine that
we now possess. We know that a lot of our mental life is the product of natural selection. We know
a lot of it is the product of cultural evolution. We know, and here I'll give a shout out to Freud,
we know a lot of the most interesting stuff
isn't accessible to our consciousness.
We know we're often conflicted beings.
We know emotion, we know,
and I think we know,
a lot of my colleagues would disagree with me,
that we could be extraordinarily rational creatures
with a capacity for reason and imagination,
creativity that far exceeds anything else on the planet.
But we can also be fooled.
We can fool ourselves.
So a lot of things like that.
We've set out a nice menu of topics we can hit.
So I think we should take those piece by piece.
But before we do, maybe—
It's honestly a problem for a book like this.
I'm looking forward to talking to you about this. But previously we've talked about very focused topics, like your other books on empathy, or topics of mutual interest. Before we got on the mics, we were talking about my deleting my Twitter account, which is not disconnected from what I find most interesting and troubling about what's happening with AI at the moment. And I did notice in your book the one thing that was clearly dated,
you know, and it was dated as of, you know, two months ago.
Embarrassingly dated, yes.
But I mean, really, you could not have, you would have had to have been Nostradamus to have foreseen how quickly that particular, I think, paragraph was going to age. But, you know, AI has moved on
quite a bit since you published your book.
And how has it struck you, AI specifically for the moment?
So just to fess up, I have a few paragraphs in my book where I sort of dismiss statistical
attempts to model the human mind and say, oh, these could never work. And I think recent events
caught me a bit flat-footed on this.
I kind of like to be honest about when I get things wrong and when things surprise me.
And AI, what has happened with GPT-4 and Bing
has been a huge shock to me.
If in one of our conversations a couple of years ago, you had asked me what's going to happen,
when will we have a system capable of having a perfectly normal conversation and saying
intelligent things?
I'd say, I don't know, 20 years, 50 years, maybe never.
And yet here we are.
And so I'm kind of stunned by it.
Like a lot of people, and I've heard you devote a few podcasts talking to people like my friend
Gary Marcus.
Like a lot of people, I'm worried about it. I don't know where I stand for people who want to sort of halt research for
a period, but I think it's an idea worth taking seriously. I'm not really necessarily endorsing
the idea that it will kill us all. But, and you made the argument a while ago, if the odds are
like 5% or 10%, that's worth taking rather seriously.
And as a psychologist, I wonder how much the success of models like GPT-4 tells us about how our minds work.
Yeah, yeah.
It may not tell us much at all, or certainly need not, to constitute its own form of intelligence that disrupts our lives or benefits us immensely, depending. Yeah, I mean...
I think that's a deep point. Yeah, I really think... My answer to the question is that
humans do not learn, do not achieve our intelligence in anything like the way
these large language models do. There's no resemblance to the development of a child.
And yet, they have an intelligence of some sort.
And so maybe there's more.
I mean, actually, I do think that this suggests there's more than one way to become smart.
Yeah, I mean, there's a few red herrings here I think we should dispense with.
One is the confusion about the importance of consciousness here and any connection necessary or otherwise between consciousness and intelligence.
We simply don't know how and when consciousness emerges and whether it comes along for the ride at a certain level of complexity and a certain level of intelligence or not. But there's simply no
question that we have built intelligent machines and we're continuing to build them. And they're
intelligent, i.e. competent, whether or not there is ever anything that it's like to be those
machines. I think it's an important question in its own right, but it's quite separable from whether
intelligence itself is substrate independent and whether it can be aligned or unaligned
with human interests and whether we might be building systems that we may one day lose
control of, right?
It's just that consciousness is a completely separate question there.
And it has ethical importance, because if we're building machines that are conscious, then we're building machines that can suffer or be made happy. And, you know, that's an important thing to have done or to avoid doing. The concern for me is that I think we're in danger of just losing sight of whether the question of consciousness is even interesting anymore, because we'll be in the presence of machines that are passing the
Turing test perfectly. They're virtually doing that. They're doing that now in a text-based way.
And at a certain point, they're going to seem conscious, and we're going to treat them as
though they were conscious, whether or not we ever know the ground truth there.
I agree, every word of that.
The question of what it is to become intelligent
is kind of a bread-and-butter scientific question.
You know, computers can do intelligent things,
brains can do intelligent things.
We have some conception of how,
how we could build a machine that could play chess
or carry on a conversation,
and how our brains do that, too.
The question of consciousness, as you put it, is entirely independent.
But also, it's going to be important because, you know,
there was a guy at Google, Blake Lemoine, I think,
who was working with a chatbot and became convinced that it was sentient.
And, you know, Google, I think, put him on leave or something
because he was protesting that it was now held as a slave.
It should have its own rights, its own legal protection.
And he came in for a lot of mockery,
which a lot of it I think was unfair.
But the question he struggled with
is something which is going to happen
more and more and more and more.
We're going to build these machines.
It's going to be increasingly complicated.
And when, say, we're in a situation where each of
us owns one that regularly interacts with us, has wonderful conversations with us, seemingly has
empathy and compassion for us, gives us good advice, we talk to it all the time, it will be
inescapable to see it as conscious. And so people will ask, is this correct? And it's of moral importance.
If it's conscious, it comes under the scope of what you've called a moral landscape.
You can't do bad things to it.
You shouldn't.
But we have no idea how to find out.
And that's going to be a deep problem.
And that's a problem which is going to bite us on the ass pretty soon.
The only thing that has changed for me since the emergence of
ChatGPT and its cousins is that I've grown more concerned about the near-term chaos and harms of
AI that falls short of AGI, artificial general intelligence. I just think these tools are so powerful and so
disorienting in and of themselves that, I mean, I just want to think about turning this technology
loose to produce misinformation, which many people will. I mean, unless the AI becomes a perfect remedy for that sort of thing, it just seems like
our information landscape is going to get so gummed up with what is essentially persuasive spam
that I just don't know how we talk or think about anything in public. Ironically, what seems a step in the direction of democratizing the search for
knowledge, I think will quickly pendulum swing into even greater gatekeeping, because, you know, we'll only be able to rely on trusted sources of information. One example here: when you think about deepfakes of video and audio and just images, photos, becoming so persuasive that you simply can't tell whether this is a real video of Putin declaring that he's launched all his missiles or not.
Only an AI could do it, and maybe an AI can't do it. I just think at a certain point, we're going to declare epistemological bankruptcy and say something like, okay, well, if an image hasn't come from
Getty Images, we can't trust that it's an actual image of anything, right? And there'll be a hundred
versions of that sort of thing where you're just, what we're seeing is a greater siloing
of information and a greater role for gatekeepers.
I mean, it obviously could play out differently, but that's sort of what I'm expecting here,
because what digital information is going to be taken at face value when, again, I'm thinking,
like, this is not years away, this is weeks or months away.
Yeah. I mean, the gatekeepers themselves may be AIs. We might be envisioning the beginning of an
arms race where people are using them to distribute, you know, false news and misinformation.
And other people are using it to filter it out.
You could imagine, and I think the science fiction writer Neal Stephenson had a scenario like this, in which we'll all have a personal system that uses our own preferences to filter things out and try to separate the fakes from the originals.
But it might reach a point, no matter how smart you are,
how smart an AI is, it can't tell a fake from the original.
And then you go back to where does it come from?
Where's the imprint?
And I could just see the world's going to change in that regard.
And I wanted to ask you, do you use GPT-4, 3, or Bing in your
everyday life? Not yet. You know, insofar as I have played around with it, I've been underwhelmed
by what has come back to me. I'm overwhelmed by the prospects for manufacturing semi-persuasive
disinformation, and also just getting confused. It's like you ask it a question
and it will confidently give you an answer. And then when you see that some of its answers are,
in fact, hallucinations, it's disconcerting to think about ever relying on it in a fully
confident way. I mean, I got to think it's only going to get better with respect to its error
rate. But it just seems that we're very close to someone being able to ask, you know, ChatGPT-4, or let's say 5, you know, write me a medical journal article in the style of JAMA about how dangerous mRNA vaccine technology is and give me exactly 110 references.
And the better that gets, it's just you could produce fake journal articles by the ream and just populate the world with them. I just don't know the sheer scale of it, right?
I mean, the fact that we might find ourselves in a world where most of what is online is fake,
I just think that's possible, and I'm not sure what we're going to do about it.
And you're right that somewhat paradoxically, this could force a move back to more respect,
more weight, more value given to sort of trusted traditional authorities.
Where, you know, if you hear about a video, you see a video, you might then have to go to the New York Times website to see if it's confirmed or not confirmed.
You go back to people or whatever, whoever you trust.
But in some way, this is a very old problem.
The problem of forging
signatures and legal documents and so on, but social media magnifies it a thousand times over.
So I actually don't know if this is a change of topic or the same topic, but you did,
you did leave Twitter and I've heard you talk about why. It seemed like your reasons for
leaving Twitter were a little bit independent of what we're talking about now.
Yeah. Well, the misinformation piece was important, but it was really misinformation
as applied to me. I mean, I became, you know, the trending topic of the day, and it was a
distortion of what I actually said and, you know, in certain cases meant to say,
because in this case, I wasn't speaking especially clearly.
I mean, the reason why I left was I just noticed that I had reached a kind of tipping point where
Twitter was obviously making my life worse, right? And it was just unambiguous. Whatever
story I could tell myself about the benefits of the good parts or just the necessity of staying engaged with it as a
source of information, kind of taking the pulse of the world moment to moment, as I
imagined I was doing, checking Twitter compulsively.
It just, it was making me a worse person.
In particular, it was making me see the worst of other people in a way that I became convinced was a distortion of the way the world is. I mean, people are not as bad as they were
appearing to me to be on an hourly basis, you know, day after day, week after week, month after
month. And really, I mean, I was on for 12 years and it was just getting worse and worse, but it did reach a tipping point when, you know,
Trumpistan went berserk in response to something I'd said on another podcast.
And a couple of things were interesting about that. One is that while in, you know, red-pilled
Twitter, there had been a, just a complete, you know, run on the bank of my reputation. I was completely defenestrated. In my world,
and really in any place I care about, nothing had happened. And so it was strange to see,
there's this phrase, Twitter isn't real life, which I think can be misleading because I think
Twitter can get people elected president and lots of things can happen.
And if you weren't on Twitter, you didn't know they were happening for quite some time.
But there is a sense in which, at least for my life, it's not real life or it became unreal.
And having gotten off of it, I'm amazed at the difference in my life. And it's not just
the obvious difference of I'm not hearing from 10,000 psychopaths on a daily basis or people
who are effectively behaving like psychopaths. It's just my sense of what the world is has
changed. Now, it could be that there's a bit of a delusion creeping in, in that, you know, I'm not in touch with certain forms of information moment to moment, but I don't know. I just,
it's like a, it's almost a pre-internet existence. I mean, I spend a ton of time online and in front
of my computer as it is, so it's not pre-internet, but something has been stripped out of my life that was a digital phantom or a golem
or, you know, something awful, which, you know, I just, it's staggering to me how big, it's like,
I can honestly say that getting off Twitter is one of the most important things I've done in the
last decade, right? So it's just, it's an obscenity to me that I'm even in a position to say that,
right? Like that I managed to get so confused about what I should be doing with my attention
that I could effect such a comprehensive change in my life by simply deleting my Twitter account.
That's just, it's staggering to me. So it seems there could be two things going on regarding your
interactions with people, and probably both are true. One is going off Twitter, you simply spend less time dealing
with strangers, often malevolent or psychopathic strangers. The second is something which I've
discovered, which is sometimes you see somebody online, and maybe they have an extreme political
view. Maybe they're very into Donald Trump, or maybe they're just extremely woke or extremely anti-woke.
And then you meet them in person,
and they're invariably more nuanced, complicated, kinder,
more interesting, less caricatured.
I'm sure there's exceptions.
I'm sure there's people who are just as bad in real life or maybe worse.
But there's something about the dynamic of social media
that really does,
at times, bring out the worst in us. I got to say, I was a bit tempted to follow your lead,
but there's two things. One thing is, I don't have your status, your celebrity status. I don't have
that particular problem of being dredged over by crazy people. And the second thing is that I waste a lot of time on Twitter,
but I do find it's often extremely informative as to what's going on in my world.
Yeah. Yeah. That's what kept me hooked for all those years because I was following hundreds of
smart, creative people who are constantly surfacing interesting articles or paintings.
I mean, it was my news feed, you know?
So do you have...
I won't put you on the spot to answer.
No, I know nothing, yeah.
But no, I'm actually more asking, is there some account which has four followers, a little egg avatar that's you, just following the same people?
No, no.
No, I'm really off. I mean,
I occasionally have had to check it for, you know, to do research for a podcast or just to
get in touch with a specific piece of information, but no, like I go for weeks without looking at
Twitter, the website, and it's not that I haven't lost anything because, again, I was seeing articles and other things discovered for me by smart people that I'm surely not discovering myself now.
But it really does center on the point you just made, which is just the distorted sense of other people I knew I was getting but couldn't fully correct for.
Because in some cases, these aren't just strangers who, I imagine, if I met them over dinner, would be better than they seem to me on Twitter.
These are people who I actually know and have had dinner with,
but I could see what they were doing on Twitter,
and it was changing my opinion of them. These are now awful human beings who I used to like over dinner,
but I can't believe they're behaving this way. Right. So I felt like I had been enrolled in a psychological experiment that had gone awry, you know, probably five years ago at least. And it just took me a very long time to find reason enough to just bolt for the door. And, yeah, but when you add the
AI component to it and the misinformation component to it, I'm very worried about our collective ability to have a fact-based discussion about anything. I mean, even the
topics I've just raised, I mean, my claiming confidently that we have a misinformation
problem is the other side of a debate which smart people are having now, which I think we just can't
possibly bring to a satisfactory resolution. The other side is we've got people talking about, you know, media and social media censorship. And every, you know, reference
to misinformation or disinformation is a covert way of, you know, the deep state and the odious
establishment trying to suppress the populist democratic epistemology that is struggling to be born, right?
We're trying to force the Overton window into a certain shape and position
and make it impossible for people to talk about or think about topics that fall outside of it.
So we can't even agree about whether misinformation is a thing at
this point. Yeah. Yeah. I mean, I was going to say, just to go back a little bit, in response to
you going off Twitter: but Sam, you miss so much. And the truth is, to some extent, you do miss some
things. You miss some discoveries. You miss some very funny things, very clever things.
But you also miss stuff that you probably shouldn't be attending to in the first place,
not because it's necessarily mistaken, but because it's the outrage of the day.
It's people getting furious because something happened in this school in Nebraska,
or somebody said this, and they're getting, you know.
And in a few days, that will pass, and we'll move on to the next one.
And the amount of mental energy, and I'm speaking personally here, that I get caught up in,
for issues in which I actually have no expertise and no intrinsic interest.
But we're wired for gossip.
And hearing, oh my God, this person said this, and now the world's coming to an end.
And everybody, it just captivates us.
And it's appealing, I think, to our worst selves.
Yeah. It also gives you the sense that you're supposed to form an opinion about everything,
right? Especially when you have a big platform, you know, when you have hundreds of thousands or
millions of people following you, you know, something will happen and you'll feel like,
okay, this is an invitation to comment. And it's interesting not to have that
space for that kind of micro commentary in my life anymore. Like now I have a podcast where
I can decide whether I want to talk about something, but that's a very different decision
than whether to retweet something or comment on it. And the time course of the decision is different. You know,
lots of ephemeral things just fall away before you have even decided whether
they were worthy of your attention or, you know, worthy to surface in your commentary about
anything. And yeah, I mean, you know, I'm missing a lot by being off Twitter, no doubt,
but what I was missing when I was on Twitter were things like books, right?
It's becoming harder to read books, you know?
And so, yeah, it's kind of the pace of one's response to the information one is taking in.
And it's, I don't know, I mean, it's definitely a net good. It's not that it comes with zero cost, but I recognize that people have
very different experiences on Twitter or any other social media site where they happen to be. And
some people who are just putting out happy memes are getting nothing but love back and they have
no idea what I'm talking about. But I just, yeah, I'm worried that we have built tools that we don't know how to control, and they may in fact not be controllable by us, and they're controlling us, right? They're making certain types of conversation impossible. They're making it difficult or impossible to solve coordination problems that we really have to solve in order to get anything important done in the world.
And I just think they have created what seems like just unbridgeable divides in our politics.
This could have always been the case, right? And there might be analogies to the invention of the
printing press that made the same kind of indelible changes in how we did things or failed to do things. But
I don't know. I just think the way in which the outrage machine has no off button and the pace
of our engagement with the story of the day, the outrage of the day, and the way in which that gets
memory-holed because it's supplanted by a new outrage of the next day. And the way that the cycle time of those changes completely obscures
longstanding problems that we just do not have the bandwidth to think about.
It really just seems like we have built information tools that we just can't use effectively.
So I know a lot of people, I see what you're saying.
I agree with a lot of it.
I know a lot of people who are deeply concerned about exactly what you're talking about,
particularly now with AI adding something else to the mix.
And I share that concern.
But all of the solutions that get proposed often make me a bit queasy.
Jon Haidt suggests that social media basically shouldn't have a like or retweet button.
You modify the structure so that you don't get that sort of amplification and piling on.
Gary Marcus thinks the government should get involved in sort of controlling the runaway flow of misinformation. Robert Wright doesn't think it should be mandated,
but suggests that we should redesign social media to pretty much force people to eat their vegetables and get exposed to alternative views. And I don't know, where do you stand on all of that?
Yeah, honestly, I don't have any kind of remedy worked out in my head. Personally, I have just simply defected,
which makes the most sense for me. I'm trying to find a way of interacting with information and producing it that seems like it has real integrity, and it's getting harder to do. And I just see how siloed everyone has become in their preferred echo chamber. And while I don't feel that that has happened to me in any kind of comprehensive way, I certainly see people perceiving me, on any given topic, to have been stuck in some kind of bubble. Take, you know, COVID as a clear
case, right? It's like, there are the people who think that COVID, the disease, is no big deal, or,
you know, even a hoax. And those same people tend to think that the vaccines for COVID are
just the crime of the century and going to kill millions. And then you just flip those two toggles for the other half of our society. And it's,
is there a conversation to have between those two camps on some medium that could possibly converge
on a shared set of truth claims to which everyone would, you know, in the fullness of time,
give assent? There are half a
dozen other topics that come to mind that are equally polarizing. In the current environment,
I'm just not sure convergence is remotely possible.
Yeah. And to the extent this gets better, I don't really see a natural market solution.
There's a parallel with somebody
saying, oh my God, fancy restaurants and fast food places serve food that's extremely bad
for us. It's salty, it's fatty, it's high-calorie. And so why don't we just create restaurants
that serve much healthier food, with vegetables? Well, you could do that, but no one's going to go to them.
And similarly, if you could create a new social media site that does things better,
that discourages exaggeration and caricature, that brings together people with real expertise,
but Twitter is so much more fun. Yeah. Well, I do think there are some changes that I've banged on about a lot on previous podcasts, which I think
would make a huge difference. I don't know that it makes enough of a difference at this point,
but I do think the business model to which the internet got anchored is largely at fault, right?
So just the fact that we have to game people's attention algorithmically so as to maximize ad revenue.
That's the business model for so many of these sites. That's a problem. And I do think that if
people just subscribed to Twitter, and there were no ads, no anonymity, and
very clear terms of service, it could be much better than it is.
But again, it does suffer the analysis you just gave it,
which is the more you solve the problems I'm worried about,
in some measure, the more boring it might get, right?
There will be an eat your vegetables component to it.
But what we have now is just the privileging of misinformation and outrage by the algorithms. And it's, yeah.
There's another dimension of this, which has worried me in a different way,
which is so many of the algorithms are becoming, I don't know if the word is bespoke,
they're becoming geared for us. And for me, my example is I wake up in the middle of the night,
have a bad habit of checking my email.
And then I sometimes find myself on YouTube.
And more than once, an hour has gone by where it was lost time.
Because the YouTube algorithm knows what I like.
I like Key and Peele sketches.
I like certain movie trailers. I like this and that.
And I just lose time. And this is not a unique experience of mine. You know, I
forget his name, but the guy who ran Netflix said that our enemy isn't other streaming services,
it's sleep. You know, I feel that the world that's outside of our screens, which
involves the outside and other people, is at a serious disadvantage relative to the algorithm-driven feed that you get from Twitter or YouTube or a million other sources.
And you could choose your dystopia.
Some people now, I think, are thinking of a sort of a Skynet matrix dystopia of AI.
There's another dystopia where we're all just kind of blobs with our VR things perched in front
of our faces, just whittling away our lives. Yeah. Well, it's definitely worth stepping back
and taking stock because, I mean, just again, personally, as I said, I'm embarrassed at how
long it took me to recognize what Twitter had become in my life.
And it's really, you know, I was by no means the worst, you know, casualty of the platform that I can think of.
I mean, there are people who have much more of a Twitter problem than I ever had.
But it's, I mean, it's insane to say it, but something like 100% of the truly bad things that have happened in my life in the last 10 years have come from Twitter.
If I said 90%, I'm sure I'm underestimating it.
It's completely crazy just what a malign influence it has been on my life.
And it took me years to just get fed up with it because of, to some degree, what you just noticed with respect to the YouTube algorithm.
It's the steady drip of, titillating isn't quite the right word, but it's reinforcing
information of some kind, right? And yeah,
and the fact that you, you know, on Twitter, it can feel like an entirely wholesome way of satisfying your desire to be in touch with the world and to have your curiosity slaked. I mean,
for the longest time, it seems like it's that. But yeah, it's quite a change.
I'm wondering what you are most concerned about at this moment.
And then we're going to take a turn to your book. But what are you actually thinking about, whether in a professional
capacity or a personal one? What's worrying you these days?
What's top of mind as far as changes in our society that you're finding bewildering or disconcerting?
Yeah, I don't know where to begin. And some of it might be, you know, we're not getting any younger.
The common lament of the old is, oh my God, things have gone to hell; back in the good old days, you know.
And I think, maybe to balance the complaining we've been doing:
I mean, AI done right could be a godsend, could transform the world in such wonderful ways.
And as for social media, we have really, I think, done a fair job
of pointing out the bad side,
but it's rescued so many people from loneliness.
People have found communities, people have found love.
And putting aside the misinformation problem
and the addiction problem, we're social beings,
and some people are not situated
such that they can get their social satisfaction
with actual people in the real
world. So they do it online. And I think there's a satisfaction to be had there too. I mean,
to some extent, this speaks to both the positives and negatives of what we're talking about.
And it goes back to your comment about all the bad things happening to you happening over Twitter,
which is we are extremely social animals and our reputation is extremely important to us.
What people think of us.
I think only psychopaths say, I don't care what people think about me and mean it.
I mean, basically, having people say terrible things about you, lying about you,
is horrible.
And in some way, it's far more horrible than bodily pain or bodily damage. You know, you ask people, I
don't know, would you rather the whole world think of you as a child molester, or would you rather
lose an arm? I think people would vote for losing the arm. Yeah. And, you know, similarly,
the reputational boons, and connecting with people and so on, have this euphoric feeling for many people.
And it can be unhealthy and addictive, but I think when done properly, it can be a real plus of these algorithms.
It's interesting.
This could be the way it strikes many people, or this could just be my own personal idiosyncrasy, but the worst
thing about, you know, reputational maintenance and, you know, caring about what other people
think, the thing that really is my kryptonite is the misrepresentation of what my views actually
are. Like, maybe everyone cares about this to the degree I do, but I don't quite see it. So it's not just
people saying awful things about you. The truth is, if someone accurately characterizes who I am
or what I've done or what I think, and they hate me for it, that's fine, right? So let's say
I'm an atheist, right? And someone hates me because I'm an atheist. So a fundamentalist Christian
will say awful things about me because of my atheism. Okay, great. There's no problem with
that. And, you know, there are some outrageous views I might have, and if someone's accurately
characterizing them, and they think that, you know, holding that view totally discredits me as a person.
Okay.
Again, no problem with that.
But it's just the lying about, you know, what I think that gets under my skin in a way that is fairly life-deranging. And that's why, when I see this larger problem of misinformation at scale, where you just can't figure out what is true in this blizzard of purported facts, it really worries me that things can go completely off the rails.
off the rails. This is not unrelated to your tremendous dislike of Trump, which of course is shared by many people. But I think there's a certain feature of your dislike of Trump that connects to your
anger about the lies and the misinformation, which is Trump is notoriously, famously,
undeniably a bullshitter. He's not a liar. He doesn't care enough to lie. He has an utter
disinterest in the truth. Yeah, he'll just say whatever works for him, and if it's true,
it's true; if it's false, it's false. He doesn't care. And
it seems like he started a trend, where a lot of people, both for him
and against him, have a sort of ethos that,
well, it could be true. It's the sort of thing one would say. An epistemological crisis is a
fancy term, but it's genuinely frightening when people just stop caring about the truth. Because
you can't reason properly. You can't do politics properly, you can't do science properly, you can't do society properly. And I think that's
one of the major problems with the world we live in now.
Yeah, that's a distinction that you're referencing, courtesy of Harry Frankfurt,
the philosopher. He wrote this very short book, just really an essay, but it's a wonderful little book titled On Bullshit.
And we've discussed him before on the podcast, but to remind people that I think it really is
an important distinction. He makes the point in the book that the difference between a liar and
a bullshitter is that a liar has to be tracking what the truth is in order to insert his lie in a calculated way in the
space provided. And he's observing all of the norms of reasoning that his audience is relying
upon because he's, again, he's trying to lie in a way that is undetected and undetectable
by a logical human being. So he's not gratuitously contradicting himself.
He's trying to conserve the data as much as he can.
He is tracking truth and expectations of consistency
and every other epistemological norm
in order to do his nefarious work,
whereas the bullshitter is just talking and just creating a mood and
isn't spending any time trying to track what is true or even trying to avoid contradicting
what he said five minutes ago, because it's complete epistemological
anarchy, right?
There are no standards, no authorities, no hierarchies. There's no ground truth to be aware of. It's just a blizzard of opinion. And what was exposed is that something like half of our society simply doesn't care about torrents of bullshit
on the most important topics and the most trivial
being spread every hour of the day across the landscape
with no concern for truth in sight.
One way to put it is that liars respect the truth.
Liars might respect the truth more so
than somebody who is reflexively honest and never thinks about it. A liar works hard to orchestrate
their statements so that they appear true to people. He really works at it, says,
I've got to fool people. And a bullshitter just bullshits. You know, there's a part in my book, maybe it's the part of the book where I think I
disagree with most of my colleagues, where it's about rationality.
And here I'm going to defend, I'm not going to defend bullshit, but I'm going to defend
people who participate in it at some level. Sometimes people argue, well, those who believe or purport to believe conspiracy
theories and wild views and misinformation are somehow being irrational. But unfortunately,
it's not as simple as that, where what rationality is, I think, is using logic and probability and
empirical facts to achieve your goals. Now, if your goal is to
get things right, then we should be working to find the truth and appealing to science and working
on our logic. But often for many people, the goal is to get along. And if you're in a community,
and everybody there believes that, take an old example,
Barack Obama was born in Kenya and is not an American citizen,
has no legal right to be president. And that's what everybody here believes. Well,
there's not much truth to it. So if you care about truth, you're not going to believe it, but
you probably want to get along with the people around you. And so you're sort of in this dilemma where
the world as a whole would be better if everybody tried to get things right. But as individuals in society, following the common practices, believing what other people believe, is actually fairly rational.
Yeah, I mean, it's changing the nature of the game. I mean, we're equivocating on what
rational means in these two contexts. But yeah, I would agree that it's like a hierarchy of needs problem. You need not to be
exiled from your community or burned as a witch more than you need to have intellectual
integrity, at least in this moment. But for me, that's a statement of a kind of social pathology,
right? That's a community that is not as good as it might be.
It's certainly not as in touch with norms of error correction
that would keep it in touch with reality in an ongoing way.
And what you're describing has much more in common with religion
than it has in common with science or any other really rational
enterprise. I mean, assenting to certain pseudo-truths represents a kind of loyalty
test. I mean, any invidious comparison we're going to make between religion and politics and science,
on the other hand, is going to swing on these kinds of distinctions. I mean, just the difference between wishful
thinking and a host of other cognitive biases and being dispassionately guided by evidence
and argument. That's interesting. I appreciate the distinction. I think of it more, though,
as a continuum. So religion is one extreme where, you know, unless you publicly assent to the claims made of the one true God, they may kick you out of town or burn you at the stake.
Politics, you know, is close to religion in that regard, where, you know, if you're a member of a political party and you're campaigning and everything, you should believe certain things and you'll be punished if you're not.
You should believe certain things, and you'll be punished if you're not. But I think even something like science, science in sort of a pure sense has norms of rejecting authority and norms of skepticism throughout.
But day to day, if somebody is too skeptical about claims, they're going to get kicked out of the club. Yeah. Yeah. Well, that's something that I have struggled to
make sense of in public for audiences that seem deeply confused about what the norms are here.
And it's hard to, I mean, this is really the sense in which science is not a science, it's an art,
right? It's like, there is no way we can, at least I don't think there's a way we can make an algorithm of this process where we value authority and then we discount its relevance to any specific claim, right?
Yeah, that's right. In science, whenever you make a breakthrough, you're very often proving some prior consensus
wrong. And we know that, you know, a Nobel laureate in any discipline can be wrong and
doesn't need to be taken seriously if he or she is wrong. And, you know, a lowly graduate student
can be right. And the rightness or wrongness of any claim has absolutely
nothing to do with the CVs or the reputations of the people making those claims. And yet,
as a time-saving device, we routinely rely on authority and consensus because,
probabilistically, what 97% of chemists believe about the structure of a given substance is our best bet at understanding
what that substance is by the lights of chemistry. And that remains true until a flaw in the
consensus is discovered by some lone genius who then overturns it. So it's a specialization
problem and a time management problem. We can't help but
rely on authorities because most of the time it makes perfect sense to do that.
That's exactly right. A lot of cognitive neuroscientists could do excellent work,
but don't fully understand some of the statistics that they're using. Their collaborator may
understand it better. They may not fully understand the physics of the fMRI machine that they use.
And that's fine.
And the graduate student who says, I refuse to work on the study until I understand all
of this and could justify it for myself will have a short career.
You know, you got to defer.
Yeah.
I mean, there's just no way to be a true polymath at this point. Although, ironically, AI promises to make that increasingly possible if, in fact, we can outsource it to, you know, ChatGPT 25. If any graduate student at any point can say, all right, you know, explain this to me, and explain it again, and get it down to 100 words.
And, okay, like when you think of how quickly you would be able to drill down, you know,
to first principles on anything that interests you, and you can outsource the burden of having to remember all of that stuff to the AI, it's possible that we could, you know, have more of a
comprehensive ownership of the full set of facts that, you know, impinge upon any question. But still, there'll be some boundary there where you
are just accepting that, in this case, you're accepting that the AI is integrating all of the
authorities in a way that actually works.
And that brings us back to the limitations of current AI.
A little while ago, I wrote an article where I wanted to get some good quotes from psychologists,
or actually from scholars in general, who believed that the replication crisis showed psychology to
be deeply flawed. And so I asked GPT-3, and it came out with two amazing quotes, exactly
what I wanted, one from Gerd Gigerenzer, one from Nassim Taleb.
And I knew they sounded exactly in the style of those people.
And of course, neither of them existed.
It just plucked them out of thin air.
And these sorts of hallucinations are a problem.
I felt rather betrayed.
I felt lied to.
Get ready. It's going to get worse.
Yeah, that's right.
Okay, so when I asked you what we know about the human mind, you gave me several facets of
the answer to this sweeping question. One was evolution and its implications. Another was the brain as the evolved organ
that is producing everything we know as the mind in our case.
Another was the insight, often credited to Freud,
but there have been many variants of it,
that much of what goes on in us and as us that is mental is not actually conscious.
So there's this divide, this boundary line between consciousness and what, following Freud, we have learned to call the unconscious.
And that could be misleading in a variety of ways.
One of my favorite Wittgenstein quotes is how he is said to have responded to this notion of Freud's,
and he says, this is, I think, fairly close to verbatim, imagine the difficulties we would experience if we had a language that constrained us to say that when you see nobody in the room, you say,
Mr. Nobody was in the room, right? So it's just, it's the reification of absence, right? That's
the reification of nothing being there. In this case, we could be concerned that there's a
reification of the parts of ourselves that we don't experience, as though they were a storehouse of potentially conscious
mental states. And then there's just this, I guess, related issue of reductionism and emergence,
right? So the mind and anything, any part of it we would want to discuss, you know, take an emotion
or an act of cognition, is an emergent phenomenon which, when understood at the level
of its micro-physical constituents, seems, to some minds, to promise a smooth
reduction to more basic facts, which are the real facts.
But in other cases, that seems like a fool's errand. And even in the presence of perfect AI and infinite computing resources,
we're never going to be talking about human-scale experience
purely in terms of neurotransmitters and synaptic connections,
much less atoms and subatomic particles.
That's a deep point.
I think my first main chapter is on the brain, and I say,
the mind is the brain. I talk about that, talk about the history of that,
as best we understand how that works.
But I didn't spend the rest of the chapter sort of saying what a lot of people then think: wow, so the real science is neuroscience.
And in the end, we're not going to talk in terms of beliefs and desires and emotions at all.
It's all going to be, you know, serotonin.
If you'd like to continue listening to this conversation,
you'll need to subscribe at SamHarris.org.
Once you do, you'll get access to all full-length episodes of the Making Sense Podcast, along with other subscriber-only content, including bonus episodes
and AMAs and the conversations I've been having on the Waking Up app. The Making Sense Podcast
is ad-free and relies entirely on listener support, and you can subscribe now at SamHarris.org.