Making Sense with Sam Harris - #218 — Welcome to the Cult Factory
Episode Date: September 24, 2020
In this episode of the podcast, Sam Harris speaks with Tristan Harris about the ways in which social media is fracturing society. They discuss the rise in teen depression and suicide, political polarization, conspiracy theories, information warfare, the decoupling of power and responsibility, the distinctions between platforms and publishers, the cancellation of Alex Jones, social media-inspired ethnic cleansing, concerns about the upcoming presidential election, culture as an operating system, and other topics. SUBSCRIBE to continue listening and gain access to all content on samharris.org/subscribe
Transcript
Welcome to the Making Sense Podcast.
This is Sam Harris.
Just a note to say that if you're hearing this, you are not currently on our subscriber
feed and will only be hearing partial episodes of the podcast.
If you'd like access to full episodes, you'll need to subscribe at samharris.org.
There you'll find our private RSS feed to add to your
favorite podcatcher, along with other subscriber-only content. And as always, I never want money to be
the reason why someone can't listen to the podcast. So if you can't afford a subscription,
there's an option at SamHarris.org to request a free account,
and we grant 100% of those requests. No questions asked.
Welcome to the Making Sense Podcast. This is Sam Harris. Okay, very brief housekeeping today.
Just a couple of announcements. First, I will be doing another Zoom call for subscribers,
and that will be on October 7th. I'm not sure if that's going to be an open-ended Q&A or whether the questions will be focused on a theme. I'll
decide that in the next few days. But anyway, the last one was fun, and hopefully the fun will
continue. So I will see you on October 7th, and you should be on my mailing list if you want those
details. Also, there are a few exciting changes happening over on the Waking Up side of things,
so pay attention over there if you're an app user. And I think that's it. Okay.
Well, today I'm speaking with Tristan Harris. Tristan has been on the podcast before,
and he is one of the central figures in a new documentary, which is available on Netflix now,
and that film is The Social Dilemma, which discusses the growing problem of social media and the fracturing of society,
which is our theme today.
So, as you'll hear, I highly recommend that you watch this film,
but I think you'll also get a lot from this conversation.
I mean, if you're looking out at the world and wondering why things seem so crazy out there,
social media is very likely the reason, or it's the reason that is aggregating so many other reasons. It's the reason why we
can't converge on a shared understanding of what's happening so much of the time. We can't
agree about whether specific events attest to an epidemic of racism in our society,
or whether these events are caused by some other derangement in our thinking, or just bad incentives,
or bad luck. We can't agree about what's actually happening. And amazingly, we are about to hold a presidential election that it seems our democracy
might not even survive. Really, it seems valid to worry whether we might be tipped into chaos
by merely holding a presidential election. It's fairly amazing that we are in this spot, and social media
is largely the reason. It's not entirely the reason. A lot of this falls on Trump,
some of it falls on the far left, but the fact that we can't stay sane as a society right now, that is largely due to the fact that we are
simply drowning in misinformation. Anyway, that is the topic of today's conversation.
And I was very happy to get Tristan back on the podcast. Apologies for the uneven sound. Pre-COVID, we were bringing everyone
into studios where they could be professionally recorded. Now we're shipping people Zoom devices
and microphones, but occasionally the technology fails and we have to rely on the Skype signal. So what you're hearing today is Skype.
It's actually pretty good for Skype, but apologies if any of the audio sounds subpar.
And now I bring you Tristan Harris.
I am here with Tristan Harris. Tristan, it's great to get you back on the podcast.
It's really good to be back, Sam. It's been a while since the first time I was on here.
Yeah. We will cover similar ground, but a lot has happened since we last spoke. And to my eye,
everything has gotten worse. So there's more damage to analyze and try to prevent in the future. But before we jump in,
remind people who you are and how you come at these things. What's your brief bio that's
relevant to this conversation? Yeah, well, just to say briefly, I guess one of the reasons why
we're talking now and most relevant to my recent biography is the new Netflix documentary that just came out
called The Social Dilemma, in which all these technology insiders are speaking about
the Frankenstein that they've created. We'll get into that later. Prior to that, I was a
Google design ethicist coming in through an acquisition of a technology company that I'd
started called Apture that Google acquired. And after being at the company for a little while,
migrated into a role of thinking about how do you ethically steer 2 billion people's attention when you hold the
collective human psyche in your hands. And then prior to that, as is also discussed in the film,
I was at Stanford and studied computer science, human-computer interaction, but specifically at
a lab called the Persuasive Technology Lab, which I'm sure we'll get into, which relates to just
sort of a lifelong interest in how the human mind is vulnerable to psychological influence. I've had
a fascination with those topics, from cults to sleight-of-hand magic to mentalism and heroes
like Derren Brown, who's a mutual friend of ours, and how that plays into the things that we're
seeing with technology. Yeah. So I just want to reiterate that this film, The Social Dilemma, is on Netflix now. And yeah, that's the proximate cause of
this conversation. And it really is. It's great. It really covers the issue in a compelling way.
So I highly recommend people go see that. They don't have to go anywhere, obviously,
just open Netflix. And there's no irony there. I would count Netflix as, I'm sure they're an offender in some way,
but their business model really is distinct from much of what we're going to talk about.
They're clearly gaming people's attention
because they want to reduce churn and they want people on the platform
and deriving as much value from the
platform as possible. But there is something different going on over there with respect to
not being part of the ad economy and the attention economy in quite the same way.
That's a distinction we could draw later on. But is there a bright line between
proper subscription services like that and what we're going to talk about?
Yeah, I mean, I think the core question we're here to talk about is in which ways and where
are technology's incentives aligned with the public good? And I think the problem that brings
us here today is where technology's incentives are misaligned with the public good through the
business model of advertising and through models like user-generated content. Clearly, because we live in a finite attention economy where there's only so much
human attention, we are managing a commons, a collective environment. And because Netflix,
like any other actor, including politicians, including conferences, including you or I,
or this podcast, or my podcast, we're all competing for the same finite resource.
And so there's a difference, I think, in how different business models engage in an attention economy, where the things being produced reach exponential numbers of people, exponential broadcast in the case of Netflix, but also in the case of these other companies.
There's a difference when there's a sense of ethics or responsibility or privacy or child-safety controls that we add into that equation. And I'm sure we'll get
more into those topics. Right. Okay. So let's take it from the top here. What's wrong with social
media at this point? If you could boil it down to the elevator pitch answer. What is the problem that we're going to unspool over
the next hour or so? Well, it's funny because the film actually opens with that prompt,
the blank stares of many technology insiders, including myself, because I think it's so hard
to define exactly what this problem is. There's clearly a problem of incentives, but beneath that,
there's a problem of what those
incentives are doing and where the exact harms show up. And the way that we frame it in the film
and in a big presentation we gave at the SF Jazz Center back in April 2019 to a bunch of the top
technologists and people in the industry was to say that while we've all been looking out for the moment when AI would overwhelm human strengths, when we'd get the singularity, when AI would take our jobs, when it would be smarter than humans, we missed this much, much earlier point when technology didn't overwhelm human strengths, but it undermined human weaknesses. And you can actually frame the cacophony of grievances and scandals and
problems that we've seen in the tech industry, from distraction to addiction to polarization
to bullying to harassment to the breakdown of truth,
all in terms of progressively hacking more and more human vulnerabilities and weaknesses.
So if we take it from the top,
our brain's short-term memory system can hold seven plus or minus two things.
When technology starts to overwhelm
our short-term and working memory, we feel that as a problem called distraction. Oh my gosh, I can't remember
what I was doing. I came here to open an email. I came here to go to Facebook to look something up,
but now I got sucked down into something else. That's a problem of overwhelming the human limit
and weakness of just our working memory. When it overwhelms our dopamine systems and our reward
systems, we feel that as a problem called addiction. When it taps into and exploits our reliance on stopping cues: at some point, I will stop talking, and that's a cue for you to start. When technology doesn't stop talking and just gives you the infinite bottomless bowl, we feel that as a problem called addiction or addictive use. When technology exploits our need for social approval, giving us more and more of it, we feel that as a problem called teen depression, because suddenly children are dosed with social
approval every few minutes and are hungry for more likes and comparing themselves in terms of
the currency of likes. And when technology hacks the limits of our heuristics for determining what is true, for example, that Twitter profile that just commented on your tweet five seconds ago: the photo looked pretty real, they've got a bio that seems pretty real, they've got 10,000 followers.
We only have a few cues that we can use to discern what is real.
And bots and deepfakes, and I'm sure we'll get into GPT-3, actually overwhelm that human weakness.
So we don't even know what's true.
So I think the main thing that we really want people to get is that, through a series of misaligned incentives, which we'll further get into, technology has overwhelmed and undermined human weaknesses.
And many of the problems that we're seeing as separate are actually the same.
And just one more thing on this analogy.
It's kind of like, you know, collectively, this digital fallout of addiction, teen depression, suicides, polarization, breakdown of truth.
We think of this as a collective digital fallout, a kind of climate change of culture. Much like the extractive oil economy, we have been living in an extractive race for attention, and there's only so much. When it starts running out, we have to start fracking your attention by splitting it into multiple streams.
I want you watching an iPad and a phone
and the television at the same time
because that lets me triple the size
of the attention economy. But that extractive race for attention creates this global climate
change of culture. And much like climate change, it happens slowly, it happens gradually,
it happens chronically. It's not this sudden immediate threat. It's this slow erosion of the
social fabric. Collectively, in that presentation we called it human downgrading, but you can call it whatever you want. The point is that if you think back to the climate change movement, before there was a cohesive understanding linking emissions to climate change, we had some people
working on polar bears, some people working on the coral reefs, we had some people working on
species loss in the Amazon. And it wasn't until we had an encompassing view of how all these
problems get worse that we started to get change. And so we're really hoping that this film can act as a kind of catalyst for a global response to this really destructive thing that's happened to society.
Well, let me push back on a few of the things you've already put into play, because you and I are going to impressively agree throughout this conversation on the nature of the problem. But I'm channeling a skeptic here, and it's actually
not that hard for me to empathize with a skeptic, because as you point out, it really takes a fair
amount of work to pry the scales from people's eyes on this point. And the nature of the problem,
though it really is everywhere to be seen, it's surprisingly elusive, right? So if you reference something like a spike in teen depression and self-harm and suicide,
there's no one who's going to pretend not to care about that. And then it really is just the
question of what's the causality here?
And is it really a matter of exposure to social media that is driving it?
And I don't think people are especially skeptical of that.
And that's a discrete problem that I think most people would easily understand and be
concerned about.
But the more general problem for all of us is harder to keep in view.
And so when you talk about things, again, these are things you've already conceded in a way.
So attention has been a finite resource always.
And everyone has always been competing for it.
So if you're going to publish a book, you are part of this race for people's attention.
If you were going to release something on the radio or television, it was always a matter of trying to grab people's attention. And as you say,
we're trying to do it right now with this podcast. So when considered through that lens,
it's hard to see what is fundamentally new here, right? So yes, this is zero-sum, and then the question, people want to say, is just: is it good content or not? This is just a matter of interfacing in some way with human
desire and human curiosity. And you're either doing that successfully or not. And what's so
bad about really succeeding, just fundamentally succeeding in a way that,
yeah, I mean, you can call it addiction, but really it's just what people find captivating.
It's what people want to do. They want to grant their attention to the next video that is absolutely enthralling. But how is that different from, you know, leafing through the
pages of, you know, a hard copy of Vanity Fair in the year 1987 and feeling that you really want to read the next
article rather than work or do whatever else you thought you were going to do with your afternoon?
So there's that. And then there's the fact that advertising is involved and really is
the foundation of everything we're going to talk about.
What's so bad about that? So really, it's a story of ads just getting better.
You know, I don't have to see ads for Tampax anymore, right?
I go online and I see ads for things that I probably want or nearly want
because I abandoned them in my Zappos shopping cart, right?
So what's wrong with that?
And I think most people are stuck in that place.
Like, we just have to do a lot of work to bring them to the place in the conversation
where the emergency becomes salient.
And so let's start there.
Gosh, there's so much good stuff to unpack here.
So on the attention economy, obviously, we've always had it. We've had television competing for attention, radio, and we've had evolutions of the attention economy before: competition between books, competition between newspapers, competition between television and more engaging television and more channels of television. So in many ways, this isn't new. But I think what we really need to look at is what is mediating where that attention goes. Mediating is a big word.
Smartphones, we check our smartphones, you know, a hundred times or something like that per day.
They are intimately woven into the fabric of our daily lives, and ever more so because of this addictive checking that we have, where in any moment of anxiety, we turn to our phone to look at it. So it's intimately woven into where
the attention starting place will come from.
It's also taken over our fundamental infrastructure
for our basic verbs.
If I want to talk to you or talk to someone else,
my phone has become the primary vehicle
for many, many verbs in my life,
whether it's ordering food or speaking to someone
or figuring out where to go on a map.
We are increasingly reliant on this
central node of our smartphone to be a router for where all of our attention goes. So that's the
first part of this intimately woven nature and the fact that it's part of the social infrastructure
on which we rely. We can't avoid it. And part of what makes technology today inhumane is that
we're reliant on infrastructure that's unsafe or contaminated, for many reasons that we'll get into later.
A second reason that's different is the degree of asymmetry between, let's say, that newspaper editor or journalist who is writing that enticing article to get you to turn to the next page, versus the level of asymmetry when you watch a YouTube video and you think, yeah, this time I'm just going to watch one video and then I've got to go back to work. And you wake up from a trance, you know, two hours later and you say,
man, what happened to me? I should have had more self-control. What that misses is there's literally
the Google, you know, Google's billions of dollars of supercomputing infrastructure on the other side
of that slab of glass in your hand, pointed at your brain, doing predictive analytics on what
would be the perfect next video to keep
you here. And the same is true on Facebook. You think, okay, I've sort of been scrolling through
this thing for a while, but I'm just going to swipe up one more time and then I'm done.
Each time you swipe up with your finger, you're activating a Twitter or a Facebook or a TikTok
supercomputer that's doing predictive analytics, which has billions of data points on exactly the
thing that'll keep you here. And I think it's important to expand this metaphor in a way that you've talked about on your show before, about just the increasing power and computational power of AI.
When you think about a supercomputer pointed at your brain, trying to figure out what's the
perfect next thing to show you, that's on one side of the screen. On the other side of the screen is
my prefrontal cortex, which evolved millions of years ago and is doing the best job it can at goal articulation, goal retention and memory and
sort of staying on task, self-discipline, et cetera.
So who's going to win in that battle?
Well, a good metaphor for this is let's say you or I were to play Garry Kasparov at chess.
Like, why would you or I lose?
It's because, you know, there I am on the chessboard and I'm thinking, okay, if I do this, he'll do this. But if I do this, he'll do this. And I'm playing out a few moves ahead on the chessboard. But when Garry looks at that same chessboard, he's playing out a million more moves ahead than I can. And that's why Garry's going to win and beat you and me every single time. But when Garry, the human, is playing chess against the best supercomputer in the world, no matter how many million moves ahead Garry can see, the supercomputer can see billions of moves ahead. And when it beats Garry, who is the best human chess player of all time, it has beaten the human brain at chess, because his was kind of the best one that we had.
So that's the kind of asymmetry that we now have. When you're sitting there innocuously saying, okay, I'm just going to watch one video and then I'm out, we have to recognize that we have an exponential degree
of asymmetry, and they know us and our weaknesses better than we know ourselves, to borrow also from
a mutual friend, Yuval Harari. So I guess I still think the nature of the problem will seem
debatable even at this point. Because again, you're talking about
successfully gaining attention,
making various forms of content
more captivating, stickier.
People are losing time, perhaps,
that they didn't know they were going to give over
to their devices.
But they were doing that with their televisions anyway. I mean, these statistics, long before we had smartphones, these statistics on
watching television were appalling. I forget what they were. There was something like, you know,
the average television was on seven hours a day in the home. You know, so that the picture was
of people in a kind of Aldous Huxley-like dystopia just plugged in to the boob tube
and being fed bad commercials and therefore being monetized in some way that strikes people
as not fundamentally different from what's happening now.
I mean, yes, there was less to choose from.
There were three different types of laundry detergent, and it was not a matter of
a really fine-grained manipulation of people's behavior, but it was still, if you wanted,
from the perspective of what seems optimal, it still had a character of propagandizing people
with certain messages that seem less than optimal. I'm sure you could talk
about teens or just people in general having body dysmorphia around ideal presentations of
human beauty that were unrealistic, whether Photoshop was involved at that point or not.
It was just good lighting and good makeup and, you know, selection effects that make people feel obliged to aspire to irrational standards of beauty.
All of these problems that we tend to reference in a conversation like this seemed present. I think
the thing that strikes me as fundamentally new, and this is brought out in the film by several people, relates to the
issue of misinformation and the siloing of information, which really does strike me as
genuinely new. And there are a few analogies here that I find especially arresting. One thing that Jaron Lanier said, he says it in the film,
and he said it on this podcast a year or so ago,
which I think frames it really well,
is just imagine if Wikipedia
would present you with information
in a way that was completely dependent
on your search history,
all the data on you that had been collected
that's
showing your biases and your preferences and the ways in which your attention can be gamed,
so that when each of us went to Wikipedia, not only was there no guarantee that we'd
be seeing precisely the same facts, rather there was a guarantee that we wouldn't be,
right?
We're in this sort of shattered epistemology now. And we built this
machine. So the very machinery we're using to deliver information, what is almost the only source of information for most people now, is a machine that is designed to
partially inform people, misinform people, spread conspiracy theories and lies faster than
facts, spread outrage faster than disinterested, nuanced analysis of stories. So it's like we have
designed an apparatus whose purpose is to fragment our worldview and to make it impossible for us to fuse our cognitive horizon
so that if you and I start out in a different place, we can never converge in the middle of
the psychological experiment. And that's the thing that strikes me for which there is no
analog in all previous moments of culture. Yeah, that's 100% right. And I mean, if we
jump to the chase about what is most
concerning, it is the breakdown of a shared reality and the breakdown, therefore, of our
capacity to have conversations. And, you know, you've said it: if we don't have conversation, we have violence. And when you shatter the epistemic basis of how we know what we know, each of us has been living literally in a different reality, a different Truman Show, as Roger McNamee would say, for the last 10 years.
And we have to keep in mind, we're about 10 years into this radicalization, polarization
process where each of us has been fed really a more extreme view of reality for quite a long time. What I really want people to do isn't just to ask these small questions, like is technology addictive.
It's really to rewind the tape and to ask, you know, how has my mind been fundamentally warped? And so just to go back to the points you
made a second ago, you know, so what, YouTube is giving us information. Well, first, on that chess match I mentioned, of are we going to win, are they going to win: 70% of the billion hours a day that people spend on YouTube is actually driven by the recommendation system. So what is the recommendation system choosing for us? Just imagine a TV channel where you're not choosing
70% of the time. Then the question becomes, as you said, well, what is the default programming
of that channel? Is it Walter Cronkite and some kind of semi-reliable communal sense-making,
as our friend Eric would say? Or is it actually giving us more and more extreme views of reality?
So, three examples of this. Several years ago, if you were a teenager and looked at a diet video on YouTube, several of the videos on the right-hand side would be thinspo anorexia videos, because those things were better at keeping people's attention. If you looked at 9/11 videos, it would give you Alex Jones InfoWars 9/11 conspiracy theories.
YouTube recommended Alex Jones conspiracy theories
15 billion times in the right-hand sidebar, which is more than the traffic of the New York Times, Fox News, MSNBC, the Guardian, et cetera, combined. So the scale of what has actually
transpired here is so enormous that I think it's really hard for people to get their head around
because also each of us only see our own Truman Show. So the fact that I'm saying these stats, you might say, well, I've
never seen a dieting video or anorexia video, or someone else might say, I've never seen those
conspiracy theories. It's because it fed you some different rabbit hole. You know, Guillaume Chaslot, who's the YouTube recommendations engineer in the film, talks in an interview we did with him on our podcast about how the algorithm found out that he liked seeing these videos of plane landings. And it's this weird, addictive corner of YouTube where people like to
see plane landings or the example of flat earth conspiracy theories, which were recommended
hundreds of millions of times. And, you know, because we've been doing this work, Sam, for
such a long time, and I've talked to so many people, you know, I hear from teachers and
parents who say, you know, suddenly all these kids are coming into my classroom and they're
saying the Holocaust didn't happen, or they're saying the earth is flat. And it's like, where are they
getting these ideas, especially in a time of coronavirus where parents are forced to sit
their kids in front of the new television, the new digital pacifier, which is really just YouTube.
You know, they're basically at the whims of whatever that automated system is showing them.
And of course, the reason economically why this happened is because the only way that you can broadcast to 3 billion people in every language is you don't pay any human editors, right?
You take out all of those expensive people who sat at the New York Times or Washington Post editorial department or PBS editorial department saying what's good for kids in terms of Saturday morning or Sesame Street.
And you say, let's have a machine decide what's good for people. And the machine cannot know the difference between what we'll watch versus what we actually
really want.
And the easiest example there is, if I'm driving down the 5 freeway in LA and my eyes go off to the side and I see a car crash, and everybody's eyes go to the side and they look at the car crash, then according to YouTube, the world must really want car crashes.
And the next thing you know, there's a self-reinforcing feedback loop of they're
feeding us more car crashes. And we keep looking at the car crashes. They feed us more and more.
That's exactly what's happened over the last 10 years with conspiracy theories.
And one of the best predictors of whether you will believe in a new conspiracy
is whether you already believe in one. And YouTube and Facebook have never made that easier
than to sort of open the doorways into a more paranoid style of thinking.
And just one last thing before handing it back is, you know, I think this is not to vilify all conspiracy thinking.
You know, some conspiracies are real or some notions of, you know, what Epstein did with, you know, running a child sex ring is all real.
So but we need a more nuanced way to see this, because when you're put into a surround sound rabbit hole where everything is a conspiracy theory, everything that's ever happened over
the last 50 years is part of some master plan. And there's actually this secret cabal that
controls everything. And Bill Gates and 5G and coronavirus, this is where the thing goes off
the rails. And I think this really became apparent to people once they were stuck at home, where
you're not actually going out into the world, you're not talking to as many neighbors. And so the primary meaning-making and sense-making system
that we are using to navigate reality are these social media products. And I think that has
exacerbated the kind of craziness we've seen over the last six months.
Yeah, well, you're really talking about the formation of cults. And I know you've thought a lot about cults.
And what we have here is a kind of cult factory or a cult industrial complex
that we have built inadvertently.
And again, the inadvertence is really interesting
because it relates directly to the business model.
It's because we have decided that the only way to pay
for the internet or the primary way to pay for the internet is with ads. And when we'll get into
the mechanics of this, that is the thing that has dictated everything else we're talking about.
And it really is incredible to think about because we have created a system where,
indisputably, some of the smartest people on earth, I mean, this is really where some of our brightest minds are using the most
powerful technology we've ever built, not to cure cancer or mitigate climate change or respond to a
very real and pressing problem like an emerging pandemic,
they're spending their time trying to get better at gaming human attention more effectively to sell
random products and even random conspiracy theories, right? In fact, they're doing all of
this not merely in a mode of failing to address other real problems,
like mitigating climate change or responding to a pandemic.
The consequences of what they're doing are making it harder to respond to those real problems.
Climate change and pandemics are now impossible to talk about
as a result of what's happening on social media.
And this is a direct result of how social media is being paid
for, or how it has decided to make money. And as you say, it's making it impossible
for us to understand one another because people are not seeing the same things. I mean,
on a daily basis, I have this experience of looking at
people out in the world, you know, on my own social media feed, or just reading news accounts
of what somebody is into. I mean, let's say somebody is into QAnon, right? And this cult is
not too strong a word, this cult of indeterminate size, but massively well-subscribed at this point,
of people who believe that not only
is child sexual abuse a real problem out there in the world, as more or less everyone believes,
but they believe that there are uncountable numbers of high-profile, well-connected people,
you know, from the Clintons on down, who are part of a cannibalistic cult of child sexual slavery,
you know, where they extract the bodily essences of children so as to prolong their lives, right? I mean, it's just, it's as crazy as crazy
gets. And so when I, as someone who's outside this information stream, view this behavior,
people look, frankly, insane to me, right? And some of these people have to be crazy, right?
This has to be acting like a bug light for crazy people,
at least of some sort.
But most of the people are presumably normal people
who are just drinking from a fire hose of misinformation
and just different information from the information I'm seeing.
And so their behavior is actually inexplicable to me.
And there's so many versions of this now. I don't think it's too much to say that we're
driving ourselves crazy. We're creating a culture that is not compatible with basic sanity. I mean,
we're amplifying incommensurable delusions everywhere all at once. And we've created a system where
true information, real facts and valid skeptical analysis of what's going on isn't up to the task
of dampening down the spread of lies. And maybe there's some other variable here that accounts for it, but it's
amazing to me how much of this is born simply of the choice of a business model.
Well, I think this is, to me, the most important aspect of what the film hopefully will do is
right now we're living in the shattered prism of a shared reality where we're each trapped in a separate shard.
And like you said, when you look over at someone else and say, how can they believe those crazy
things? How can they be so stupid? Aren't they seeing the same information that I'm seeing?
And the answer is they're not seeing the same information that you're seeing. They've been
living literally in a completely different feed of information than you have. And that's actually
one of the other, I think, psychological, not so much vulnerabilities, but we did not evolve to assume that every person you
would see physically around you would, inside of their own mind, be actually living in a completely
different virtual reality than the one that you live in. So nothing from an evolutionary perspective
would enable us to have empathy with the fact that each of us has our own little virtual reality in our own minds, and that each of them could be so dramatically, not just a little bit, but so
dramatically different. Because another aspect you mentioned when you brought up cults at the
beginning of what you said was the power of groupthink and the power of an echo chamber,
where many of the things that are going on in conspiracy theory groups on Facebook,
I mean, the pandemic video spread actually through a massive network of QAnon groups. There's actually
been a capturing of the new spirituality and sort of psychedelics-type community into the QAnon world, interestingly, which are now... Great, that's what these people need, acid.
Yeah. That doesn't sound like a good addition to an already mad world. But I think if we zoom out, it's like the question is, who's in control of human history
right now?
Are human beings authoring our own choices?
Or has the fact that we've ceded the information that feeds into 3 billion people's brains meant that we have actually ceded control to machines, because the machines control the
information that all 3 billion of us are getting.
It's become the primary way
that we make sense of the world. And to jump ahead and mind read some of the skeptics out there,
some people saying, well, hold on a second, weren't there filter bubbles and narrow partisan
echo chambers with Fox News and MSNBC and people sticking with those channels? Yes, that's true.
But I would ask people the question, where are the editorial departments of those television
channels getting their news from? Well, they're just living on Twitter. And Twitter's algorithms are recommending, again,
that same partisan echo chamber back to you. If you follow, as you had Renee DiResta on your
podcast, who's a dear friend and amazing colleague, talking about how radicalization spreads on social
media. And she worked back in the State Department in 2015, where they noticed that if you followed
one ISIS terrorist on
Twitter, the suggested user system would say, oh, there's suggested people you might want to follow.
And it gives you 10 more suggested ISIS terrorists for you to follow. Likewise, if you were a new mom,
as she was several years ago, and you joined some new mom groups, specifically groups for
making your own baby food, kind of a do-it-yourself organic moms movement, well, Facebook's algorithm said, hold on, what are other suggested groups we might
show for you that tend to correlate with users in this mom group that keeps people really engaged?
And one of the top recommendations was the anti-vaccine conspiracy theory groups.
And when you join one of those, it says, well, those groups tend to be also in these QAnon
groups and the chemtrails groups and the flat earth groups. And so you see very quickly how these tiny little changes add up. As Jaron says in the beginning of the film, a business model of just changing your beliefs and identity by 1% can change the entire world. 1% is a lot. It's like climate change, quite literally, right?
Where you only have to change the temperature a tiny bit and change the basis of what people
are believing. And it changes the rest of reality.
Because as you know, from confirmation bias, when you have a hammer, everything looks like a nail
and technology is laying the foundation of hammers that are looking for specific kinds of nails.
Once you see the world through a paranoid, conspiratorial lens, you're looking for evidence
that confirms that belief. And that's happening on all sides. It's really a thing that's happened
to all of us. This is why my biggest hope really lies in the global impact of the film, and this is not a marketing push, it's really a social impact push. I genuinely am concerned that there may be no other way to put Humpty Dumpty back together again than to show the world what we have created: that we need a new shared reality about the breakdown of our shared reality. There are many aspects to the ad model, and I think it doesn't take much work to convince people, as I hope we have begun to here, that the shattering of shared reality is a problem.
It's at minimum a political problem. I mean, whether it's a social problem for you,
out in the world or in your primary relationships, to see the kind of hyper-partisanship we see now
and the just inability to converge on an account of basic facts that could mitigate that partisanship,
I think people feel that that is a kind of assault on democracy.
And then when you add the piece that bad actors like the Russians or the Chinese or anyone can
decide to deliberately game that system, I mean, just the knowledge that Russia is actively
spreading Black Lives Matter information and pseudo-information so as to heighten the anguish and polarization
on that topic in America.
I mean, just the fact that we've built the tools by which they can do that, and they
can do it surreptitiously, right?
We don't see who's seeing these ads, right?
You don't see the 50,000 people who were targeted in a specific state for a specific reason.
That is new and sinister, and I think
people can understand that. But when we're talking about the problem with sharing information or
using our information in these ways, and I think we should get clear about what's happening here,
because this is a distinction several people make in the film. It's not that these platforms sell our data,
right? They don't really sell our data. They gather the data, they analyze the data,
and what they sell are more and more accurate predictions of our behavior to advertisers.
Right. And as that gets more refined, you really have something as close as we've ever come to advertising being a kind of sure thing, right? Where it really works.
And even there, I think most people won't necessarily care about that, because if you tell them, listen, with the thing you really thought you wanted and went out and bought, you were played by the company.
The company placed an ad with Facebook and Facebook delivered it to you because you were
the perfect target of that ad.
I think the person can, at the end of the day, own all of that process and say, and
just subsume it with their satisfaction at having bought the thing they
now actually want, right? So yeah, I actually, but I wanted a new Prius, right? I mean, it was time.
I needed a new car, right? Whether it's confabulatory or not, there's some way in
which they don't necessarily feel violated. And I think people think they care about privacy, but we don't
really seem to care about privacy all that much. I mean, we care about convenience and we care
about money. I mean, at bottom, nobody wants to pay for these things. No one wants to pay for
Facebook. They don't want to pay for Twitter. They don't want to pay for most of what happens
on the internet. And they're happy to be enrolled in this psychological experiment
so that they don't have to pay for anything. And the dysfunction of all of that is what we're
trying to get across here. But I'm always amazed that when you focus on one part of it, other parts of this monstrosity begin to disappear. It's very hard to keep what is wrong with this in view every moment, all at once. And so maybe for the moment, let's just focus on information and privacy and the ad model, and just how we should think about it.
Well, when we talk about the advertising model, you know, people tend to think about the good
faith uses, like you're talking about, you know, a Prius or a pair of shoes. But what that misses is the geopolitical World War III information warfare that's happening right now.
Because, you know, a line I say often is, you know, while we've been obsessed with protecting our physical borders as a country, we've left the digital border wide open.
I mean, if Russia or China tried to fly a cruise missile or a bomber plane into the United States, they'd be blasted out of the sky by the Pentagon. But when they try to fly an information bomb into the United States in our
virtual infrastructure of Facebook, they're met by a white glove that says, yes, exactly which
zip code and which African-American sub-district would you like to target? And that is the core
problem. We are completely unprotected when it comes to the virtual infrastructure. So if you go to the roads and the air and the telephone lines that we use here in this country, they're completely
air-gapped from Russia or China. But when most of the activity happening in our country happens in
a virtual digital online environment, as Marc Andreessen says, software is eating the world,
meaning software and the digital world are consuming more and more of the physical world and the physical ways that we used to get around and the physical conversations we used to have.
That digital environment is basically the big five tech companies.
It's all happening through the landscape of YouTube, TikTok, Facebook, et cetera.
And how does an empire fall?
You use the power of an empire against itself.
After World War II, we had all
these nukes and the big powers couldn't do conventional wars with each other. So they had
to use subtler methods: plausible deniability, proxy wars, waging economic warfare,
diplomatic warfare. But if you're Russia or Iran or Turkey, and you don't want to see the US in a
position of global dominance, would you do a forward-facing attack on the country with all the nukes?
Obviously not.
But would you take the already existing tensions
of that country and turn the enemy against himself?
That's what Sun Tzu would say to do.
That's what Chinese military strategy would say to do.
And Facebook just makes that a trillion times easier.
So if I was China, I would want extreme right
and extreme left groups to proliferate
and fight each other.
And we know that this is basically happening and this has been stoking up groups on all sides.
You know, I can go into your country and create an army of bots that are indistinguishable from regular people.
If I'm China, I'm running TikTok and I can, you know, manipulate the political discourse in your country with the fact that I have 300 million Americans, you know, on my service.
It might even be bigger than that, if I'm remembering correctly.
So I think, you know, the advertising model isn't just about enabling these good-faith uses. I
think people have to recognize the amount of manipulated and deceptive activities that are
almost, like you said, untraceable. I mean, the fact that I'm saying all this to you and
the listeners out there would sound like a conspiracy theory until you know the researchers
who are tracking these things. Because if you're just looking at your own feed: I'm living in California, I'm not actually part of a targeted group, so I don't really see these things, and anybody who is targeted is invisible to me. So again,
our psychological vulnerabilities here, technology is not allowing us to empathize
with people who are closest to being harmed by these systems.
Yeah. Okay. So I think people can get the central fear here, which is
that it seems at best difficult, more likely impossible, to run a healthy democracy on
bad information. I mean, even if we can do it for a few years, we probably can't do it for a century. Something has to change here. We can't be feeding everyone lies or half-truths, different lies and different half-truths, all at once,
24 hours a day, year after year, and hope to have a healthy society, right? So that's a discernible piece of this problem that I think virtually everyone will
understand. And then when you add the kind of the emotional valence of all these lies and half-truths,
people get that there's a problem amplifying outrage, right? I mean, the fact that the thing that is most captivating to us is the feeling of
in-group outrage pointed outward toward the out-group for whom we have contempt growing
into hatred. That's the place we are so much of the time on social media. That runs the gears of
this machinery faster than any other emotion.
And if that changes tomorrow, if it turns out that sheer terror is better than outrage,
well, then the algorithm will find that and it'll be amplifying terror.
But the thing that you have to be sure of is that it's contained in the very word. A dispassionate take on current events
is never going to be the thing that gets this machinery running hottest. And so I think people
can get that. But when we talk about possible remedies for this problem, then I really think
it's hard to see a path forward. So, I mean, there are a few ways to come at this,
but one is the distinction between...
If you'd like to continue listening to this podcast,
you'll need to subscribe at samharris.org.
You'll get access to all full-length episodes
of the Making Sense podcast
and to other subscriber-only content,
including bonus episodes and AMAs
and the conversations I've been having on the
Waking Up app. The Making Sense podcast is ad free and relies entirely on listener support.
And you can subscribe now at SamHarris.org.