Big Technology Podcast - Netflix's 'Social Dilemma' Star Tristan Harris on the Film and Its Criticisms
Episode Date: October 7, 2020
You won’t find a more controversial film in Silicon Valley than The Social Dilemma. The film, now available on Netflix, features confessions from early consumer internet employees who rue the destruction their inventions have wrought. To address the film and its critiques, Tristan Harris, its star and the co-founder of the Center for Humane Technology, sat down for an interview on the Big Technology Podcast with no questions off limits.
Transcript
How are you?
I'm all right.
How are you doing?
I'm good.
I appreciate you doing this.
I feel like I'll probably lose some credibility because there were parts of the movie that I really liked.
And it seems like, at least in the tech circles, it's gotten a little bit of pushback.
But I'm excited to be able to talk about it with you.
Yeah, absolutely.
And, yeah, we'll talk about, I'm sure, any of the aspects, feel free to be critical.
This is not meant to be, you know, one-sided propaganda.
It's meant to be an honest assessment.
of what is the situation.
So yeah.
Totally.
Okay, well, we'll get into it.
Let me just roll the music and we can get started.
Hello and welcome to the big technology podcast, a show for cool-headed, nuanced conversation of the tech world and beyond.
Joining us today is the star of the Social Dilemma movie, the co-founder of the Center for Humane Technology, and someone whose work I've been following for a while,
and I'm looking forward to a good, intense conversation about the movie and his work.
And we'll just go for it.
So Tristan Harris, welcome to the show.
Thanks for having me.
Good to be here.
Okay.
So let's just, you know, start kind of with your life.
Your house burned down in the middle of the Sonoma fires or Santa Rosa fires.
Is that right?
Um, yeah, that, that just happened a few days ago. So we're reeling from that. It's a pretty, pretty significant event. Yeah. How are you holding up and where are you staying? Um, well, you know, luckily we have a lot of different friends. So, um, it's, it's kind of a big deal. I mean, it was our family's house. And we lost basically everything that we own. So it's a good exercise in impermanence, uh, non-attachment. Um, and,
you know, we've still been, I've still been out there doing these interviews because
it's all coincidentally happening at the same time as when the film, obviously, and these
issues in society and the election and how social media is impacting all of that are
occurring. So, you know, just taking it day by day and figuring out, you know, soon what
the future is going to look like. Definitely. Well, you know, I'm sorry it happened. It's sort of,
I mean, living in the Bay Area, it's so crazy to see all these natural events happening.
And then it really brings it home when you speak with someone who it's happened to.
So I hope you hang in there on that front.
And in such a crazy time, I do appreciate you still hopping on the line to speak with me.
Yeah. Thanks.
Okay. So let's talk a little bit about the movie.
So I'd like to talk a little bit about your thesis before we get into the discussion of what's, you know, some of the criticism.
and then sort of start to debate the points back and forth.
But let's talk a little bit, you know, what your main thesis is.
For me, it was sort of like the tech companies are controlling our lives through algorithms.
Is that sort of right?
Like you said, one line that stuck with me, you said social media is a drug.
We have a need to connect with people, and it preys on that.
So how close did I get to the actual thesis of what you were bringing in with the movie?
Well, I think the major point of the film is,
that a business model that is infused in the social communications infrastructure that
three billion people live by and are dependent on is misaligned with the fabric of society
and specifically poses a kind of existential threat to democracy and a functioning society,
the life support systems that make up a society. Because if we don't have a common
conversation or a capacity to trust and to have shared faith in the same information
and to reach agreement, then nothing else works in a society.
And while we've had polarized and hyper-partisan media on television and radio before,
social media has become the background, upstream place where even television and radio, Fox News, MSNBC, get their information, on Twitter, et cetera.
So I think that this business model of doing whatever is best for engagement will always privilege giving each person their own reality: that each time I flick my finger up to see what comes next, Facebook, YouTube, and TikTok would do better to confirm my view of reality, that I'm even more morally righteous and correct about my views and the other side is wrong, than if every single time I flicked my finger, it challenged my view of reality.
Each flick is challenge, challenge, challenge, challenge.
Because the business model is directly coupled with how much time I can get you to spend,
it has created a Frankenstein that is kind of run amok and pulled apart the fabric of society everywhere.
And I think the film makes that point hopefully very clearly.
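To make the incentive Tristan describes concrete, here is a minimal sketch, in Python, of what ranking coupled to time spent could look like. Every name, signal, and weight in it is a hypothetical illustration of the incentive, not any platform's actual code.

```python
# A minimal, hypothetical sketch of engagement-coupled ranking: the objective is
# predicted time-on-site, so posts that confirm a user's views or provoke outrage
# outrank posts that challenge them. The signals and weights are invented.
from dataclasses import dataclass

@dataclass
class Post:
    id: str
    confirms_user_views: float  # 0..1, assumed model output
    outrage_level: float        # 0..1, assumed model output

def predicted_dwell_seconds(post: Post) -> float:
    # Toy model: confirmation and outrage both raise predicted dwell time.
    return 10 + 40 * post.confirms_user_views + 30 * post.outrage_level

def rank_feed(posts: list[Post]) -> list[Post]:
    # Pure engagement objective: nothing here rewards accuracy or challenge.
    return sorted(posts, key=predicted_dwell_seconds, reverse=True)

feed = rank_feed([
    Post("challenges-your-view", confirms_user_views=0.1, outrage_level=0.2),
    Post("confirms-your-view", confirms_user_views=0.9, outrage_level=0.3),
    Post("outrage-bait", confirms_user_views=0.7, outrage_level=0.9),
])
print([p.id for p in feed])  # ['outrage-bait', 'confirms-your-view', 'challenges-your-view']
```

Nothing in the objective penalizes challenging content; it simply loses the sort every time, which is the dynamic being described.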
Yeah, and we talk about the business model.
Then there's also the side of it, which is that the way that they sort of manifest that business model is, essentially, the way that I interpreted it in the movie, using algorithms in order to inflame tensions so that we spend more time on these platforms. They'll make money.
Society crumbles as it goes on. Does that sort of sum it up the right way?
Yeah, we kind of profit off of our own self-destruction because the more conflict there is,
the more people die, the more attention-grabbing stuff there is, the more tribalism there is,
the more outrage and conspiracy thinking there is that degrades the
epistemic and information ecology, the more money they make. The truth is quite boring and
usually not nearly as interesting as being able to assert that Trump does or doesn't have COVID and
it's a conspiracy theory or Biden was wearing an earpiece and it's a conspiracy theory. These
kinds of fanciful things to say are much more attention grabbing than to just assume that these
are two people who are very busy and are showing up for a debate. And that's really the core issue
is that society, you know, social media companies would say we're holding up a mirror to society.
And this would be their critique. In fact, I think Facebook just responded to the social dilemma.
We haven't had a chance to really look at it yet. But in general, what they'll say is we are
holding up a mirror to society. If you have tribal conflict or racism or, you know, people disagreeing
about climate change, we are just holding up a mirror to the fact.
that those fault lines and divisions already exist.
And this is an incomplete and misleading thing for them to say
because they are, in fact, holding up a mirror to society,
but that mirror is a funhouse mirror
that warps and distorts the image that we see in return.
Specifically, it amplifies bully-like behavior, harassment, hate speech,
conspiracy thinking, addiction, outrage,
you know, as Justin says in the film, because this is due to our values-blind economic logic,
that whatever makes the most money or gets the most engagement is what wins.
So long as that's true when corporations go unregulated, you end up with, you know,
a whale is worth more dead than alive and a tree is worth more as lumber than as a tree.
In this business model, human beings, when we are the product, are worth more
when we're addicted, outraged, polarized, narcissistic, and disinformed than if we're thriving
citizens or children. No doubt. And I won't argue with you. I actually agree. Like, you know, it's funny that
the film has definitely gotten a lot of blowback in Silicon Valley. And I sort of watched it after the
criticism started to emerge. And of course, like, you're definitely going to expect a decent
amount of criticism, you know, from industry. But one of the things I agree with is that these
platforms do tend to inflame tensions. But, you know, I wonder, well, we'll get into some of the criticism. I wonder whether it's actually, you know, the platform doing this or whether there's something bigger going on.
But I don't think they help. So, so let's get to the first, you know, issue that people have brought
up with this view, which is, you know, some have said that it's sort of replacing, you know,
one conspiracy theory with another. I want to read some of the stuff that I've seen and get you
to respond if you're all right with it. So Casey Newton, who writes The Interface newsletter for The Verge, although at this point it's going to be Platformer on Substack, said, you know, this is a
cartoon super villain view of the world that strikes me as kind of a mirror image of the right-wing
conspiracy theories, which hold that a cabal of elites are manipulating every world event in
secret. And then Kevin Roose from the New York Times says, I can see how someone who believes in QAnon could effectively replace one conspiracy theory, that a cabal controls the media, with another, that a cabal in California controls the media. What do you think about those claims?
You know, I really respect Casey and Kevin's work a lot, but what's interesting is it seems to be a really misrepresented view of what the film says.
I mean, the film doesn't say there's a group of 10 tech insiders who are deliberately and maliciously mustache twirling, you know, all the way to the bank in bringing out the worst in society or trying to control the media.
It doesn't say that at all.
In fact, it says these platforms have a mind of their own, and they've become a kind of digital Frankenstein.
that no one knows how it works, but all we know is that it tends to reward the worst aspects in
society. And it's the insiders coming to say, look, I helped build this thing. And there's more authority in that, to the human mind, in terms of what's persuasive. It's one thing if you have many researchers, and by the way, there are so many researchers, especially, you know, Black researchers and women of color, who've been sounding the alarm on some of the social impacts of technology for a long time and how it's affected marginalized communities. And, you know, but the film is rhetorically
powerful because there are, it's the first time that I think the insiders who were there at that
time can say, you know, here are some of the harms that are emerging from these decisions
that were made innocuously. And no one knew that it would lead to this, this harm. So I don't
actually think that the film, you could draw the conclusion from the film that there's a secret
cabal of insiders that are trying to manipulate you. You know, maybe there's some extra marketing
for getting you to watch the film saying, you know, they're all trying to do this to you. But
really, if you look at the full content.
Yeah, but it's also baked into the film.
Like, you see those three guys who are standing there and, you know, saying, how are we going
to manipulate this person?
And I hear your perspective, and I also hear their perspective where I saw those scenes.
And I was like, well, it's, you know, maybe Tristan didn't say this explicitly, but it certainly
seems like these personifications are sort of part of the problem that the film is trying to address.
Well, that's interesting.
Let's make sure we meet it head on because I really care about, you know, authentic debate here.
So you've got those three AI characters played by the guy from Mad Men, right?
And there's three AIs.
There's the growth AI that's trying to figure out how do I get you to invite more people, tag more people,
you know, recommend more people, things like that.
You've got the engagement AI that's trying to figure out what can I show you that's going to keep you scrolling, and eliminate the bottom of the bowl and remove the stopping cues and things like that.
Then you've got the advertising AI that's trying to figure out how do we make sure that each session is as profitable as possible.
This is really not far from the truth at all.
There is, in fact, a growth team that, in Facebook's case, actually built something
called the PMYK, people you may know, or PYMK, excuse me, people you may know, where I actually
talked to a Facebook insider from very early on who was there, who was proud of the fact that when you just let people add their friends on Facebook autonomously on their own, they would end up hovering around an average of about 150 friends, which sort of replicates the Dunbar number, the fact that we generally in tribes, you know, in the savannah, would end up with about 150 close relationships. But that wasn't enough. When you run a growth team and you have an AI, you need to figure out how, you know, we need to get people using the platform a lot. And as Chamath, the head of growth, says in the film, what was the key to addicting a user and
getting you for life? It was very simple. He said, all we had to do was get you to seven users in 10
days. And then we had you for life. That's literally what that playbook said. Yeah. So how did they
grow the number of users that you had? Well, they actually kind of injected the user with
social growth hormone, almost like we inject cows to make them produce more milk. So we said,
what if we could get you to invite and grow to more friends? And the way they did that is by literally
building an AI saying, who are friends who, if you were to get them to join, would likely get you to use the platform the most. So, for example, if I'm Twitter, I would say, yes, you followed
these first 10 users, maybe Ashton Kutcher, Demi Moore, or whatever the first celebrities they had you follow are. But then when it recommends, here's more users you might want to follow,
it picks those users based on which of them would be the most engaging so that you would come back
the most often. And it has models of which users, if you were to follow them, would keep people
coming back more often. So that's actually a fair and pretty accurate representation. Again,
not with a mustache twirl, but with an authentic set of these were the growth goals of the company.
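As a concrete illustration of that follow-recommendation logic, here is a minimal sketch under the same assumption: candidates are ranked by how much predicted return-visit engagement they would generate, not by relevance to the user. The account names and predicted-sessions numbers are invented for the example.

```python
# Hypothetical sketch of "who to follow" ranked by predicted retention, as
# described above. The model outputs below are made-up illustrative numbers.
candidate_accounts = {
    # account: predicted extra sessions per week if the new user follows them
    "local_friend": 0.4,
    "calm_news_outlet": 0.6,
    "celebrity": 2.1,
    "outrage_pundit": 3.5,
}

def recommend_follows(candidates: dict[str, float], k: int = 2) -> list[str]:
    # Growth objective: maximize predicted return visits, nothing else.
    return sorted(candidates, key=candidates.get, reverse=True)[:k]

print(recommend_follows(candidate_accounts))  # ['outrage_pundit', 'celebrity']
```

The design choice being illustrated is the objective function, not malice: swap the values for "predicted usefulness to the user" and the same code recommends very different accounts.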
And there's a similar thing going on with advertising. Well, let me just ask you this. Yeah, isn't there a disconnect then between, you know, we talked earlier about,
like, you know, okay, so there's no cabal. It is sort of these systems that we don't know how
they work, but then isn't it a disconnect to portray it as or some sort of inconsistency
to portray it as these three guys who are, you know, standing there. Yeah, they represent
algorithms. But if the whole idea is we want to show this honestly and accurately, isn't it sort of
misleading to portray it with three people who are having a conversation, you know, with each other
trying to manipulate you. Yeah, I see that point. And, you know, I'm speaking to you as
a subject in the film, not the director or filmmaker.
We don't have the creative control of what they chose to do here.
I think that's a fair critique.
I think that, you know, in the cuts of the film that I saw, you know, these are meant
to be systems that are simply tracking various features and then recommending things to
you, which I think that's also what the dialogue represents.
Now, if it looks like there's a little bit of mustache twirling in the way the characters show that, maybe that's something that I'm not picking up as much, because the way that
the script was written, it, you know, it's supposed to be just sort of amoral algorithms that are
maximizing for each of their own goals. Right. So now, you know, it's interesting because we talked a little bit about this, uh, sort of QAnon-like critique, which I, I don't know, I wouldn't see it as a conspiracy theory type of thing. Actually, I think the
film brings up some really, you know, important points that we ought to be thinking about and maybe
like the questions are, you know, in terms of style. Um, but like, okay, talking
about the way that it portrays, you know, like, again, like the overarching, you know,
this is a system that works and you should be aware of it and sort of, you know, controls the
world type of critique. Like, it does talk about, you know, how all these algorithms are
manipulating us. And the film, like, basically links it to, you know, a ton of catastrophic
events, you know, or, you know, there's the rise in self-harm among teenage girls.
there's a rise of nationalism, okay, maybe not catastrophic, but certainly, you know, on the road
there, all these things that we don't want to see in society, maybe the rise of loneliness and
withdrawal. These are other impacts. And so I just sort of wonder, like, what percentage would
you say of these issues that the film talks about and that you've brought up, you know,
in the beginning of this conversation, what percentage are, I mean, even it's hard to ask
percentage-wise. But do we really think that the Facebook algorithm is responsible for all this?
Or aren't there other things going on in society that are responsible for these, outside of the algorithms? Yeah, I mean, there's always going to be a yes-and here because the trend towards
loneliness and isolated atomized individuality, you know, per the book Bowling Alone by Robert
Putnam and these trends, you know, the elimination of shared spaces in public space, you
know, fewer parks, the hollowing out of Main Street, inequality, more drug use, opioid addiction,
various forms of addiction, less meaning. There's multiple overlapping crises that we find ourselves
in. However, if I ask myself, okay, if there's an industry operating on a business model
of addiction and engagement, let's just say engagement, right, I need you to use that platform
for 45 minutes a day. And by the way, everyone else also needs you to use it for 45 minutes a day.
So in general, having you sit there by yourself on a screen is way more profitable to that entire industry than having you spend more time in community with friends over dinner tables with candles.
That's just built into it.
So in other words, loneliness and isolation are definitely exacerbated by that background effect that subtly wants to atomize and pull us apart.
But that's much like every other aspect of our economic system.
It's more profitable for each of us to buy our own lawnmower than to have one that's shared among a community and to have systems for, you know,
doing that, because if we're doing profit maximization, it's much more effective to do that. It's much more
profitable for people to get diabetes and then sell you a subscription plan for treatment than to
have you be healthy in the first place and to have our economic system competing to make us
healthy. In general, there's many perverse incentives across the landscape. And technology is just
giving us specifically social media and this addiction and engagement-based business model
is giving us just another version of that, where it's more profitable for us to look at
conspiracy theories, rabbit holes, that keep us there for four hours long than to have a set
of how-to videos to go do improvements across your life or teaching musical instruments, because
they're just not going to keep you there as long. This is important because this is not meant
to vilify all technology at all. I think if YouTube was a library, like a library of Alexandria,
for how to make improvements in your life, learn skills, learn musical instruments, you know,
do self-medical care, things like this, this would be amazing, right? I think these are the kinds of things
that people do find valuable on YouTube.
We actually, at our house in Santa Rosa, before it unfortunately burned down in the fires, used YouTube to figure out how we would supply ourselves with a generator, so that when the power went out, we would know how to hook it up and mix it with the gas pipeline and all of that kind of stuff.
So there's incredible uses.
I've taught myself to play songs on the piano because of YouTube library.
Now, that's fine.
But the problem is when there's a business model built on automating where three billion people's attention goes in languages that the engineers don't speak.
And you have cases where two years ago, a teen girl who went to watch a dieting video,
what does YouTube recommend on the right-hand side for all those teen girls?
Thinspiration anorexia videos, because those were better at keeping attention.
Same thing if you watch World War II historical videos.
And the right-hand sidebar gives you all Holocaust denial videos.
And you have parents who are sitting their kids in front of YouTube for multiple hours a day
at school during COVID, and then they come to the dinner table at night and say,
Holocaust didn't happen and the earth is flat and they wonder why this is going on and these
trends have to do with this business model that is subtly influencing the way that all of us are
thinking and feeling and believing on a daily basis. Yeah, and I agree with this largely. These are problems. My only critique is, like, you watch some of these montages and you're just like, social media is, you know, the root of all evil in our society. And I would have loved a little bit more nuance to that. I mean, obviously social media is a problem, but the question is, when you look at the percentage, or look at, you know, what degree of responsibility does this stuff have? I mean, I'm still kind of curious.
What do you think? Like when you think about the big picture, we're talking about, you know,
the problems that we've had, you know, for instance, with our economy, that so much of the money
has gone to a small percentage of people. And it's caused a lot of, a lot of folks out there to
lose hope and that influences community. We know religion is down. I mean, maybe technology is
partially responsible for that. But how responsible is technology for all the bad things that we've
talked about, you know, talking about like the rise of nationalism and people not seeing
each other as other human beings, et cetera, et cetera. Yeah. Well, I think, you know, I appreciate you
bringing out this nuance. I mean, the film isn't, I think, ever claiming that, you know, all the problems
in society are coming from social media or that, you know, rising inequality or. Yeah, it sometimes gives that impression. But, I mean, anyway, like, I know it's trying to make a point. And there have been people who were also in the film who were like, we needed to be a little bit more simplistic to get this across. And that's sort of the reason why we want to have this
discussion here is to sort of get a little more nuanced and dig in. So it's not an attack. It's
just a question of like, let's see. No, I, of course, completely, completely appreciated.
And black-and-white thinking is one of the externalities of the attention economy, because it rewards simpler, shorter, black-and-white metaphors for the problem as opposed to longer, complex, nuanced, high cognitive chunk sizes
for dealing with these issues.
But I think that if we made a list of the claims of the film about which specific harms,
addiction, loneliness, teenage mental health problems, conspiracy thinking, breakdown of shared
truth, the film is very specific, I think, about the harms that, now, maybe if you're pointing
to just chaos on the streets of every major city in the world, I understand if that's kind of what
you're pointing to, but if we use language to say, okay, which claims is the film actually
making? And for each one of them, we can find clear evidence of an asymmetric responsibility.
I'll give you a clear example. In Facebook's own leaked documents from that Wall Street Journal piece,
this has now become a famous stat. They found that 64% of the extremist groups that people joined
were due to Facebook's own recommendation system. In other words, I don't know if you know this,
but back in 2018, you know, they changed their mission statement from making the world more open and connected.
Oh, yeah. You know, I've been covering them, you know, so this is definitely, but sorry, go
ahead.
No, no, yeah, sorry, I didn't mean that in a, yeah.
So, you know, as, so they change their, for listeners who don't know, the, you know,
they changed their business mission to bringing the world closer together.
And the way they're going to do that with Facebook groups.
And we said this in our 2019 SF Jazz presentation that's in the film, that they said,
So what did we do?
We built an AI that would recommend groups for people to join.
And then Zuckerberg claims in this blog post that, and it works, exclamation point.
You know, we were able to get people to join 50% more groups than they would have if we hadn't built this AI to recommend them.
And Renee DiResta, one of our colleagues who's in the film and studies, you know, Russian disinformation and some conspiracy groups,
she talks about her own experience as a mom where she had joined a make-your-own baby food group on Facebook.
So organic, do-it-yourself baby food.
And you can imagine what was the most recommended Facebook group to her when she joined that group?
Those anti-vaccine conspiracy theories, which is another kind of related, you know,
do-it-yourself medicine-type approach to being a mother.
But then, of course, once you joined those groups, it recommended PizzaGate, Flat Earth,
chemtrails, et cetera.
And there you have that stat that 64% of the extremist groups that people joined were due to Facebook's own
recommendation systems.
And in the case of YouTube, we know that of the billion hours that are watched daily, 70% at least, by the way, this stat is two years
old because I think they've stopped wanting to brag about how good their recommendation
system is after this pushback. But they briefly claimed that more than 70% of all the watch
time, so 700 million hours out of that billion hours
is due to the YouTube recommendation system. And we know that they recommended flat earth videos
hundreds of millions of times. They recommended Alex Jones, Info Wars, Conspiracy Theory videos,
15 billion times. So that's more than the traffic of, you know, the Washington Post, BBC, Guardian, and Fox News combined. So if you make a list of these claims on
addiction, loneliness, mental health, conspiracy thinking, there's clear evidence for each
one of those claims specifically. Right. But okay, I'm going to just go back to the main
question, which is, is social media mostly responsible for this? Is it one factor of many? Like, why don't you give us your personal opinion in terms of how you would contextualize
its responsibility here?
Okay, so let's take a look at conspiracy thinking.
And before I say this, I want to mention that COINTELPRO, MKUltra, these are real conspiracy theories.
So I don't want to say, or rather, these are real things.
So if I use the phrase conspiracy, we also know the CIA created the term conspiracy theory
to sort of dismiss things that might have been legitimate.
So I want to make sure that we're all self-aware.
This is not meant to vilify any question of the establishment narrative.
But if you were to ask, okay, so we have, you know, a third of the Republican Party inside of the kind of QAnon movement. We've got, we've got flat earth conferences that are very well attended. We've got, you know, more 5G coronavirus, Bill Gates, you know, Satanic cult conspiracy theory stuff than we've ever had before. I think we've seen more of a rise of this thinking in the last two to three years than we've ever seen in modern, you know, I think the last 30 years, I would at least say. I've studied cults earlier
in my career, so I'm very familiar with the kind of dynamics of group think and, you know,
self-enclosed belief systems that, you know, ways that evidence is used to even, you know,
further justify that we were right, Leon Festinger's work on when prophecy fails, that, you know, when I say that the world's going to end on May 22nd at 2 a.m.
Because the stars are aligning this way or that way. And then what happens when it doesn't happen
that way we just re-justify and double down and say we got the math wrong. It's the same
formula, but we were using the BC calendar instead of the AD calendar or whatever you want to
say here. So, you know, when you think about conspiracy thinking and you have Facebook doing
these group recommendations, each of those groups are a self-enclosed echo chamber. And we know
that from Facebook's own research that if more than 50% of their recommendations came from
them, as opposed to users going around and searching for groups to join, we have clear
responsibility that that was on the side of Facebook. We know from the research that the best
predictor of whether you'll believe in a new conspiracy theory is whether you already believe in
one. That's the best predictor of whether you'll believe in a new conspiracy theory is whether
you already believe in one. So if you daisy chain these facts together, they add up to a world where
it makes sense that we're all more paranoid. Okay. So I think what you're saying is that it is the main
factor here, social media. Yeah. Well, I mean, I'm not sure if I would say. Maybe I'm part of the
problem asking you to make a big statement like that. But I do want to get your thoughts. Sorry, go
ahead. Yeah, I'd probably phrase it a little bit more delicately, which is that I would say social
media has been a dominant force in the rise of conspiracy-oriented thinking and paranoia and
distrust in the last five years. And we have to remember that we're actually about 10 years into
these recommendation systems warping society. I think of YouTube, you know, if you remember in
2015, 2016, just how toxic YouTube used to feel. I don't know if you remember the kind of
background radiation of hate. We had a former YouTube recommendations engineer, Guillaume Chaslot,
who's in the film, who built a website called algotransparency.org. And he actually
monitored what were the most common verbs that showed up in the right hand sidebar,
meaning, you know, like if you look at all the recommended videos across YouTube in English,
you know, what words were used the most? And it was, I think the list was hates, obliterates,
destroys, owns, you know, right?
So it's like Jordan Peterson,
destroys social justice warrior in debate, right?
And so this is the background radiation of hate
that we were dosing our population with
for, you know, again, more than 3 billion people.
And we were doing that for years.
So I think we have to look at these consequences over that time period.
Yeah, and so how do you square that, like calling it,
if it's not the majority, the main factor,
but it's a dominant force?
How do you square that with like some of the comments
that you made in the film, for instance,
one thing that struck me is when you said,
this is Checkmate on Humanity,
tools to destabilize every country everywhere.
Is that putting it in the proper context?
Well, that quote actually that they use in the film
when I say it's Checkmate on Humanity was in reference
to a specific thing from that presentation
that was not quite actually in the film.
And what it had to do with was what we diagnose
as we call it the inversion point.
So previously, you know, people in AI futurism, you know, effective altruism circles, AI safety have all been worried about the singularity point, the point when technological intelligence or strength, sorry, when AI outcompetes human intelligence and strengths, because that's when it takes our jobs and takes off and all that.
But we miss the much earlier point when technology undermines human weaknesses, which happens much earlier in that timeline.
Yeah, that was in the movie.
That was a new thought that was interesting to me, yeah.
Sorry, but go ahead.
Yeah, yeah, yeah.
But then the checkmate point actually was following a different part of the presentation that was not there, which was actually around deep fakes and the ability to completely break the basis of what heuristics our minds used to know whether to trust information.
So, you know, how do I know that you're trustworthy?
Well, you know, maybe you squint your eyebrows in a trustworthy way.
Maybe you use your voice in a trustworthy way.
Maybe you have a Stanford shirt on that says you went to Stanford, and I'm the kind of person who appeals to authority: if you went to Stanford, clearly you must be smart or thoughtful or ethical.
Whatever it is that we use as a basis to know whether something is trustworthy or not,
that is being reduced down to a simpler, simpler set of signals.
On Twitter, it's how many followers do you have?
Does it look like you're in Kansas?
Does the tweet timeline look real?
And does your photo look like it's authentic?
That's a small number of discrete signals that are increasingly fakable.
And the checkmate humanity was the point that I could completely undermine your faith
that something was either human-generated or machine-generated.
And when that point gets crossed, there's a sort of checkmate on humanity.
In addition to the fact that these systems have controlled our information environment
and by doing that they're kind of controlling human behavior.
So there's a bigger point there, too.
Are you upset that the film seemed to have taken that line out of context then if it was referring
to something else?
I mean, we talked a little bit about the way that these algorithms, you know, help prey on fear, for instance.
And, you know, I'm going to kick it to you. But isn't that sort of a case of the filmmakers doing some of the same stuff they're decrying?
Yeah, I mean, I think that, you know, the films have to do editing to try to compress information down.
And they thought that it was probably possible to make the point that it was checkmate humanity from, you know,
because it's really an extension of the points that are already being made,
that if you continue to undermine more and more and more of human weaknesses and to therefore take down and erode the kind of life support systems that make
up a social fabric, you kind of get to checkmate humanity from there, and I think that's what
they were probably referring to. But I take the point that, you know, the film has, you know,
music that is maybe exaggerating or, you know, setting the tone. I guess it's not just music
on that front. It's the fact that, like, you're talking about deepfakes, which is a totally
different technology from, you know, the social media algorithms and engagement machine.
And to juxtapose that, I feel like there should have been more context on that one. I mean, I'm glad we're discussing it, but I'm also sort of scratching my head to see why
they would use that without the context that you just delivered. Yeah, I mean, that may be fair. I mean,
I think that the point, I mean, really, again, it's through hacking more and more and more of human
weaknesses as you arrive at that checkmate point. So, you know, the Marshall Islands of technology
hacking human weaknesses was when it overloaded our short-term memory. Seven plus or minus two
things we can hold in short-term working memory, as we know from cognitive science.
And we feel that. That was our first kind of felt sense of technology overriding human
weaknesses. And we felt that as information overload or I have too many tabs open or what was I
doing? I came here to open up that tab and now I can't remember why, or I can't stay on top of
email. And that was kind of the first point. And then you can map each of the other points, of polarization, giving us our own filter bubble, hacking human weaknesses in how we perceive other people's reality. These are all just on a continuous landscape of hacking more and more of human weaknesses until you arrive at checkmate.
But maybe that wasn't as clearly presented in the film.
So it's a fair critique.
Yeah, and totally.
And look, I mean, I'm giving you a hard time here, but no one talked about this stuff.
I mean, it was rarely talked about, I think, before you started speaking out about it.
And it's not all going to be perfect.
It's why I'm glad you're doing the work that you're doing.
And, yeah, I think that, you know, there is definitely, I think, fair critiques of the movie.
But it's also good that we're having this discussion. And I'm glad that the movie started this discussion. I'm glad you brought it up.
And I think I agree with you. And I think also, you know, we encourage people after they watch
the film to really educate themselves and go deeper. You know, on our podcast, Your Undivided Attention, we interview many of the subjects who are in the film, who go just into detail.
And it's not exaggerated at all. It's just an honest reflection of, you know, what they found on
Russian disinformation or YouTube recommendations. Yeah. No, I'm laughing because they did have the
website up at the end of the movie. But before anyone could take it down, it already started autoplaying the next thing, probably based off of an algorithmic recommendation. So, and that's Netflix queuing that up for you. And we'll get to that in the next segment, because I do have some Netflix questions, but I found that fairly interesting. Okay, we will take a short break and be right back after this with Tristan Harris.
Hey, everyone. Let me tell you about the Hustle Daily Show, a podcast filled with business,
tech news, and original stories to keep you in the loop on what's trending. More than two
million professionals read the hustle's daily email for its irreverent and informative takes on business
and tech news. Now, they have a daily podcast called The Hustle Daily Show, where their team of
writers break down the biggest business headlines in 15 minutes or less, and explain why you should
care about them. So, search for The Hustle Daily Show in your favorite podcast app, like the one you're
using right now. And we're back here for the second half of our show talking with Tristan Harris, one of the
stars of the social dilemma movie about the movie. And man, it's been a fun conversation so far.
Tristan, I appreciate you, you know, being willing to come on here and really discuss the meat
of the film and face some of the critiques head on. One question I have for you to start the
second segment is like the question of personal responsibility. I mean, one of the questions I got
on Twitter when I mentioned that I was going to do this interview is people wanted me to ask you,
like, whether you think people have, I mean, actually, I said, what would you ask the people who made the movie, and I want to make sure people know that you were a subject in the film and it wasn't yours. But, you know, one of the questions that came up was, do these people believe that, you know, the population has free will? And how much of this comes down to, you know, the actions of the platforms versus how much comes down to the actions of the people? And what do you think about that? Is there some level of responsibility that we ourselves have, and can we blame, you know, all this stuff going on in our lives on Facebook and on YouTube?
Yeah, well, first I want to be really clear again, I don't blame, you know, the consequences of my life or the entire world on Facebook or YouTube. I think where responsibility lies has to do with where there is asymmetric power. So if I have more than 50% influence over your actions, um, and, you know, you're choosing from menus that I am designing for more and more of your choices.
If I look at the surface area of choices that you make in your life,
and what percentage of that surface area occurs on a smartphone
and from a handful of user interfaces that are designed by, you know,
a handful of 21 to, you know, 40-year-olds in California who are mostly in the Bay Area.
Well, that number, whatever that percentage is, has been going up over time by a lot, right?
And one of the things, as we talked about in the film,
my background as a kid was in magic.
And what astonishes me in magic is how many people think that they make genuinely free choices when magicians constantly are manipulating the basis of those choices.
I mean, the simplest thing is by controlling the menu, you control the outcome, right?
So, you know, and in the field of rhetoric, we know from Bertrand Russell, Russell conjugation, that you can conjugate the emotional feeling you want someone to have about something before they hear it.
So you can say, well, the embattled leader, you know, or strong,
the embattled leader of that company, I'm trying to think of an example.
I don't know who's an embattled leader, Travis Kalanick, right, from Uber.
So you could say embattled CEO, Travis Kalanick.
And I've told you how I want you to feel about Travis before you even think for yourself.
Well, do I feel good about Travis?
Do I feel bad about Travis?
In general, you know, our choices, our thinking, our feeling are being pre-conjugated
by interfaces that we don't see or control always.
The work of George Lakoff is really good on this in terms of language. In magic, and in general
in design, we use the phrase choice architecture that we live inside of a choice architecture
and a menu. And in that choice architecture, we're making and privileging certain features of choices
like, you know, when I buy that food at Starbucks, does it show me, does it privilege the price
of that food with a dollar sign or does it privilege the calorie count, you know? And that would
shift the basis of the choices that I would make, to be privileging one piece of information over another. If I change the social psychology where everyone's rushing
to get the illuminated, you know, shiny-sign McMuffin sandwich because everyone else is going for it, or maybe that's not the best example, but using social proof by saying, you know, 3,400 other people like this post, don't you also want to like it? This also influences
our psychology. In fact, social proof, you know, is one of the most powerful ways of making it seem that something is legitimate. I mean, take conspiracy theories, you know, if everyone's believing it, then how can it be false? If a majority of people believe it, usually that would mean that it must be true. But if you think, if I'm a deceptive actor, it's not very hard for me to slowly grow a population to get more than 50%
of people to believe in something. So we don't like to admit to ourselves the extent to which
our sense-making and choice-making are driven by factors in our environment and are outside of our
control. And the degree to which we are not aware of those things is the degree to which
we are controlled, meaning if I know about social proof, and then when I see that 3,000 other
people like it or the majority of people believe it, and if I say to myself, well, I'm going to ask
what, on what basis would I know that that was true? That's one micro degree of free will
because I've created an awareness about one of the things that would otherwise have me
turn into an automatic believing machine. So that's a big complex answer, but I think that
when you ask about free will, you know, the level of asymmetry between people who are designing technology and structuring the choices and the features and the colors
and the notifications and the appearance of news feeds and the fact that news feeds don't show
you if an article that was posted was five years old versus it was yesterday, allowing people
to not post fake news, but fake recent news. It was not actually recent. Those are all
decisions made by designers in California and have big consequences. Yeah, totally. I mean,
I remember, like, the opening scene is you sitting down and immediately checking your phone as you go. You're aware of this stuff and you can't help it.
This is actually really important for people to get.
No one should feel bad when we're using technology and feel like we got sucked in.
I have been studying these things for such a long time.
And do I think that I'm immune when I post something about the social dilemma and I get lots of likes versus a bunch of angry comments? I mean, social approval is one of the things we're most evolved to care about. And, you know, if 99 comments on a post are positive and one is negative, you know, does our mind remember the 99? Or does it loop and loop and loop on the negative, right? In general, we're evolved to look for the negative because that's where, that's helpful for us evolutionarily. But with social media, it's never been easier to see a tree
of people who don't like you. Or if you're Black or LGBTQ, people who are more harassed and discriminated against on social media, you have infinite hate trails that you can keep clicking on through the tree for hours.
You know, there's so much vitriol that is so easy to take us over that I think, you know,
we have to be aware.
Yeah, I mean, there's the vitriol, and then there's also just sort of the connection
to other people.
Like, I mean, I cover social media for a living, and I knew what the deal was when I sat down and watched the movie. I feel like I probably picked my phone up like 60 times in the middle of that movie and probably couldn't make it through, you know, 10 minutes straight because my brain is so fried by it.
I want to, we can talk about this for hours.
I know we only have another few minutes.
And now I have some, like, nitty-gritty questions to ask you.
So talking about the way that platforms are structured, how much of this do you think is algorithms? And, you know, one thing the film didn't discuss is the share button and the retweet button.
And I've covered it.
And I think they have a profound effect in terms of the type of information we share and the type of information that populates these platforms, because
people write for retweets and shares. So I'm kind of dubious that it's the algorithms. And I think
that the share and the retweet are much worse, but I never hear about them. So what's your take on
that? Oh, yeah. It's interesting because in my mind, the film does include that, but maybe it's not
as evident. I guess, sorry, this is again me saying what I would have liked in the movie, which I know is annoying, but there's a real clear issue with the fact that when we don't think before we share, we'll pass along stuff, you know, that's fake or sensationalized or, you know, confirms our emotions without a second thought. Whereas when we pause to think, we are often much less susceptible. Even Twitter's running this experiment right now, making people click before they retweet something, or asking them if they want to. And they have shown that to, you know, improve the information ecosystem on the platform. I just wish that that was foregrounded a little bit more in this discussion versus algorithms and engagement, which I think is totally
different. Yeah, I mean, I think the reason that my mind includes the share button in this is that
those algorithms wouldn't work if people weren't hitting share buttons and retweet buttons,
and so they include them in the premise. But I understand what you're saying. And, you know,
the example of what Twitter is doing with showing you a prompt saying, have you read this before
you share it? I mean, these were things that were being advocated four or five or six years ago, frankly, among people in our community. So it's taken a long
time to get some, you know, some of these things in. I would use the metaphors of epidemiology.
You know, I think what's interesting about the coronavirus is it has imbued the culture
with a new kind of way of understanding the world in terms of infection, right? And in terms of
super spreaders, in terms of shedding, you know, who is an asymptomatic carrier and who's
a symptomatic carrier. So each of us are spreading information and infecting others with beliefs
and biases. And some of us are super spreaders. Some of us are shedding biases whenever we like
things and retweet things. We're shedding biases for how other people should see things. And
some of us are doing that symptomatically. Like we are very obviously polluting the information
environment. Some of us are doing that more asymptomatically, by maybe boosting things in more subtle ways
by clicking on them, and the fact that we click on them is actually making the algorithm
upregulate them to other people, even though we don't explicitly share.
So I think when you think about it that way, the share button and those pauses are kind of
like loading vitamin C into each carrier and saying, you know, maybe we are going to put a mask
on, and so we're not going to share everything to everyone else.
That's what that share interstitial is on Twitter.
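Mechanically, that interstitial amounts to a small friction check before an unread link gets reshared. Here is a minimal sketch of the idea, assuming a hypothetical confirm_share helper; it illustrates the pause-before-sharing concept, not Twitter's actual implementation.

```python
# Hypothetical pause-before-sharing prompt: one extra decision point when the
# user tries to reshare a link they never opened. Illustrative only.
def confirm_share(url: str, user_opened_url: bool, ask) -> bool:
    if not user_opened_url:
        # The interstitial: surface the fact the link is unread, ask once.
        return ask(f"You haven't opened {url}. Share it anyway?")
    return True  # no friction if the user actually read the link

# Example: a user retweets an article they never clicked.
decided = confirm_share(
    "https://example.com/article",
    user_opened_url=False,
    ask=lambda prompt: input(prompt + " [y/N] ").strip().lower() == "y",
)
print("shared" if decided else "not shared")
```

The point of the design is the single added pause, which is what the research Tristan cites suggests reduces reflexive sharing.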
But is it like more profound than that?
Because, you know, one of the things that people have talked about when you look at algorithms,
especially, is what do you make of the fact that, like, some of these same problems occur on
WhatsApp? Like WhatsApp has the forward, but it doesn't have ads and it doesn't have an
algorithm that shows you stuff and you still see people spreading conspiracy theories and
WhatsApp isn't dependent on the time that you spent there. So isn't that like a fairly compelling
counter argument to the one that you're advancing? Well, so WhatsApp's business model is still
dependent on its parent company, which is based on an engagement-driven model. That's why
the VP of Growth at Facebook, Chamath, says, you know, how did we make these things work?
You know, we hack into human vulnerabilities, WhatsApp's done it, LinkedIn has done it,
Facebook has done it.
You know, he uses that line and he includes WhatsApp because it is hacking that same thing.
It is delivering reasons, excuses for you to go back and check messages.
It is creating social signals for us to respond to.
It makes us feel guilty when we don't respond to them, whether that's feeling guilty that we didn't like our partner's post back, or feeling guilty that we didn't respond to that message, using read receipts so that now you know that
I saw that message, and it was a big message from you that talked about maybe your house
burning down or something like that. And if I don't respond to that, now I feel really guilty.
This is all tapping into really deep human psychology and vulnerability. So it actually is
driven by that same business model, just not explicitly advertisement-based. I mean, WhatsApp is
essentially an advertising-based business model. It's just not happening on WhatsApp. It's happening
on the other platforms that subsidize it.
So is the idea that basically it's driving all these engagement methods so that you end up going to, like, Facebook and Instagram? Because, yeah, I mean, there's no ads there. So it's sort of interesting that some of the same stuff is happening. Yeah. I mean, that's why the ads themselves, the rectangles of the advertisements, are not what the critique of the film is about. It's about a business model that is dependent upon the zombification of human beings. Domesticating people into, you know, addicted, distracted, outraged, polarized, and disinformed humans writ large. And that
business model powering WhatsApp still has benefits to turning us into addicted, distracted,
responsive human beings. And so again, if we want a society that's addicted, distracted,
and hyper responsive to others, then maybe that business model is aligned with the social fabric. But
in general, these things weren't designed, you know, by social theorists who say, well, what makes a
healthy social fabric or weren't designed by child psychologists to say, what's good for children,
they were just designed based on, hey, did we get a flywheel turning and get engagement and
growth going up and to the right? Yeah, totally. And it's been interesting, you know, being in the
valley for a while and seeing the way that executives, and this might have been in the film too,
but the way that executives hand this stuff to their kids and the caution that they take,
whereas like none of this stuff comes with a user manual to anybody when it comes out of the box.
I mean, I think the point there to make is, you know, would you trust a doctor if you asked, well, would you get the surgery for yourself or for your own kids, and they'd say, hell no? You know, would you take their advice? You know, if you ask a lawyer, hey, would you argue the case this way if it were your own children, and they say, oh, my God, I would never do that, and you say, well, why would you give that advice to me? The CEO of Lunchables foods didn't give his own kids Lunchables. I think that's all you need to know if you're a parent.
And I think one of the basic principles of ethics is the ethics of symmetry, doing unto others as we would do to ourselves, or even doing to the most vulnerable, our own children.
And if we lived by that ethical protocol, I'm sure we would live in a much healthier, better
society. And that would be the unit test of if we had made humane technology for kids, is that parents in the tech industry were giving it to their own children and didn't feel bad about it. Yeah. Netflix. Why Netflix? I mean, Netflix is sort of ground zero for all these problems, right? It says it's competing against sleep. And like I said, it gives you the recommendation algorithm to make sure that you're watching the next thing,
sort of exactly the problem.
So, and you guys, I mean, you guys also created, or the film created accounts, I think,
on Facebook and Instagram and Twitter.
So, like, is this, I mean, what's going on there?
I hear you asking multiple questions.
And one is about the hypocrisy of starting social accounts on the very services that you're
criticizing.
And that critique is easily answered by the fact that if you want to change the public perception and create the only
thing that will actually change these systems, which is government regulation through a massive
cultural movement of shared understanding about the problem, kind of the climate change of
culture, how are you going to reach millions of people except through one of these limited
platforms, whether it's YouTube, Facebook, Twitter, etc. So I think the fact that we have to use
these platforms, to try to critique them and to build support for changing them, speaks to their
monopoly power. And the fact that it would be seen as hypocritical actually empowers
the antitrust arguments that are proceeding right now on Capitol Hill. So I think it's the fact that
we don't have another place to go. The fact that children who want to opt out of Instagram don't
have another place to go but another addictive, manipulative system like TikTok, you know, this kind of speaks to the problem. So we actually use that as further ammunition: that is exactly right, that's what the problem is here. Now, in terms of, you know, why would the film,
you know, Netflix, you know, the film is launching on Netflix. I think one thing that gets confused
about these platforms is the belief that, you know, Netflix is a video site, so therefore it's only
competing with other video sites like YouTube or back in the day, Facebook, you know, in 2009,
was only competing against other social networks like Twitter. And I remember there was this
day, I was at a cafe in San Francisco, Samovar Tea Lounge. And I was talking with a friend
who was, you know, deep in the growth team at Facebook. And by the way, I mean, that's where
this work comes from because I know so many people who've told me these decisions over and over and
over again kind of what the calculus was. And he said to me, you know, people think that our
biggest competitor at Facebook is Twitter or is one of these other social networks or MySpace
or something might have been. But actually, our biggest competitor is probably YouTube.
because they're not competing for social networks.
They're competing for time spent.
And so it doesn't really matter whether it's Netflix or YouTube or whatever.
Everyone is competing in a finite attention economy.
And that's the problem we have to face, is that because it is a commons, it is an attention
commons, it is a consciousness commons, we are all sharing one airspace of a finite amount
of attention.
And much like in the climate movement, there's something called Earth Overshoot Day, where
it's the day in a year that we have overshot the Earth's resources, surpassed the
replenishment rate. And that day moves earlier and earlier because we keep consuming way more
beyond our means with an infinite growth-based economy. The same thing is true for the attention
economy. We have a shared commons and we are overshooting the limited capacity of attention
that we have as a culture with essentially trivia. And this is where I think Aldous Huxley's
book, Brave New World, or, you know, Postman's book, Amusing Ourselves to Death, is
really about, you know, that instead of this Orwellian dystopia of censorship and Big Brother and surveillance, we have this other dystopia, the Huxleyan dystopia of Brave New
World, where we give people so much amusement that they amuse themselves to death. We give people
so much trivia and trivialities and egoism and passivity that we kind of devolve not through
restricting ourselves, but by overwhelming us with our vices. And I think that's the
dystopia that we're actually living in strangely simultaneous with a kind of Orwellian surveillance
dystopia. So we've kind of gotten both. And I think that's what we have to fundamentally change
is have a more healthy and humane attention economy that respects the finite commons that we have
to share for airspace and have it reflect the things that we would want our common attention
landscape to reflect. Yeah. And you mentioned government regulation. That's kind of tricky today
given the fact that the Trump administration and the Department of Justice, which is politically motivated, is the one that's pushing it now.
That's something I would love to talk to you about on a further episode. Maybe we'll have you back when Google's going through its DOJ, well, it's not even investigation. It's going to be an action. We just don't know exactly what the outline is yet. But that's coming up. I'd love to have you back on to talk about it. Meanwhile, for listeners out there, I will tease it. We have Ben Smith from the New York Times coming on next week.
And he and I go a little bit into depth about what it means for the fact that we're now having these platforms being investigated by governments that actually want to move them one way or the other politically and how that's going to play out over time.
But for now, I want to say thanks to everyone for listening.
And thank you, Tristan, for coming on and dealing with all these questions.
It's not easy.
And I appreciate you having a moment to come on and speak with me about this.
It's a really great discussion, Alex.
I really want to say I appreciate all of the critical questions that everyone's exercising here
because this is not about creating a moral panic.
It's about really understanding the kind of geometry of a very specific threat that's a big deal.
So thank you for asking and really excited to come back another time and talk about maybe some of the other aspects.
Totally, yeah.
And I definitely appreciate the work that you're doing.
Keep on doing it.
I don't think any activism is going to be perfect.
But you're speaking up about something that I think we need to have a conversation about.
So I think that's good.
I think it's good.
Definitely.
Thanks.
Okay, everyone.
Appreciate you listening.
If you are new to the podcast, feel free to hit subscribe.
We have episodes every Wednesday.
We'll be back next Wednesday with Ben Smith from New York Times, as I mentioned.
And if you're a repeat listener and are enjoying what you're listening to, if you could hit a rating on your podcast app of choice, that would help us with discoverability.
Juice those algorithms, make sure we get in front of more people, play the game, and then we can critique it some more. All right, thanks, everyone. We will see you next Wednesday.