Making Sense with Sam Harris - #310 — Social Media & Public Trust
Episode Date: February 1, 2023
Sam Harris speaks with Bari Weiss, Michael Shellenberger, and Renee DiResta about the release of “the Twitter files” and the loss of trust in the institutions of media and government. They discuss... Bari and Michael’s experience of participating in the Twitter files release, the problem of misinformation, the relationship between Twitter and the federal government, Russian influence operations, the challenges of content moderation, Hunter Biden’s infamous laptop, the need for transparency, platforms vs. publishers, Twitter’s resistance to the FBI, political bias at Twitter, J.K. Rowling, the inherent subjectivity of moderation decisions, the rise of competitive platforms, rumors vs. misinformation, how Twitter attempted to control the spread of Covid misinformation, the throttling of Dr. Jay Bhattacharya, the failure of institutions to communicate Covid information well, the risk of paternalism, abuses of power, and other topics.
If the Making Sense podcast logo in your player is BLACK, you can SUBSCRIBE to gain access to all full-length episodes at samharris.org/subscribe.
Learning how to train your mind is the single greatest investment you can make in life. That’s why Sam Harris created the Waking Up app. From rational mindfulness practice to lessons on some of life’s most important topics, join Sam as he demystifies the practice of meditation and explores the theory behind it.
Transcript
To access full episodes of the Making Sense Podcast, you'll need to subscribe at samharris.org. There you'll find our private RSS feed to add to your favorite podcatcher, along with other subscriber-only
content. We don't run ads on the podcast, and therefore it's made possible entirely through
the support of our subscribers. So if you enjoy what we're doing here, please consider becoming one.
Today I'm speaking with Bari Weiss, Michael Shellenberger, and Renee DiResta.
Bari is the founder and editor of The Free Press and host of the podcast Honestly.
From 2017 to 2020, she was an opinion writer and editor at The New York Times,
and before that she was an op-ed and
book editor at the Wall Street Journal, and a senior editor at Tablet Magazine. And I highly
recommend that you sign up for her newsletter and check out what she's doing over at the Free Press,
and you can find that at thefp.com. Michael Shellenberger is the best-selling author of San Fransicko: Why Progressives Ruin Cities, and also Apocalypse Never: Why Environmental Alarmism Hurts Us All.
He's been called an environmental guru, a climate guru, North America's leading public intellectual on clean energy, and a high priest of the pro-human environmental movement.
He is the founder and president of Environmental Progress,
an independent nonprofit research organization that incubates ideas, leaders, and movements,
and a co-founder of the California Peace Coalition, an alliance of parents of children killed by fentanyl, as well as parents of homeless addicts and recovering addicts.
And he also has a newsletter over on Substack titled Public. And finally, Renee DiResta is the Technical
Research Manager of the Stanford Internet Observatory, a cross-disciplinary program of
research, teaching, and policy engagement for the study of abuse in current information technologies.
Renee led an investigation into the Russian Internet Research Agency's multi-year effort
to manipulate American society, and she has studied influence
operations and computational propaganda in the contexts of pseudoscience conspiracies, terrorist activity,
and state-sponsored information warfare. She's advised Congress, the State Department,
and other academic, civil society, and business organizations on these topics. She also regularly writes and
speaks about these issues and is an ideas contributor at Wired and The Atlantic. And
she appeared in the Netflix documentary you might have seen, The Social Dilemma.
So this is a conversation about what I consider to be a very important issue. We focus on it through the lens of the so-called Twitter files,
but it really is a conversation about the loss of public trust in institutions and the way social
media seems to have facilitated that. And one thing you might hear in this conversation at
various points is a tension between what is often thought of as elitism and populism. And I should say up
front in that particular contest, I am an unabashed elitist. But that doesn't mean what most people
think it means. For me, it has nothing to do with class or even formal education. It has to do with an honest appreciation for differences in
competence, wherever those differences matter. When I call a plumber, I have called him for a
reason. The reason is I have a problem I can't solve, right? I don't know a damn thing about
plumbing. So when my house is flooding with sewage backing up from the street,
and the plumber arrives, that man is my God. Jesus never received looks of greater admiration
than I have cast upon my plumber in a time of need. And so it is with a surgeon or a lawyer or an airline pilot whenever there is an emergency. There is such a
thing as expertise, and we are right to want it, because the alternative is ignorance and incompetence
and needless suffering and untimely death. Without plumbers, we live in our own filth. And we've been doing that online for many years
now. And it's getting disgusting. Of course, I'm not saying that we should blindly trust experts.
And I'm not saying experts haven't failed us in shocking ways. But we are shocked against a
background assumption that expertise is a real thing,
and that our institutions have a responsibility to provide it. If you didn't think expertise was
real, then there would be no basis to be disappointed when our most esteemed institutions
fail to function. And of course, there are whole fields of pseudo-expertise. I've often criticized
organized religion for this.
Ask a Catholic bishop about the Holy Spirit or about the problem of human evil,
and you will get a lot of fake knowledge aimed at your brain. We could add a few branches of
the humanities here, along with scientific results that don't replicate but are still
widely believed. And there are surely situations where
real experts thinking about real problems can have their thinking distorted by bad incentives.
It is absolutely true that institutions become corrupt. All of this is true, and yet none of
this suggests that the solution to our problems is for everyone to just, quote, do their own research.
Of course, now we have ChatGPT, which would seem to make doing your own research more promising.
Eventually, I think it will.
I think that many things will become like chess.
What's the best next move in a game of chess?
Should you ask the best human player on Earth?
Or should you ask a computer?
The answer now is you should ask a computer,
and that will be the answer for as long as computers are computers and we remain apes.
I think many other areas of human intelligence will be toppled in just this way.
But not yet.
Someone on my team asked ChatGPT why I left Twitter,
and it produced the following paragraph.
Sam Harris quit Twitter in December 2018.
I wish I had quit in December 2018.
It took me four years longer than that.
He cited the platform's inability to handle discussions about controversial topics
without devolving into toxicity and harassment
as the main reason for his decision. I agree with that, but that's not really what I said.
At least I'm not aware of saying that anywhere. In a blog post, Harris wrote,
quote, Twitter is now one of the worst possible forums for the exchange of ideas,
and that he had grown, quote, increasingly disturbed by the way
the platform, quote, amplifies the very worst aspects of our culture. I'm actually not sure
if those are accurate quotes from somewhere or not, but I definitely didn't write anything in
a blog post. He also expressed his dissatisfaction with the way Twitter is used to silence dissenting
voices and spread misinformation. After quitting Twitter, Harris continued to communicate with his audience
through other means, including his website and podcast.
Okay, well, it appears to know that I have a podcast.
But it seems that our robot overlords haven't quite arrived yet,
and so we still need to rely on human experts for a while.
This contest between elitism and populism comes down to the following claim.
Not everyone's opinion is valuable. On many, many topics, my opinion isn't valuable. I shouldn't
even have an opinion. Having a strong opinion when you know nothing about a topic, it's your political right, sure,
but it's also a symptom of a psychological problem. And having a society filled with such
people becomes a social problem. And social media has been a vector of strong, divisive,
unfounded opinions and lies for over a decade. I mean, really, you just have to react
to that thing that AOC said, about that thing that Tucker Carlson said, about that thing the
cops may or may not have done in a city you've never been to and will never go to even if you
live a thousand years. And then you need to respond to all the people who didn't understand
what you meant, or who were just pretending not to understand what you meant.
And you're going to do this a dozen times a day?
For what, the rest of your life?
Oh, you're not going to do that?
You're just going to watch other people do it every day?
And then what, you're going to find your real life in between all of that scrolling?
What an astounding waste of time that was.
But the social consequences of our spending time and attention this way are well worth talking about.
And the question of whether it's possible to build a social network that is genuinely good for us is a very important one.
And those are among the topics of today's podcast.
But I want you to keep a few distinctions in mind, because there's been an extraordinary amount of misinformation spread about what I think about free speech and content moderation and censorship online.
So I just want to put a few clear landmarks in view.
The first is that I absolutely support the right of anyone, anywhere, to say almost anything.
I don't think people should be jailed for bad opinions. So, for instance, I don't think the laws against Holocaust denial that exist in certain
European countries are good. As much as I agree that it's insane and odious to deny the Holocaust,
people should be free to do it. Now, the question of whether they should be free to do it on a social media platform
must be decided by the people who own and run the platform. And here, I think people should
be generally free to create whatever platforms they want. So, Elon now owns Twitter. I think
he should be free to kick the Nazis off the platform, if that's what he wants to do. I might
not agree with his specific choices. He kicked
Kanye West off the platform for tweeting a swastika inside a Jewish star. I honestly doubt I would
have done that. I mean, can you really have a terms of service that doesn't allow for weird swastikas?
That seems impossible to enforce coherently. But the point is, I think Elon and Twitter should be
free to moderate their platform however they want.
Conversely, I think a Nazi should have been free to buy Twitter and kick all the non-Nazis off the platform.
Twitter is a company. It should be free to destroy itself and to inspire competitors.
And many people think it's in the process of doing just that.
And it remains an open and interesting question what to do when the Nazis or the semi-Nazis start using your social media platform. And similar questions arise about people who spread
misinformation or what seems to be misinformation. Where is the line between necessary debate,
which I agree we should have about things like
how to run an election or vaccine safety, but where's the line between debating these things
and simply making it impossible for people to cooperate when they really must cooperate?
For instance, after an election, when you have a sitting president lying about the results being
totally fraudulent. Or during a global
pandemic, when the healthcare systems in several countries seem on the verge of collapse. There is
a line here, and it might always be impossible to know if we're on the right side of that line.
It's simply not enough to say that sunlight is the best disinfectant, because we have built tools that give an asymmetric advantage to liars and
lunatics. We really have done that. Social media is not a level playing field. And the idea that
we are powerless to correct this problem, because any efforts we make amount to, quote, censorship,
is insane. It's childish.
It's masochistic.
And it is demonstrably harming society.
But this is a hard problem to solve, as we're about to hear.
As I said, we take the Twitter Files release as our focus,
because both Bari and Michael were involved in that release.
But the four of us speak generally about the loss of trust in institutions of media and the government.
We discuss Bari and Michael's experience of participating in the Twitter Files release,
the problem of misinformation,
the relationship between Twitter and the federal government,
Russian influence operations,
the challenges of content moderation,
Hunter Biden's infamous laptop, the need for transparency,
platforms versus publishers, Twitter's resistance to the FBI,
political bias, J.K. Rowling, the inherent subjectivity of moderation decisions,
the rise of competitive platforms, rumors versus misinformation, how Twitter attempted to control the spread of COVID misinformation, the throttling of Dr. Jay Bhattacharya, the failure of institutions to communicate COVID information well,
the risk of paternalism, abuses of power, and other topics.
And now I bring you Bari Weiss, Michael Shellenberger, and Renee DiResta.
I am here with Bari Weiss, Michael Shellenberger, and Renee DiResta.
Thanks for joining me. Thanks for having us. Thanks for having us.
As I said, I will have introduced you all properly in the beginning, but I was hoping we could have a discussion about the Twitter files and social media generally and the failures of the mainstream media and the government and other institutions to maintain public trust and perhaps the failure of them to be worthy of public trust. But I think the Twitter files is the right starting
point here, because as luck would have it, we have Bari and Michael, both of whom were part of the journalistic effort to reveal these files. Bari, let's start with you. Perhaps you can
really take it from the top and give us the high-level description of what the Twitter files are and how you came to be part of the release.
It's funny because this is one of those stories where I feel like for half of the country, it was the biggest thing that has happened in the past decade.
And the other half of the country had no idea it even existed.
And it was interesting to kind of test my family in Pittsburgh: finding out which news sources they were reading could tell you everything about
the way they viewed the story. So basically what it is, depending on how you look at it,
is Elon Musk, the new owner of Twitter, trying to, in his words, have a kind of informal truth
and reconciliation commission. He understands that the platform that
he just bought has lost a tremendous amount of trust with the public, was claiming to be one
thing and actually in secret was something quite different, was also probably, as he would frame it,
cooperating with the government in ways that would make Americans, if they knew about it,
extremely uncomfortable, was blacklisting people without their knowledge, and all kinds of other details along those lines. And so another group of people would say,
this is all about Elon Musk buying Twitter and trying to shame the previous owners of Twitter
and the previous top brass at Twitter. And really what this is all about is embarrassment and
vengeance. And where you fall on the answer to that question tells you a lot about where you stand. The project began with Matt Taibbi, who has a very popular newsletter. Then he texted me and reached out to me.
Then I reached out to Michael Shellenberger.
And then the group kind of grew from there.
It came to include journalists like Abigail Shrier, Lee Fang, Leighton Woodhouse, and
a number of other people associated with my company, The Free Press.
What was said on Twitter publicly by Elon Musk is
that we had unfettered access to all of the inner workings of Twitter, everything from emails,
private Slack messages, group Slack messages, and on and on and on. And that was sort of the
headline that was trumpeted all over Twitter and all over the press. In fact, what we had,
and Michael
can explain this probably in better detail than I can because he has a meticulous memory, we
basically were able to do highly directed searches on at most two laptops shared at times among up to
eight journalists in a room. So what we had the ability to do was to say to a lawyer working through a
laborious e-discovery tool, and it came to include two different tools, tell me everything that
happened between the following six C-suite level employees of Twitter on the dates between January
6th and January 10th, basically the dates that Trump got kicked off of Twitter. And basically over the course of a few days, it would spit back to us information.
And what came out of that was a number of stories that, depending again on how you look at it,
were either enormously important bombshell confirmations of what a lot of people in the country had thought was actually going on at Twitter, and what Twitter had denied.
And if you're on the other half of the country, and again, I'm being crude here,
it was, you know, nut-picking. It was cherry-picking. It was sort of going
searching for anecdotal stories that would confirm the political biases of the independent journalists
involved in the project. I think the really, really important thing
for people to understand,
and I think that this wasn't explained well enough
by any of those of us who were involved,
is how unbelievably laborious these searches were
and how if we had the choice,
like it's not as if we walked into a room
with organized files according to COVID,
masking, myocarditis, the election in Brazil, Modi, Israel, Palestine.
Like, then we could have really told you the comprehensive story. Instead, we had to make
some very difficult choices based on the kind of tools we were using to go looking for certain
stories where we knew the public story that had been told, and we wanted to see what had actually gone on behind the scenes. And again, in my view, you know, the story
of the decision to kick off Trump, very important story. Is it the number one story that was
interesting to me? Not at all. COVID was far more interesting to me. But I knew that if we looked at
those set of dates, that we could come out with some information that would be worthy of the public interest.
And we also knew that we're dealing with someone who is in many ways a mercurial person. Any source
that gives you information has motivation. You have no idea when their motivation or incentives
might change. And so we wanted to harvest as much information as we possibly could
in the days that we were there. Yeah, I hadn't thought to talk about these details, but now I'm
interested. So just a couple of follow-up questions. So when you would perform a search
or have someone perform a search for you, there wasn't some layer of curation from lawyers or
anyone else who was deciding what could be
released and what couldn't be released? If you said, give me every email that contains the word
Trump between certain dates, they just spit out those emails? No. One of the ways that I knew that,
Sam, I just don't know how much detail you want me to get into here, but in the first few days
that I was there with my wife, Nellie, who also works with me in building the company, Matt wasn't there. It was just the two of us, Elon Musk, and lawyers that we were communicating with over the phone. I was requesting searches on things like masks or Fauci or whatever. And what I was getting back was garbage information. It was such garbage information that, and I'm not a paranoid person, I would say Michael Shellenberger is
way more suspicious than I am in general. I'm pretty naive, but it was so bad that Nellie was
saying, this cannot be right. This cannot be right. And that's when I came to discover that
the lawyer who was actually doing the searches worked for Twitter
and was one of the people that we were hoping to do searches on, which is this guy, Jim Baker,
who became a sort of flashpoint in the story. Later on, and maybe Michael, I can hand it over
to you if you want to explain sort of the mechanics of how this worked. It can be maybe
a little boring. The reason I think it's significant is because I think it will help people understand why we did the stories we did.
Right. Right. Yeah. Michael, jump in here. What was your experience extracting information from Twitter?
Yeah. I mean, I think it's really a fun conversation. I love talking about it. And I was a little annoyed that a lot of people wrote stories about how they thought the process worked without just asking us, because we would have said so.
And I've always taken the time to explain it.
But, you know, as Bari mentioned, Bari brought me in.
I do not have a relationship with Elon Musk.
I've only criticized Elon Musk in the past.
I criticized him in Mother Jones.
I wrote about him in Apocalypse Never.
And obviously, when Bari was like, we can get access to the Twitter files, I was like, hell yeah. For me, there was no hesitation; it's a chance to go and get this incredible information. When I met Elon, he said he did not know who I was. And basically, it's just like what Bari said:
if there was any filtering or curation or removing of any emails, we saw no signs of it.
And I would be shocked, because of the size of the searches we were getting. I can just tell you some of them. We would ask for, you know, all the emails for this person over this period of dates. And we would get, you know, email boxes of 2,000 emails, 890 emails, 2,000 emails, 1,800 emails, 1,800 emails, 2,300 emails. So, I consider myself an extremely fast reader,
and I'm able to process a lot of information very quickly. It took me a very long time to
go through these emails; I couldn't see anybody having been able to do that. And then when the
emails populated in our inboxes, there was no, we never saw any
evidence that anything had been removed. I don't think anybody, I mean, I'm not saying, I can't
prove that nothing was, but I just saw no evidence for it. And I didn't see anything in Elon that
suggested that he cared about that. Although, Michael, it sounds like that cuts against what
I was understanding Bari to be saying, which was initially the search results were so crappy that you thought somebody, this nefarious lawyer, was throttling the results.
I should clarify: that was before Michael had gotten there. And as soon as Elon found out that that person was involved (he's operating at the highest level of the company; until I told him, hey, do you know that Jim Baker is the one doing the searches, he had no idea), the people involved changed, and Baker was fired. And like Michael said,
the files we got subsequent to that, there was no evidence at all that they were tampered with.
The thing I should add is one of the criticisms of the story of the Twitter
files is that we focused an inordinate amount on a person who had been at one time the head of
Twitter's trust and safety, this guy Yoel Roth. And the reason for that is that Yoel Roth was a
very loquacious person. He talked a lot on Slack and on email and in other places. So it's not as
if we weren't interested in other people. It's just that, like any story, you're looking for the person who's going to share the most
information. And he spoke openly and a lot on platforms like Slack to his colleagues.
It's not like we went in actively interested in Yoel Roth. I barely knew who
he was before I walked into Twitter. Right. Now, were either of you concerned
about the optics of this process that you would
appear to be, at least in part, doing PR for Elon rather than actually engaging in journalism that
was of a more normal sort? I mean, there were other constraints. Releases had to be done on
Twitter itself, which I think it could be argued was not the best platform for
actually discussing this, or really anything, at length. What were your concerns going into this?
And is there any residue of those concerns?
Not for me, really. I mean, for me, it was just like, we get the access to the data. People say things, but I'm just not that concerned about it.
But for instance, what I noticed before I left was that everything Elon said about what was coming and what had dropped was wrong, right?
I mean, he was lying or just delusional about what he was releasing,
you know, the level of government involvement.
But he wasn't releasing it.
Isn't that kind of the point?
But it was the frame around it.
I mean, he was saying, here it is.
And his summary of what Taibbi was saying was just not, in fact, accurate.
In fact, in the case of one of Taibbi's drops,
it was the opposite of what Taibbi said. So that didn't bother you at all?
I mean, it bothered me when he tweeted, my pronouns are Prosecute Fauci. It bothered me
when he said that thing about Yoel Roth. I told him that. I think Bari criticized Elon when he deplatformed those journalists.
I retweeted it. We don't control Elon Musk. I mean, we were invited into a smash and grab
situation to be able to look at as many emails as we could. And we're thrilled at it. And it's
super important what came out of it. But no, I mean, I'm a big Gen Xer. I'm a Breakfast Club type. I go on Tucker Carlson; I talk to Tucker Carlson. I talk to people that my family thinks are terrible. And I don't have the view that if I talk to somebody, somehow I'm legitimizing all of their views, or that if I go look for these emails, somehow I'm agreeing with Elon Musk. I've criticized Elon Musk about his policies around
solar in China. I'm not going to stop doing that. I told him exactly what I thought and have told
him exactly what I thought. And I'm just with Elon the way that I am with everybody. And
so no, I mean, people talk shit, people say things that aren't true. I'm going to have a stoic attitude about it, which is: I'm responsible for the things that I do. I'm not responsible for what other people do.
And journalists who felt those documents were in the public interest jumped to go look at them. And any source who leaks documents or leaks a story to the New York Times or the Washington Post always has an agenda; that goes without saying. I think the unusual thing in this case is that the source was public about it, and he made his agenda entirely transparent the entire time. And, you know, as Michael just mentioned, I think I well proved that I was not in the tank for anyone on this matter.
I'm just on the side of the public knowing more information and people can decide for
themselves whether or not that information was in the public interest.
I certainly think that it was.
And I frankly think a lot of people are resorting to criticizing journalistic practice or, you know, other technicalities of that sort because they don't want to confront
what the actual story is. Right. Well, I definitely want to get to the story, but Renee, I want to
bring you in here. Do you have anything to say at this point about just the process and the optics?
Yeah, it's very interesting. So I, for your audience members who probably don't know,
I started off talking about social media kind of as an activist on the outside in 2015,
moved into academia in 2019. And in the intervening time, the relationship between
platforms and government and researchers changed very significantly over those four years.
We can talk about why and how, perhaps.
I was part, you know, I'm at Stanford Internet Observatory.
We are, were, I don't know, part of something that was called the Twitter Moderation Research Consortium that I think no longer exists because everybody got laid off.
But it was a process by which Twitter could actually share
data sets with researchers. And this is relevant because all of our research was done independently.
We would receive data sets from Twitter. We would do research independently. And sometimes we would
actually go back to them and we would ask, why is this account included? Why is this? This doesn't
feel like it fits. If we're going to tell a story to the public about this Chinese network, this Iranian network, this Saudi network, this Russian network,
we want to make sure that we're doing an independent analysis and we are only going to
say what we think we can support as researchers. And what we would try to do was look at and enrich
the story with as much of a full picture as possible. So the Twitter data set was almost
a jumping off point to a significant process that would involve also looking for these accounts on
Facebook, TikTok, Twitter, YouTube, you name it. And what we would try to do was not tell an
anecdotal story, but we would always include both the qualitative, here's what these accounts are
saying, here's what we think they're doing. But we would try to include something in the way of
summary statistics.
Here's how many of them there are.
Here's the engagements they're getting.
Here's where they are situated in the public conversation relative to other accounts that
talk about these things.
And the reason for that is because one of the problems with, and a big driver in, the public conversation around content moderation, whether that's related to the kind of foreign influence campaigns or domestic activism or anything else, is that it is so anecdotal.
And so when the Twitter files began, as somebody who has worked with platform data and also testified in front of Congress critiquing platforms and their lack of transparency and who has written about that for the better part of seven years now. What has been interesting to me in the files, I think they're very interesting,
just to kind of start with that. I'm not a person who says, oh, this is all a nothing burger. This
is not interesting. But I had kind of three issues with the process. And the first was that I think
a lack of familiarity with that multi-year evolution of content moderation policy meant that for me,
as an observer, there were some of these like wet streets cause rain moments, you know,
the Gell-Mann amnesia phenomenon, where the person doesn't fully understand what is happening in
context. A specific example that I said on Twitter was one comment in which you see the Senate
Intelligence Committee engaging with Twitter, asking it if it responded to some sort of tip from the FBI. And that was very interesting to me because I had done a bunch
of work on Twitter data for the Senate Intelligence Committee in 2018. And as a researcher running that
process in 2018 with no engagement with Twitter whatsoever, what I knew was that the Senate
Intelligence Committee did not have very high confidence in Twitter's ability to find anything. So reading that interaction was fascinating to me because
they were, in my opinion, essentially saying, did you find this yourself or did somebody have
to hand it to you again? But what the reporter who wrote that thread construed that as was,
are you taking direction from the FBI and marching? So this was the kind of wet streets
cause rain experience that I had in a number of these threads where I thought, gosh, I wish that somebody who had either been there,
you know, in an abstract sense, not in the company, but who understood the evolution of that
stuff had perhaps like weighed in or been consulted. And then I think the second critique
was how anecdotal it was. And that made it feel a little bit cherry picked. And this kind of ties
into maybe point three, which is that the trust in the public confidence
in whether or not you believe in a framing around an anecdote is entirely dependent on
whether you trust the reporter or the outlet at this point.
And that's a function of polarization in American society.
It is not a critique of Bari or Michael or anybody else's thread.
It is, I think, the reality.
And so with some of the, in my
opinion, overemphasis on anecdote, and I recognize, you know, this is the process, this is what you
had available to you. What made that troubling to me is that it did feel like there were opportunities
for score settling and things or searching for things that you, you know, that a particular
reporter found problematic or wanted to dig more into, but that didn't
necessarily get at the scope and scale of the phenomenon overall. And I'll point specifically
to something like shadow banning, right? Fascinating topic. Many of us have looked
at it over the years, and I've made the argument that it's not something the platforms
shouldn't be able to do. And we can talk about why, but I do think it should be transparent.
So that's sort of where I sit on the shadow banning question. But what we didn't get was
how many users were receiving these labels, in what country, during what time period,
how many of those who received a label were mentioned in a government request.
That's absolutely kind of crucial to this question of to what extent does the government
actually exert influence over the platform? It's not simply filing a report, it's did the report lead to an action? And this is
the sort of thing, again, maybe this is my bias as, you know, as somebody in academia, where I say,
like, God, I'd really love to get my hands on the summary stats, you know, can you request those?
Can you say, like, in this moderation tool, you know, can we connect the dots here between here's the FBI over-submitting, in my opinion, litanies of accounts, really just a sort of stupid process, and then what happened next? That connecting of the dots was, in my opinion, candidly underdone. And it led to an opportunity for innuendo to drive the story, and whether or
not you believe the innuendo is entirely dependent on whether you believe or trust the outlet and the person covering the story. So in the interest of
informing the public writ large, that's where I felt like, and as Bari notes, I think we're saying the same thing, depending on which side of a political spectrum you sit on,
you either trust or do not trust at this point. I don't know that it's political spectrum so much as, like, you know, institutionalist versus populist, maybe. But that tension for me was where I felt, and I wrote this in The Atlantic, that there was a little bit of a missed opportunity there. How could we perhaps
get at more of those holistic or systemic views informing an opinion on platform moderation that are less anecdotal and less dependent on trust in a reporter's narrative?
Right. Yeah. I mean,
just to echo part of it, it's hard to capture kind of like how chaotic the situation was. I mean,
it was like getting searches back at midnight, working till three in the morning, the owner of
Twitter coming in at 12:30, wanting to schmooze. I second everything Renee's saying. Meaning on the
question of should these platforms, not just Twitter, be more transparent? Do we have a
problem with private companies that have sort of unaccountable power over the public conversation?
And to what extent are they doing the bidding of organizations like the FBI?
That's something really important that every citizen has a right to know, not just me and
Michael Shellenberger and Matt Taibbi.
But I just can't emphasize enough that the idea of going in and saying, give me a report
or a summary on XYZ, that just wasn't something that was possible while we were there.
Okay, well, let's get to what was found out or what has been found out so far. I guess as preamble,
I just want to say, I think the big story here, which is certainly beyond the case of Twitter,
is our ongoing struggles to deal with misinformation. And this is something that
Renee obviously knows a lot about. But it
seems to me that this is the kind of thing that may never be perfectly solved in the absence of
just perfect AI. And when you look at what imperfect solutions will look like, they will
always throw up both type one and type two errors. So any attempt to suppress misinformation
is going to suppress real information, and that'll be embarrassing and cause some people to be irate and to allege various conspiracies. And it also, you know, it will fail in the other way, and lots of misinformation will get through and fail to be suppressed.
And this isn't merely an engineering question. This is an ethical question. It's a political question. And even in the simplest case where we know what is true
and what matters and what we should do on the basis of these facts, and I would say we're
very rarely in that situation at this point. But even in the best case, where we know what's true, it can be very difficult to know what to do in a political environment where great masses of people believe crazy things.
It's a question of how to message the truth in hand to great numbers of people who,
as we've already said, no longer trust certain institutions or certain people and will reach
for the most sinister possible interpretation of events and anchor there. And that seems to be the
state of discourse we have on more or less everything from public health to the very
topic we're talking about. So with that as just kind of the frame around this,
perhaps Bari and Michael, either of you can start. I'd love to know what you think we have
learned so far and what has been the most interesting slash concerning facts.
I'll say maybe one thing and then kick it to Michael. I think that there are two main stories
here. Story number one is about the way that an extremely powerful tool
one that has influenced elections and led to revolutions, claimed to have a particular mission
and gaslit the public as it secretly abandoned that mission in critical ways. And it shouldn't
matter what your politics are. That, to me, is a really
important story. You don't need to go all the way and believe that Twitter's a public square to believe, as I do, that it has an enormous influence on the public conversation,
on deciding who is a hero and who is a villain, on any number of things.
The second thing that I think is the headline story is this sort of very close relationship between the federal government and one of the most powerful tools of communication in the world.
I think those are sort of like the two core stories that came out of all of the reporting.
I don't know if we're on Twitter files number 121 or whatever, but those to me are the two biggest headlines.
And which topics are you most concerned about: the messaging around COVID, or Hunter's laptop?
What do you consider to be the center of gravity here?
On the cultural side, I'll leave the government conversation to Michael because he did much more on that. To me, it's the way that Twitter actively narrowed the conversation. I don't know if we
want to get into Hunter yet, but yeah, I mean, I certainly think that when a private company
decides to lock out a newspaper doing truthful reporting weeks before an election on the spurious grounds that it was
based on hacked material as if that isn't what is printed in places like the New York Times and the
Washington Post every day. Yeah, I have a huge problem with that. But I think one of the core
things that came out of what we saw, especially in the shadow banning of people like Dr. Jay Bhattacharya, was the way that people inside Twitter actively worked to make things untouchable, to make people untouchable,
to make particular viewpoints that have turned out to be very vindicated untouchable,
and therefore profoundly shaped the public conversation about something like COVID and
the right way to respond to it. I think that is a really significant story.
Michael? I would say there are three areas. The first had to do with the crackdown
on misinformation bleeding into a crackdown on free expression, which I think you alluded to,
Sam. And I'll give one big example, which is
Facebook, under pressure from the White House, censoring accurate information, emailing the
White House to say, we are censoring this accurate information because we view it as encouraging
vaccine hesitancy. Now, they didn't exactly black it out. Rather, they repressed the spread of it.
But it is a form of censorship.
Twitter did a milder version of this with Jay Bhattacharya and with Martin Kulldorff,
who just simply said not everybody needs to get the vaccine.
And they put an interstitial on it, which is a kind of like warning thing saying official
Twitter censors say that this is not right.
That was a case where, in the case of
Kulldorff, he was expressing an opinion, and he is a Harvard medical doctor, not to stoop to
credentialism, but he certainly, I think, had a right to weigh in on that question.
And then in the case of Facebook, there was no transparency here. And I should actually pause
and just say, however much we disagree on many things, I've had the pleasure of being able
to have an ongoing conversation with Renee over the last few weeks, and we both very strongly
support transparency. I agree with Renee and others who argue that transparency would solve a lot of these problems. If Facebook had simply said, we are suppressing these views because we are encouraging vaccines, and we're going to allow this debate in some other way. There is no technical obstacle to allowing that to occur. So that's one. Number two is
the laptop. I think there is a very clear pattern of behavior. I cannot prove that there was an organized effort,
but nonetheless, I think that my thread
on the Hunter Biden laptop shows
that there was a very strange pattern.
And again, maybe it was a total coincidence
for both existing intelligence community officials
and former intelligence community officials
to pre-bunk the Hunter Biden laptop.
And we can get into the details of this, but suffice it to say, I think it merits more
investigation. I strongly support congressional investigation on it. I don't think we've gotten
to the bottom of it. I find it extremely suspicious. And I think other people do too
when they really look at it. And again, maybe I'm overly pattern recognizing here. I hold that
as a possibility, but I think there's something really interesting there that has to be talked
about more. And then the third thing is just this grotesque inflation of the threat of Russian
influence operations. It was being used as a way, as a cudgel, to basically start to deplatform, deamplify, censor, demonize, disparage, discredit
people that did not deserve that.
That's sort of what Matt talks about today.
And I thought, Renee, over email, there was an exchange about this.
But it's not just a single thing.
I mean, it was being used as justification for all sorts of things, including censoring
this laptop.
It became a kind of boogeyman.
And, you know, I think one thing I wanted to do on this podcast and say very clearly
is I do think that Yoel Roth turns out to be a more complicated character than I think he had been perceived as in the beginning.
He repeatedly would point out that various things were not violations, including the thing that Trump was deplatformed for. He said very explicitly that Trump had not violated Twitter's terms of service. And they then worked to create a
justification for deplatforming him. Same thing with the Hunter Biden laptop. They said that it
had not violated Twitter's terms of service. They were very clear on this and there were other
instances of it. Yoel Roth was then basically overruled by the people above him.
So he was a good company man. But I don't think that the demonization of Yoel Roth that had occurred
perhaps earlier in the process of looking at what happened at Twitter
was fair. But I mention him here in this context
because he was the one that was often pushing back against the abuse of this Russian influence
operation. You mean it wasn't fair when Elon branded him a pedophile in front of 120 million
people? That was overreach? No, that was obviously wrong. That was obviously wrong. Absolutely. No hesitation in denouncing
that. So Renee, feel free to say whatever you want here, but I would love to get your take on
the Russian disinformation piece too. Yeah, sure. So I think that where I come down,
and Michael and I have been emailing about so many of these issues over the last couple of weeks, I really come down in a place where I feel like there are nuanced moments here.
And as we talk about, for example, Yoel pushing back against some of the things that happened, content moderation is the story of people trying to make the best possible decision in line with a particular policy that a company has written, and then some sort of sense of even-handed enforcement. You know, so you have the policy and then the enforcement; these are sometimes two different things. The policies are written in advance; what you then have is people in the most high-stakes, volatile situations, trying to figure out what to do.
So what winds up happening on Twitter, ironically, is that all of these things are reduced down to
do you think this person is bad? Do you think that decision is bad? If you think that's bad,
obviously there was some sort of malice behind it. And that I think is a flattening of what
is actually happening. There's some interesting dynamics and uses of the word censorship that I've
been intrigued by as we have moved through the evolution of some of those policies over the last seven years.
And just to help make sure the audience understands, content moderation is not a binary, take it down, leave it up. So I'll use Facebook's terminology here. They have a framework
and they call it remove, reduce, inform. Remove means it comes down. Reduce means its distribution
is limited. And inform means a
label is put up. There is some sort of interstitial, you know, a pop-up comes up,
or there's a fact check under it, or YouTube has a little like context label down at the bottom of
the video. Sometimes it'll take you to a Wikipedia article. So in that moderation framework, remove,
reduce, inform, when something is reported, there's a policy rubric that says this thing
may violate the policy. And then the enforcement, whether to remove, reduce, or inform,
is based on some sort of internal series of thresholds. I am not an employee of these
companies. I don't know what those are. So for me, one of the interesting things about the files has
been seeing those conversations come to light. And my personal take on it, my interpretation,
has been largely that you have people trying to decide within the rubric of this policy what they should do.
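To make that remove, reduce, inform rubric concrete, here is a minimal sketch in Python. The policy names, scores, and thresholds are invented for illustration; as Renee says, the platforms' actual internal thresholds are not public.

from dataclasses import dataclass
from enum import Enum

class Action(Enum):
    REMOVE = "remove"   # content comes down entirely
    REDUCE = "reduce"   # distribution is throttled
    INFORM = "inform"   # a label, interstitial, or fact-check is attached
    NONE = "none"       # no enforcement

@dataclass
class Policy:
    name: str
    # Hypothetical internal thresholds; real platforms do not publish these.
    remove_threshold: float
    reduce_threshold: float
    inform_threshold: float

def enforce(policy: Policy, violation_score: float) -> Action:
    # Map a reviewer's confidence that content violates the policy to an action.
    if violation_score >= policy.remove_threshold:
        return Action.REMOVE
    if violation_score >= policy.reduce_threshold:
        return Action.REDUCE
    if violation_score >= policy.inform_threshold:
        return Action.INFORM
    return Action.NONE

# Example: a hacked-materials policy with made-up numbers. A borderline case
# gets "reduce" (throttled distribution) rather than a full takedown.
hacked_materials = Policy("hacked-materials", 0.9, 0.6, 0.3)
print(enforce(hacked_materials, 0.7))  # Action.REDUCE

The point of the sketch is that the policy and the enforcement are, as she says, two different things: the same written policy can yield any of the three outcomes depending on where the thresholds sit.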
So there were a couple of policies that I think are relevant to this conversation and what's just been said.
The first on the subject of the Hunter Biden laptop was the creation of a policy following a lot of what happened with the GRU.
So when we talk about Russian interference, I'll connect it to Russian interference for you. You and I spoke back in 2019 about the work that I did on a particular
data set for the Internet Research Agency. So that is the sort of troll factory. When people
think about social media interference and they think about trolls or bots, the Internet Research
Agency is what they're thinking of. But there was another component to Russian interference in the
election, which was Russian military intelligence hacking the DNC and the Clinton campaign, and then releasing files at opportune times, for example, to distract or change the public conversation to make them cover these files.
I think the first tranche, if I'm not mistaken, was dropped the day that Access Hollywood Pussygate tape came out, right?
So the media is talking about Pussygate.
All of a sudden, here's this tranche of secret documents, and the media conversation changes. So, you know, in response to things like this, and also to hacked materials more broadly,
the platform implements a hacked materials policy that says, despite the fact that again,
journalists may have a particular point of view about how to treat hacked materials,
the platform does not necessarily have to share that point of view, because sometimes hacked materials turn out to be
sex pictures or nudes that are sitting on your phone or a variety of other types of private
conversations that get dropped. So this policy covers things beyond the contents of a wayward
laptop from a presidential son. And so again, they're not writing the policy for Hunter Biden's laptop, they've written the policy, and then you see in the conversation them deciding
whether and how to enforce it. And this is where the conversations with the FBI come into play.
Again, you know, personally, I felt like the enforcement on the Hunter Biden laptop by Twitter was quite foolish. I thought this was one of those, like, the horse has left the barn situations, you know,
you're doing more, you're creating more problems for yourself by trying to censor, particularly an article,
as opposed to the contents of the laptop itself, right? That's the distinction: you can enforce your policy on hacked material by taking down the nudes that were going up and saying that violates our terms of service, without saying that the New York Post article digesting the contents of the laptop also violates the terms of service. This is where you see some of the debates about the enforcement there.
But the...
Actually, just to linger on that distinction, if I'm not mistaken, this was true when I last looked, but perhaps something has come out since.
Biden and his team never asked for the story to be suppressed on Twitter, right? Weren't they just asking for the contents of the laptop, like nude photos of Hunter Biden, to be taken down?
like nude photos of Hunter Biden to be taken down? So my understanding from when that Twitter files thread went out, I and others went to the internet archive to go see what the substance of
those tweets had been, and they were in fact nudes. Does that mean that they were all nudes?
No, because again, we have a very particular filtered anecdotal view of what happened with regard to those requests. You know,
we're told the Trump campaign requested takedowns, sorry, the Trump administration requested
takedowns, the Biden campaign requested takedowns, and then we have a list of like four or five
different tweets. And so that again, is where, depending on your framing and your perception,
this was either egregious jawboning or somebody trying to get
nudes taken down off a platform. But from what I have seen, it was the latter.
But don't we think that the scandal was the fact that Twitter locked out the New York Post?
Yeah.
Yeah. And I'm not in any way saying that I thought that that was a good decision. That
was what I meant when I said that the suppression of the article was bad.
Facebook did something different. I don't know if you remember what Facebook did at the time,
but Facebook actually used reduce. And Facebook said, we are going to throttle the distribution
of the story while we try to figure out what is going on here. Now, the question of whether that throttling is censorship is a separate one. Censorship is where we've really moved very, very far in our use of that term in the context of social media moderation. My personal feeling on that, very strongly, is that it was political. The first labeling-is-censorship articles began when Twitter began to fact-check
tweets by President Trump. It did not take them down. It did not throttle them. It put up a label,
a fact check. I think that's
counter speech and contextualization. This is my personal view on it. But we began to see a,
again, a flattening of the conversation where remove, reduce, and inform were all contextualized
as egregious overreach and censorship. And so where I come down on a lot of these questions
is I recognize the complaint. I acknowledge that things were not handled well. And I ask, what do you want done instead?
If you do not want the label, if you do not want the reduce, and if you definitely don't want the
takedown, then is the alternative simply a viral free-for-all at all times, with every unverified rumor going viral and the public being left to sort it out?
And I'm very curious about that, particularly because journalism is supposed to be about informing the public, a recognition that journalists themselves serve a filtering function,
serve a fact-checking function.
And we can debate whether that's partisan or biased or this or that.
But there is, I think, a core belief at the center
of the profession that there is such a thing as the best information that we have in this moment.
And how do we convey that in a particular information environment? That's where I think
a lot of my work has been. But I'll stop talking there because I think that the complexities of
content moderation are too often viewed as right versus left,
takedown versus leave up. They're really filtered through the context of the American culture war.
And this is a global platform trying to figure out, you know, what the hell do you do when
Modi's government requests a takedown? You know, this is the policy.
Which Elon just agreed to.
Yeah, you just, you just capitulate and hope no one notices. All right. So I just want to add
something to what you said, Renee, because it's what people are reacting to. So people are acting like they want just everything
to rip however the algorithm sees fit and any curation is nefarious. And yet we know we have
an algorithm or a set of algorithms that preferentially boost misleading and injurious
information. So the truth is always playing catch up to the most salacious lies. And if that's going
to be the status quo, there's no way you build a healthy society and a healthy politics on top of
that. So I think anyone who thinks about it for five seconds knows that they don't want that. And therefore, you have to get your hand on the wheel, at least a little bit.
And whether that hand is some other algorithm, or, you know, the actual conscious curation of
monkeys, you need to intrude on what we currently have built. And it comes back to how transparent those intrusions are and then what people make of
those efforts based on our divided politics and our tribalism. And I think that the transparency
piece is the common ground and the area where we can actually move forward. Google has an
interesting... All the platforms have transparency reports.
Most of them are aggregated stats.
They're not particularly interesting.
Google actually will say, here's a government takedown.
Here's approximately the request.
You know, here's what we received.
Here's what they asked us to do.
And then here's what we did.
It's very, you know, very one sentence summary, two sentence summaries.
But I really love that.
I think of that as like, this is a best practice.
There's the Lumen database, which does this for DMCA takedowns, which are usually companies, sometimes others, requesting takedowns related to copyright violations. Again: here is the request, here is what we did. That kind of disclosure seems like the optimal path forward, given that you cannot have a wholly moderation-free environment.
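A minimal sketch of what one entry in such a per-request disclosure log could look like, assuming hypothetical field names rather than Google's or Lumen's actual schema:

```python
from dataclasses import dataclass

@dataclass
class TakedownDisclosure:
    """One public entry in a hypothetical per-request transparency log."""
    requester: str        # who asked: a government agency, company, etc.
    date_received: str    # ISO date the request arrived
    request_summary: str  # the one- or two-sentence summary of what was asked
    action_taken: str     # what the platform actually did, if anything

# Example entry in the style described above: the request, then the outcome.
entry = TakedownDisclosure(
    requester="(a government agency)",
    date_received="2023-01-15",
    request_summary="Asked for removal of three posts alleged to violate local law.",
    action_taken="Removed one post; declined to act on the other two.",
)
```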
Every algorithm, just speaking of curation, has a weighting in some regard. There is no such thing as neutral. Even reverse chronological is a particular value judgment, because they're weighting it by time. And you can see this quite clearly now on Twitter. If you look at the For You page, I think they're calling it, For You versus Following, you see different types of things. If you go and look at the chronological feed, you will see that For You is often bait, right? It's the most outrage-inducing material: you're going to click into this, you're going to go fight with that person. That's great for the platform. The chronological feed is not necessarily as engaging. It's not necessarily going to keep you there, but it is a different mechanism for surfacing information. And so what we're ultimately talking about here is incentives. It is a system of incentives. It is a system of judgments, and that is true of algorithmic curation as well as content moderation. And I do think that the public does not actually understand the extent to which an algorithm deciding to curate and surface something shapes their behavior, shapes what they do next. Is that person being censored? Or is the real question what is being amplified? That is potentially the far more interesting question as we think about how to build a
system that vaguely mimics a public square.
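The point that there is no neutral feed can be made concrete with a sketch: any feed is a scoring function over candidate posts, and reverse chronological is just the special case where recency is the only signal. The weights and feature names below are illustrative assumptions, not any platform's actual ranking model.

```python
import time
from dataclasses import dataclass

@dataclass
class Post:
    created_at: float  # Unix timestamp
    likes: int
    replies: int
    reshares: int

def chronological_score(post: Post, now: float) -> float:
    # The "neutral" reverse-chron feed: recency is the only signal.
    # Still a weighting: 1.0 on recency, 0.0 on everything else.
    return -(now - post.created_at)

def engagement_score(post: Post, now: float) -> float:
    # A hypothetical "For You"-style scorer: interaction-heavy posts win,
    # with replies (often fights) weighted heaviest, decayed by age.
    age_hours = (now - post.created_at) / 3600.0
    engagement = 1.0 * post.likes + 3.0 * post.replies + 2.0 * post.reshares
    return engagement / (1.0 + age_hours)

def rank(posts: list, scorer) -> list:
    # Same posts, different scorer, different "public square."
    now = time.time()
    return sorted(posts, key=lambda p: scorer(p, now), reverse=True)
```

Swapping one scorer for the other reorders the same set of posts, which is the speakers' point: the choice of weighting is itself an editorial judgment.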
You know, I've run the simplest algorithm over here, which is to delete my Twitter account.
And it's impossible to exaggerate the effect it has had on my mind and life not to be on
Twitter.
I mean, it's just, I recommend it to anybody.
And when I have checked back on Twitter just to prepare for this conversation,
I am just fucking aghast that I spent that much time there. It's a mortifying glance backward over the previous 12 years. And even the good stuff is implicated; it's for the same reason I'm not on TikTok now or any of these other platforms: it would just be a time incinerator. When I look back at my engagement with Twitter, it's amazing to me. And so there's something pathological about, I think, every variant on offer. And it's not to say that it would be impossible to build a social media network to everyone's benefit, but Twitter ain't it.
And it's just very interesting to have unplugged.
I've done podcasts about Facebook without ever being on Facebook, because it's of enormous importance to society, both in what it does appropriately and what it does badly. And hence this conversation.
Michael, do you have anything you want to insert at this point?
Well, I would say it comes back to the three things I raised. The first was the need for transparency in content moderation, because there is some amount of censorship of justifiable opinions and accurate information going on. That's the big social media issue. The second really has more to do with the FBI, and whether we think it's an apolitical law enforcement organization. And the third is around this inflation of the Russia threat, which is not specific to social media at all, but which I think is extremely important. We all know it's a terribly dangerous thing to underestimate a threat. But in fact, exaggerating a threat has very serious problems associated with it too, including the ability to abuse it, which we saw in the deplatforming and deamplifying of people who were innocent; in other words, labeling accounts Russian-influenced as opposed to Russian. And I think we need to get to the bottom of the FBI issue and its treatment of the Hunter Biden laptop, and also have a real, honest conversation about this issue of Russian threat inflation.
We should also acknowledge that, you know, a lot of the misinformation is coming from sources you might think of as the least likely sources of misinformation, and that sometimes it's innocent. We all thought that if you got the vaccine, you were either not going to get sick or you were not going to transmit it. Both of those things turned out to be wrong. It seems to me that having a conversation about those edges of science is exactly what you would do on something that we call a platform. And so I think it resolves a little bit if you say: look, if you're a platform,
you have this incredible privilege, which is basically limited liability. But the flip side is that those platforms are also curating. And so you get yourself in a funny position, which is: okay, well, then how do
you resolve that? And it seems to me the resolution is this: if you're going to have
this amazing privilege to be a platform rather than a media content producer, then you must be
transparent about how you're making those decisions. And there must be a place for people to appeal if they're being censored or just throttled or reduced
or even if there's an interstitial.
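As a sketch of what that transparency-plus-appeal obligation could look like in practice, using the remove/reduce/inform taxonomy mentioned earlier (the record fields and appeal routing here are hypothetical illustrations, not any platform's actual system):

```python
from dataclasses import dataclass
from enum import Enum

class Action(Enum):
    REMOVE = "remove"              # content taken down
    REDUCE = "reduce"              # distribution throttled
    INFORM = "inform"              # label or fact-check attached
    INTERSTITIAL = "interstitial"  # click-through warning shown

@dataclass
class ModerationDecision:
    post_id: str
    action: Action
    policy_cited: str    # the specific rule the platform says was violated
    user_notified: bool  # transparency: was the user told what happened?
    appeal_open: bool = True

def file_appeal(decision: ModerationDecision, argument: str) -> dict:
    # Routes the objection to a reviewer other than the original
    # decision-maker -- the opposite of "go talk to the person that
    # censored you."
    if not decision.appeal_open:
        raise ValueError("appeal window closed for this decision")
    return {"post_id": decision.post_id, "argument": argument,
            "status": "pending independent review"}
```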
You know, I have an ongoing conflict with Facebook
about the cause of high-intensity fires in California.
I have the top scientists in California saying
high-intensity fires are
caused by fuel wood load rather than temperature changes. I am not allowed to express that on
Facebook. I have been so severely throttled by Facebook that it's basically a dead platform to
me. When I demanded an appeal to the fact-checker, they said: go talk to the person that censored you. It pisses me off. It needs to be resolved.
If you say that it's due to Jewish space lasers, you might have an open lane on Facebook.
You got some nice followers. I want to drill down on a couple of points you just made.
So it seemed to me that the story was changing in real time as the Twitter files were dropping. Initially, the story seemed to be
that the meddling on the part of Twitter's
ultra-woke employees was just horrific
and horrifically biased.
But then it seemed to shift,
and it was more of a picture of Twitter
resisting an intrusive FBI
and actually resisting fairly professionally. Or at least, I mean, they eventually caved, I guess, but they seemed to surprise some of you in how much they did, in fact, resist. And so, you know, viewing it from the outside, it seemed like the goalposts kind
of moved a little bit. Like, first, we're just meant to be astounded by
what the backroom of Slack looks like at Twitter. But now we're seeing that, no, actually,
they were quite tied in knots over following their own policies, and were just really getting worn down by the FBI's requests. How do you view it there? And then I guess the other piece I would
add is, here, we're obviously talking about Trump's FBI, right, run by a Trump appointee. So that does kind of muddy the picture of, you know, frank political bias being the reason to suppress or pre-bunk a Hunter Biden laptop scandal.
Yeah, I mean, I definitely think that my perception of what was going on changed over time. Of course, we were all only responsible for the threads that we wrote. So, you know, by the time I came in, it looked like Yoel was doing more pushing back.
At the same time, on both of the issues that I looked at and was involved in, the decision to deplatform Trump and the decision to bounce (that's the technical word) the New York Post account, Yoel and his team had decided that there was no violation, and then they reversed themselves.
Now, to some extent, you go, well, I think Renee articulated this a little bit before: the rules are evolving over time as they deal with real-world cases. So you can't be too rigid about it, as in, well, we wrote the rules and we can't change them. But there wasn't transparency about how that was happening. And hence another reason for transparency.
And in the case of the Hunter Biden laptop, I find Jim Baker's behavior extremely suspicious. He is clearly a deep anti-Trumper. He is the person that Hillary Clinton's attorney, Michael Sussmann, came to to share false information about an alleged communications channel between a Russian bank and the Trump campaign, triggering that investigation. And he appears to have been involved in reversing the conclusion by Yoel Roth's team that the laptop had not violated any of Twitter's terms of service and that the laptop appeared to be authentic rather than a product of Russian disinformation.
Again, you get to a point with this stuff where you kind of go: I've done all I can in terms of going through the data. I've made as strong a statement as I can about what appears to be a pattern of behavior, and one that appears to be organized. And now I just think it's in the hands of Congress, which needs to get to the bottom of what was going on over there. And they may never do it. But there was stuff going on, Sam, that was weird.
Like, why in the world did the Aspen Institute do a tabletop exercise with the national security reporters from the New York Times and Washington Post, and the safety officers from Facebook and Twitter, to talk about a potential Russian hack-and-leak operation involving Hunter Biden? Why did they do that in August and September? Keep in mind that the FBI had had the Hunter Biden laptop since December 2019, and that they were listening to Giuliani's calls. I find the whole thing extremely suspicious. Now, of course, Hunter Biden was in the news, and that was what Trump got impeached over, yes. And it may be that I am, again, engaging in overly eager pattern-matching, trying to recognize a pattern here that's not there. And that's possible. But
I do think there's something very strange going on that we haven't
gotten to the bottom of, and we should. And the fact that, you know, Trump had an appointee there doesn't mean that there wasn't potentially some effort by both existing and former intelligence community operatives to basically engage in a PR campaign, or what we used to call psyops, or what now gets called influence operations, to basically pre-bunk the Hunter Biden laptop, so that I personally, and my entire liberal Democratic family, and all of the Democrats I know, thought that it was Russian disinformation.
Yeah. I mean, I didn't even really know that it wasn't until pretty recently. I just sort of
assumed it was Russian disinformation. I didn't take it seriously. So I think partly I kind of go, I know how I experienced that episode, which was
to go, I don't know, sounds like it was probably Russian disinformation, and it wasn't, and they
knew at the time that it wasn't. And that seems to me quite important. Well, let's just close the
loop on that, if we can. Now, this laptop has been studied, certainly by many, many people right of center,
for many months. Is there anything there that was important? Oh my gosh. Yeah. Is there anything that
ties back to his father that is a big deal? Yes. Like what? Oh. I'm still in a bubble. I still haven't heard this story apart from a
single line in an email saying, I got to give 10% to the big guy. And we assume the big guy is...
If you'd like to continue listening to this conversation, you'll need to subscribe at samharris.org. Once you do, you'll get access to all full-length episodes of the Making Sense podcast, along with other subscriber-only content, including bonus episodes and AMAs and the conversations I've been having on the Waking Up app. The Making Sense podcast is ad-free and relies entirely on listener support, and you can subscribe now at samharris.org. Thank you.