Making Sense with Sam Harris - #220 - The Information Apocalypse
Episode Date: October 17, 2020
Sam Harris speaks with Nina Schick about the growing epidemic of misinformation and disinformation. They discuss the coming problem of “deep fakes,” the history of Russian “active measures” against the West, the weaponization of the EU migration crisis, Russian targeting of the African-American community, the future of Europe, Trump and the rise of political cynicism, QAnon, the prospect of violence surrounding the 2020 Presidential election, and other topics. If the Making Sense podcast logo in your player is BLACK, you can SUBSCRIBE to gain access to all full-length episodes at samharris.org/subscribe.
Transcript
Welcome to the Making Sense Podcast.
This is Sam Harris.
Just a note to say that if you're hearing this, you are not currently on our subscriber
feed and will only be hearing partial episodes of the podcast.
If you'd like access to full episodes, you'll need to subscribe at samharris.org.
There you'll find our private RSS feed
to add to your favorite podcatcher, along with other subscriber-only content.
And as always, I never want money to be the reason why someone can't listen to the podcast.
So if you can't afford a subscription, there's an option at SamHarris.org to request a free account.
And we grant 100% of those requests. No questions asked.
Welcome to the Making Sense Podcast. This is Sam Harris.
Okay, no housekeeping today. Today I'm speaking with Nina Schick. Nina is an author and broadcaster who specializes in how technology and artificial intelligence are reshaping society.
She has advised global leaders in many countries, including Joe Biden,
and she's a regular contributor to Bloomberg, Sky, CNN, and the BBC.
Nina speaks seven languages and holds degrees from Cambridge University and University College London.
And her new book is Deep Fakes, which explores the terrain we're about to discuss.
We talk about the epidemic of misinformation and disinformation in our society now, and the coming problem of deep fakes, which is, when you imagine it in detail, fairly alarming.
We get into the history of Russian active measures against the West,
the weaponization of the migrant crisis in Europe,
Russian targeting of the African-American community,
Trump and the rise of political cynicism,
QAnon, the prospect of violence surrounding the presidential election,
and other topics.
Anyway, this is all scary stuff,
but Nina is a great guide through this wilderness. And now I bring you Nina Schick.
I am here with Nina Schick. Nina, thank you for joining me.
Thanks for having me, Sam.
We have a lot to talk about. You have a very interesting background, which I think suggests many common interests and overlapping life trajectories. I don't think
we're going to be able to get into that, because your recent book raises so many urgent matters
that we need to talk about. But to get started here,
what is your background, personally, but also just what you're focusing on
these days that gives you an expertise on the topics we're going to talk about?
Well, it's a really interesting and crazy story, one that could only happen in the 21st century.
I'm half German and I'm half Nepalese. My father was a German criminal defense
lawyer who in the 70s decided, you know, he was going to seek spirituality and travel east and
took his car, threw in a few books and did that big journey that a lot of young people did back
in the 70s through Afghanistan, India, and then ended up in Nepal, which at this time was still this hermetic kingdom.
He fell in love with it and met my mother there, briefly, after a decade or so.
And basically, my mother came from this totally different universe.
She grew up in Nepal as a member of a community in a Himalayan tribe, and had no running water,
electricity, or shoes when she was growing up.
And because she met my
father, you know, they fell in love and they kind of decided to have us, my brother and myself.
And I grew up in Kathmandu in the 80s and the 90s. And then eventually I came to the UK
to go to university and I went to Cambridge and UCL. And my kind of discipline is really in history
and politics. I've always been fascinated by history and politics, and especially at this time when
the geopolitical sands seem to be shifting in such a dramatic way.
So my career over the last 10 years has really been working at the heart of Westminster as
a policy analyst, a journalist, and an advisor on some of the key geopolitical shifts around
the European Union.
So this includes what happened with Russia and the invasion of Ukraine in 2014, and subsequently the EU's migrant
crisis in 2015. Then, obviously, I was very tied into the work here in the UK around Brexit. I was
helping to advise the government on that in 2016. Then, of course, the election of Trump in 2016.
Then I went on to advise Emmanuel Macron's campaign, which was also interestingly hacked
by the Russians. And finally, I got to a point in 2018 where I was working with the former NATO
Secretary General, and he convened a group of global leaders, which included Joe Biden. And he wanted to look at how the 2020
election might be impacted by what we had seen in 2016, and how the new kind of threats were
emerging. And this is really where I came to deepfakes. And that is really the starting point
for my book. So I have this background in geopolitics, politics,
information warfare, and my area of interest is really how the exponential changes in technology,
and particularly in AI, are rewriting not only politics, but society at large as well.
So yeah, you are a citizen of the world. I mean, that's quite amazing. Did you grow up speaking Nepali and German?
Yeah, I mean, I grew up with four languages. So Nepali, German, Tamang, because my mother is from an ethnic minority group in Nepal,
which actually is closely related to Tibetans. So Tamang is a completely different language. So
Nepali, German, Tamang, and Hindi, because everybody in Nepal
speaks Hindi. India is the big brother on the border. So that was something I wish I could
give my daughter as well. I live in the UK now, and most people in the UK, we speak English.
That's it.
Yeah, all too well, I can hear. So your English betrays none of that colorful
backstory. It's quite amazing.
So yeah, I know we have common interests in the kinds of things that brought your father
to Nepal in the first place, and meditation and forming a philosophy of life that is aimed
at deeper levels of well-being than is often attained by people.
But we have such a colossal mess to clean up
in our society now with how our information ecosystem has been polluted and deranged that
I think we're just going to do another podcast on the happy talk of what we could share when we
get past these increasingly terrifying dangers and self-inflicted wounds.
I mean, it's amazing to see how much of this is our own doing.
And we'll talk about bad actors and people who are consciously using our technology against
us to really destroy the possibility of living in an open society.
But so much of this is a matter of our entertaining ourselves into a kind of collective madness and what seems like it could
be a coming social collapse. I realize that for anyone in the audience who isn't in touch with
these trends, this kind of language coming from me or anyone else can sound
hyperbolic, but we're really going over some kind of precipice here with respect to our ability to
understand what's going on in the world and to converge on a common picture of a shared reality,
because we're in the midst of an information war, and it's being waged against democratic societies by adversaries like Russia and China.
But it's also a civil war that's being waged by factions within our society.
And there are various political cults, and then there's the President of the United States himself.
All of this is happening on the back of and facilitating an utter collapse of trust in institutions and a global decline in democracy.
And again, we've built the very tools of our derangement ourselves.
And in particular, I'm talking about social media here.
So your book goes into this and it's organized around this new piece of technology that we
call deep fakes.
And the book is Deepfakes: The Coming Infocalypse, and "infocalypse" is not your coinage.
On the page, it's very easy to parse.
When you say it, it's hard to understand what's being said there.
But really, you're talking about an information apocalypse.
Just remind people what deep fakes are and suggest what's at stake
here in terms of how difficult it could be to make sense of our world in the presence of this
technology.
Yes, absolutely. So a deepfake is a type of synthetic media. And what synthetic
media essentially is, is any type of media. It can be an image, it can be a video, it can be a text
that is generated by AI. And this ability of AI to generate fake or synthetic media is really,
really nascent. We're only at the very, very beginning of the synthetic media revolution.
It was only probably in about the last four or five years that this has been possible.
And it's only in the last two years that we've been seeing the real-world applications of
this leak out beyond the AI research community.
So the first thing to say about synthetic media is that it is completely going to transform
how we perceive the world. Because in future, all media is going to be
synthetic because it means that anybody can create content to a degree of fidelity that is only
possible for Hollywood studios right now, right? And they can do this for little to no cost, using apps or software, various interfaces, which will make it so
accessible to anyone. Another reason why synthetic
media is so interesting is that until now, with the best kind of computer effects, CGI, you still can't
quite get humans right. So when you use CGI to do effects where you're trying to create
realistic humans, it still doesn't look quite right. It's called, you know, the uncanny valley. But it turns out
that AI, when you train your machine learning systems with enough data, they're really, really
good at generating fake or synthetic humans. I mean, when it comes to generating fake human
faces, so still images, AI has already perfected that. And if you want to kind of test
that, you can go and look at thispersondoesnotexist.com. Every time you refresh the page,
you'll see a new human face that, to the human eye, to you or me, Sam, will look
like an authentic human. Whereas that is just something that's generated by AI; that human literally doesn't exist. And also now increasingly
in other types of media, like audio and film. So I could take essentially a clip of a recording
with you, Sam, and I could use that to train my machine learning system. And then I can synthesize your voice. So I can literally hijack your biometrics. I can take your voice, synthesize it,
get my AI kind of machine learning system to recreate that. I can do the same with your
digital likeness. Obviously, this is going to have tremendous commercial applications. Entire
industries are going to be transformed. For example,
corporate communications, advertising, the future of all movies, video games. But this is also the most potent form of mis- and disinformation, which is being democratized for almost anyone in
the world at a time when our information ecosystem has already become
increasingly dangerous and corrupt. So the first thing I'd say about synthetic media
is it is actually just heralding this tremendous revolution in the way that we communicate.
The second thing I'd say is that it's coming at a time when we've had lots of changes in our
information ecosystem over the past 30 years that society hasn't been able to keep up with, from the internet to social media to
smartphones.
And this is just the next step in that.
And then the final thing, this is where I come to deep fakes, is that this field is
still so nascent and emerging that the taxonomy around it is still completely undecided.
And as I already kind of pointed out or touched upon, there will be
legitimate use cases for synthetic media. And this is one of the reasons why this cat is out of the
bag. There's no way we're putting it back in, because there's already so much investment in the
commercial use cases. I think there are almost 200 companies now that are working
exclusively on generating synthetic media. So we have to
distinguish between the legitimate use cases of synthetic media and how we draw the line.
So, painting with a very broad brush in my book, I say that the use and intent behind synthetic media really matters
in how we define it. So I refer to a deepfake as a piece of synthetic media that is used as mis- or disinformation.
And, you know, there is so much more that you could delve into there with regards to the kind
of the ethical implications and the taxonomy. But broadly speaking, that's how I define it.
And that's my distinction between synthetic media and deepfakes.
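To make the generation side concrete: sites like thispersondoesnotexist.com are built on generative adversarial networks (GANs), in which a generator network maps random latent vectors to images while a discriminator network learns to tell generated images from real photographs; training them against each other is what drives the fakes toward photorealism. Below is a minimal, illustrative PyTorch sketch of that generator/discriminator pairing. It is a toy sized for 64x64 images, not the StyleGAN-class model behind the actual site, and all layer sizes here are invented for illustration.

import torch
import torch.nn as nn

LATENT_DIM = 128  # size of the random "seed" vector behind each fake face

# Generator: maps a random latent vector to a 64x64 RGB image.
class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(LATENT_DIM, 256, 4, 1, 0), nn.ReLU(),  # -> 4x4
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.ReLU(),         # -> 8x8
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),          # -> 16x16
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),           # -> 32x32
            nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Tanh(),            # -> 64x64
        )

    def forward(self, z):
        return self.net(z.view(z.size(0), LATENT_DIM, 1, 1))

# Discriminator: scores whether an image looks real or generated; during
# training the generator learns to fool it, which is the adversarial game.
class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, 2, 1), nn.LeakyReLU(0.2),    # -> 32x32
            nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2),  # -> 16x16
            nn.Conv2d(128, 256, 4, 2, 1), nn.LeakyReLU(0.2), # -> 8x8
            nn.Flatten(),
            nn.Linear(256 * 8 * 8, 1),  # raw logit, for BCEWithLogitsLoss
        )

    def forward(self, x):
        return self.net(x)

# Every fresh latent vector yields a new face. Untrained weights produce
# noise; after adversarial training, each sample is a person who
# "literally doesn't exist".
g = Generator()
fake_faces = g(torch.randn(4, LATENT_DIM))
print(fake_faces.shape)  # torch.Size([4, 3, 64, 64])

The point of the sketch is the mechanism: the "identity" of each fake face is nothing but a random vector, which is why refreshing the page always yields someone new.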
Hmm. Well, so as you point out, all of this would be good, clean fun if it weren't something that state actors and people
internal to various states are going to leverage to further divide society from itself and increase
political polarization. But it's amazing that it is so promising in the fun department that we can't possibly even contemplate putting this cat back in the bag. I mean, it's just, that's the problem we're seeing on all fronts. I mean, so it is with social media. So it is with the ad revenue model that is selecting for so many of its harmful effects. I mean, we just can't break the spell wherein people want the cheapest, most fun media, and they want it endlessly. And yet the harms that
are accruing are so large that it's amazing just to see that there's no handhold here whereby we
can resist our slide toward the precipice. Just to underscore how quickly this
technology is developing, in your book you point out what happened once Martin Scorsese released
his film The Irishman, which had this exceedingly expensive and laborious process of trying to
de-age its principal actors, Robert De Niro and Joe Pesci, and that was met
with something like derision for the imperfection of what was achieved there. Again, at great cost,
and then very, very quickly, someone on YouTube using free software did a nearly perfect de-aging of the same film. It's just amazing what's
happening here. And again, these tools are going to be free, right? I mean, they're already free,
and ultimately, the best tools will be free.
Absolutely. So you already have various kinds
of software platforms online. So the barriers to entry have come down tremendously.
Right now, if you wanted to make a convincing deepfake video, you would still need to have
some knowledge, some knowledge of machine learning, but you wouldn't have to be an AI expert by any
means. But already now we have apps that allow people to do certain things like swap their faces
into scenes. For example,
Reface. I don't know if you've come across that app. I don't know how old your children are,
but if you have a teenager, you've probably come across it. You can basically put your own face
into a popular scene from a film like Titanic or something. This is using the power of synthetic
media. But experts who I speak to on the generation side,
because it's so hugely exciting to people who are generating synthetic media, think that
by the end of the decade, any YouTuber, any teenager will have the ability to create
special effects in film that are better than anything a Hollywood studio can do now.
And that's really why I put that anecdote about the Irishman into the book, because it just
demonstrates the power of synthetic media. I mean, Scorsese was working on this project from
2015. He filmed with a special three-camera rig. He had the best special effects artists,
post-production work, a multi-million dollar budget, and still the effect
at the end wasn't that convincing. It didn't look quite right. And now one YouTuber, with free software,
takes a clip from Scorsese's film in 2020. So Scorsese's film came out in 2019, and this year
he can already create something that, when you look at it, looks far more realistic than what Scorsese did.
This is just in the realm of video. As I already mentioned, with images,
it can already do it perfectly. There is also the case of audio. A lot of the kind of early pieces of synthetic media have sprung up on YouTube, and
there's a YouTuber called Vocal Synthesis, who uses an open-sourced AI model
trained on celebrities' voices. So, something that he's done that's gotten many,
many views on YouTube is he's literally taken audio clips of dead presidents and then made them rap NWA's "Fuck tha Police", right?
Ronald Reagan, FDR. Very interesting. This is an indicator of how complex these challenges are
going to be to navigate in future, because another thing that he did was he took Jay-Z's voice and made him recite Shakespeare's "To be or not to be".
And interestingly, Jay-Z's record label filed a copyright infringement claim against him and
made him kind of take it down. But this is really just a forebear of the kind of battles we're going
to see when any anonymous user can take your likeness, can take your
biometrics and make you say or do things that you never did. And of course, this is disastrous to
any liberal democratic model, because in a world where anything can be faked, everyone becomes a
target. But even more than that, if anything can be faked, including evidence that we today see
as an extension of our own reality, and I say evidence in quotation marks, video, film, audio,
then everything can also be denied. So the very basis of what is reality starts to become
corroded. Of course, reality itself remains. It's just that our perception of reality
starts to become increasingly clouded. So what are we going to do about this? Again,
we're going to get into all of the evidence of just how aggressively this will be used,
given everything else that's been happening in our world. We'll talk about Russia and Trump and
QAnon and other problems here, but many of us can dimly remember, what now feels like 20 years ago, before COVID, when the Billy Bush audio tape dropped and Trump
sort of attempted to deny that the audio of him on the bus was real, but we were not yet in the presence of such widespread
use of deepfake technology that anyone was even tempted to believe him. We knew the audio was real.
Now, apparently it didn't matter, given how corrupted our sense of everything had become by that point politically. But we can foresee the
resort to claims of fakery that will be relied upon by everyone and anyone who is committed to
lying, because there'll be so much of it around that really it will only be charitable to extend
the benefit of the doubt to people who say,
listen, that wasn't me. That's just a perfect simulacrum of my voice and even my face. But
you actually can't believe your eyes and ears at this point. I would never say such a thing.
In any of your conversations with experts on this topic, are any of them hopeful that we will be able to figure out how to put a watermark on
digital media in such a way that we will understand its provenance and be able to
get to ground truth when it matters?
So I think the problem of what we do about it is so huge that
ultimately we can only fight the corroding information ecosystem by building
society-wide resilience. But the solutions, if you want to term it that way, broadly fit
into two categories. The first are the kind of technical solutions. So because synthetic media
is going to become ubiquitous, and we as humans will not be able to discern because of the fidelity, the quality,
whether it's real or fake. So you can't rely on digital forensics in the sense that somebody
goes through and clicks and looks at each piece of media and decides, oh, are the eyes blinking correctly?
Do the ears look a little bit blurred? Because these are what we do now, right? Because the
generation side of synthetic media is
still so nascent. So we're not going to be able to do that. Second, the sheer volume, when you talk
about the scale at which you can generate synthetic media, means that humans are never
going to be able to go through it all, never going to be able to fact check each piece of media. So we have to rely on building the AI software to detect,
for example, deep fakes. And right now there is an interest, and increasingly there are certain
experts and groups who are putting money into being able to detect deep fakes. However,
the problem is, because of the adversarial nature of the AI and the way that it's trained, every time you build a detector that's good enough, it can be used to train the fakes to evade it. Also, given the various different models and ways in which the fakes can be generated,
there's never going to be a one-size-fits-all model.
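To illustrate that evasion dynamic: a deepfake detector is, at bottom, a differentiable classifier, so a forger who can query its gradients can nudge a fake, pixel by pixel, until the detector passes it. The sketch below, in the same illustrative PyTorch style as before, uses a stand-in untrained detector; no real detection product is being modeled, and the network shapes are invented for the example.

import torch
import torch.nn as nn

# Stand-in detector: any differentiable real-vs-fake classifier.
detector = nn.Sequential(
    nn.Flatten(),
    nn.Linear(3 * 64 * 64, 128), nn.ReLU(),
    nn.Linear(128, 1),  # higher logit = "looks real" to this detector
)

# Stand-in for a generated frame the detector might currently flag.
fake = torch.rand(1, 3, 64, 64, requires_grad=True)

# Adversarial evasion loop: ascend the detector's "real" score on the
# fake itself, which is the arms-race dynamic described above.
opt = torch.optim.Adam([fake], lr=0.01)
for _ in range(200):
    opt.zero_grad()
    loss = -detector(fake).mean()  # maximize the "real" logit
    loss.backward()
    opt.step()
    with torch.no_grad():
        fake.clamp_(0.0, 1.0)  # keep pixel values in a valid range

print(detector(fake).item())  # the "real" logit has been driven up

Each improvement in the detector hands the forger a new training signal, which is part of why, as noted above, there is unlikely to be a one-size-fits-all detection model.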
There's a hypothetical question, which is still open in the AI research community,
about whether or not the fakes can become so sophisticated.
We already know they're going to fool humans;
they already basically do. But is there a point where the fakes become so sophisticated that even
an AI detector can never detect in the DNA of that fake that it's actually a piece of synthetic
media? We don't know yet is the answer to that. But I will say that there is far more research going into the generation side, because like
so much in terms of the information ecosystem, the architecture of the information ecosystem
and the information age, it has been driven by this almost utopian, flawed vision of
these technologies serving as an unmitigated good for humanity, without thinking about how
they might amplify the worst sides of human intention as well. The second side, and you touched upon that,
is building provenance architecture into the information ecosystem. So basically embedding
right into the hardware of devices, whether that's a camera or a mobile phone, an authenticity watermark to prove that that piece
of media is authentic. You can track it throughout its life to show that it hasn't been tampered with
or edited. And this is something that, for example, Adobe is working on with partners
through its Content Authenticity Initiative. So there are technical solutions underway, both in terms of the
detection and the provenance side of the problem. However, ultimately, this is a human problem to
the extent that disinformation or bad information didn't just come about at the turn of the millennium. It's just that we have never
seen it at this scale. We have never seen it this potent, and we have never, ever
had it as accessible as it is now. So ultimately, this is a human problem. There's
no way we can deal with the challenges of our corroding information ecosystem without talking about human, quote unquote, solutions. How do we prepare society for this new reality? And
we are way behind. We're always reactive. Our reactions are always piecemeal. And the
biggest problem is the information ecosystem has become corrupt to the extent that we can't
even identify what the real risks are, right? We're too busy fighting
each other about other things without seeing what the real existential risk is here.
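To make the provenance idea above concrete: the core mechanism is to hash a piece of media at the moment of capture and sign that hash with a key held by the device, so that any later edit breaks verification. Here is a minimal sketch using Ed25519 signatures from the Python cryptography package. The in-memory keypair and placeholder bytes are assumptions for illustration; real schemes, such as the Content Authenticity Initiative's, layer certificates, edit histories, and secure key storage on top of this primitive.

import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# In a real scheme this keypair would live in the camera's secure hardware.
device_key = Ed25519PrivateKey.generate()
public_key = device_key.public_key()

def sign_capture(media_bytes: bytes) -> bytes:
    # At capture time: hash the media and sign the digest.
    digest = hashlib.sha256(media_bytes).digest()
    return device_key.sign(digest)

def verify_provenance(media_bytes: bytes, signature: bytes) -> bool:
    # Later, anywhere in the media's life: re-hash and check the signature.
    digest = hashlib.sha256(media_bytes).digest()
    try:
        public_key.verify(signature, digest)
        return True
    except InvalidSignature:
        return False

original = b"...raw image bytes from the sensor..."  # placeholder content
signature = sign_capture(original)

print(verify_provenance(original, signature))            # True: untouched
print(verify_provenance(original + b"edit", signature))  # False: tampered

The design choice worth noticing: detection asks "does this look fake?", which degrades as fakes improve, while provenance asks "can this prove it is real?", which does not.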
Yeah, yeah. I mean, that is itself a symptom of the problem, the fact that we can't even agree
on the nature of the problem. There's so much disinformation in the air. It makes me think that
one solution to part of the problem, I don't think it captures all of it, but certainly some of the most pressing cases, would be reliable lie detection. If a piece of audio came out that purported to be a, you know,
part of my podcast where I said something, you know, reputation-canceling, and I said, well,
that's a fake, that wasn't me, the only way to resolve that would be to tell whether I'm lying
or not. We're forcing ourselves into a position where it's going to be a kind of emergency not to be
able to tell with real confidence whether or not somebody is lying. So I think we're going to,
in addition to the arms race between deep fakes and deep fake identifying AI, I think this could
inspire a lie detection arms race because there's so many other reasons why we would want to be able to detect people who are lying.
Having just watched the presidential and vice presidential debates in America, one could see the utility of having a red light go off over someone's head whenever he or she knows that he or she is lying. But if we can't trust people, and we can't trust the evidence of
our senses when we have media of them saying and doing things convincingly delivered to us
in torrents, it's hard to see how we don't drift off into some horrifically dystopian
dream world of our own confection.
Absolutely. And this is really why, you know, I wrote the book.
I wrote it in a way that is very accessible, for anyone to pick up and zoom through in an afternoon.
Because I think we need this conceptual framework, where we can connect everything
from Russian disinformation to the increasingly partisan political divide in the
United States, but also around the rest of the Western world, and understand how now,
with the age of synthetic media upon us, our entire perception of the world is going to be changed in a way that is completely unprecedented.
How we can be manipulated in the age of information, where we had assumed that once we have access to
this much information, you know, surely progress is inevitable, but to actually understand
how the information ecosystem itself has become corrupt, I think, is the first step.
And to be honest with you, I do tend to think that things will probably get worse before
they get better.
And I think the US election is a great case study of that, because it's almost no matter
the outcome, right?
Let's say that Trump loses and he loses by a large margin.
You know that he could still refuse to go, even if the Secret Service will come and, you know,
take his bags and ask him, please, Mr. Trump, there's the door. He has this influence now where
a lot of his followers genuinely believe that he is, you know, this kind of savior of America.
And if he asks them to take arms and take to the streets, I mean, this is literally already
happening right now, right? You have armed insurrectionist militias kind of patrolling the
streets of the United States on both the left and the right for their political grievances.
So if Biden wins, let's say Trump goes quietly and Biden wins, well,
then you still haven't addressed the bigger problem of the infocalypse, where the information
ecosystem has become so corrupt and so corroded, and the synthetic media revolution is still upon
us. So I'm hopeful that we still have time to address this,, like I said, this technology is so nascent. We can still
try to take some kind of action in terms of what's the ethical framework, how are we going to
adjudicate the use of synthetic media, how can we digitally educate the public about the risks of
synthetic media, but it is a ticking time bomb and the window is short.
As if to underscore your last point, at the time we're speaking here, there's a headline
now circulating that 13 men were just arrested, including seven members of a right-wing militia
plotting to kidnap the Democratic governor of Michigan, Gretchen Whitmer, for the purposes
of inciting a civil war. One can only imagine the kind of
information diet of these militia members, but this is the kind of thing that gets engineered
by crazy information and pseudo facts being spread on social media. And this is the kind of thing
that, even when delivered by a mainstream news channel, one now has to pause and wonder whether or not it's even true,
because there's been such a breakdown of trust in journalism, and there's so many cries of fake news,
both cynical and increasingly real, that it's just, we're just dealing with a circumstance of
such informational pollution. Let's talk about
Russia's role in all of this, because Russia has a history of prosecuting what they call active
measures against us. And we really have, for a long time, been in the midst of an information
war, which is essentially a psychological war. And Russia is increasingly expert at exploiting the divisions in our
society, especially racial divisions. So maybe you can summarize some of this history.
Yeah, I mean, I start my book with Russia because my career intersected a lot with
what Russia was doing in Ukraine in 2014. And the kind of information war
they fought around the annexation of Crimea and eastern Ukraine, where they basically denied that
it was happening at all. And the same with the shooting down of MH17. This was the Malaysian
aircraft that was shot down over eastern Ukraine, which now has been proven to have been by Russian
military services.
But at the time, they were saying this had nothing to do with them, and that this was
pro-Russian Ukrainian separatists who had shot down the airliner.
So what Russia did with information warfare around Ukraine, Crimea, around Europe in 2015,
when Putin and Assad stepped up their bombardment of civilians in Syria,
unleashing this mass migration, which basically led to the EU's migrant crisis five years ago.
I don't know if you remember those images of people just arriving at the shores, you know,
and some of them were refugees, but as we now know, a lot of them were also terrorists,
economic migrants, and how that almost tore Europe apart and the information war that Russia
fought around those events where they perpetrated these stories about, for example,
girls in Germany who had supposedly been raped by arriving migrants.
And stories like this legitimately did happen, but this story was completely planted. So it's,
you know, blurring the line between what's real and fake.
But what was also very interesting for me was that I worked on, or I studied and I worked on
the Russian information operations around
the US election in 2016.
And the first thing to say about that is, to me, we can see how corrupt the information
ecosystem has become to the extent that those information operations have become a completely
partisan event in America, right?
Some people say that Russia is behind everything, and others deny
that Russia did anything at all. And this is just nonsense. For sure, the Russians intervened in the
2016 election, and they continue to intervene in US politics to this day. And I suppose what was
very interesting to me about what Russia was doing was how this information warfare strategy,
which is old, and it goes all the way back to the Cold War, was becoming increasingly potent
with the weapons of this modern information ecosystem. And one of those was social media.
What they did in Ukraine and then Europe around the migrant crisis and then around the US election
was influence operations on social media, where they actually posed, in the case of the United
States, as authentic Americans. And then they, over years, by the way, this wasn't just them
getting involved in the weeks running up to the election. They started their influence operations
in the United States in
2013. They built up these tribal communities on social media and, well, basically played
identity politics, built up their pride in their distinct identity. And interestingly, this wasn't
just Russians targeting, you know, right wing kind of Trump supporters. They did
it across the political spectrum. And as a matter of fact, they disproportionately focused on the
African American community. So they built these fake groups, pages, communities, where they
imbued members with pride in their distinct identity. And then as we got closer
to the election, those groups were then sporadically injected with lots of political
grievances, some of them legitimate, to make these groups feel alienated from the mainstream.
And again, the primary focus of their influence operations on social media was the African
American community who they were basically targeting so that they felt so disenfranchised and disconnected from Hillary,
America at large, that they wouldn't go and vote in the election, right? And what has happened now,
four years later, is that those operations are still ongoing, but they've become far more
sophisticated. So in 2016, it might have been a troll farm
in St. Petersburg. But in 2020, one operation earlier this year, which was revealed
through a joint investigation by CNN, Twitter, and Facebook, was that the Russian agency
in charge of the social media operations, the Internet Research Agency, or IRA, had basically outsourced its work to Ghana.
They had set up what looked ostensibly like a legitimate human rights organization.
They had hired employees in Ghana, real authentic Ghanaians, and then told them, you know, you're
going to have to kind of post and build these groups and communities. And here are basically the same memes, the same ideas that they had used in 2016, which
they were basically recycling in 2020. So I start with Russia, because what is really
interesting is that their strategy of information warfare is actually a phenomenon where
they flood the zone with a lot of information, bad information, across the political spectrum.
So they're not just targeting, you know, Trump voters, for example. And this chaotic, bad
information has the effect of what's called censorship through
noise. So this chaotic, bad information overload gets to the point where we can't make decisions
in our own interest of protecting ourselves, our country, our community. And that very spirit of
information warfare has come to characterize the entire information ecosystem. I mean, I start with
Russia, I map out how their tactics are far more potent, but you cannot talk about the corrosion
of the information ecosystem without recognizing that the same chaotic spirit has come to imbue
our homegrown debate as well. So I actually think, you know, of course, the Russians are
intervening in the US election in 2020. What's also very interesting is that other rogue and
authoritarian states around the world are looking at what Russia is doing and copying them. China
is becoming more like Russia. But this is also happening at home. And arguably, the domestic disinformation, misinformation, and information disorder is
far more harmful than anything that foreign actors are doing.
Yeah, I want to cover some of that ground again, because it's easy not to understand
at first pass just how sinister and insidious this all is. The fact that we can't agree as a society that
Russia interfered in the 2016 presidential election is one of the greatest triumphs of
the Russian interference in our information ecosystem. The fact that you have people on the left over-ascribing
causality to Russian influence, and you have people on the right denying any
interference in the first place, and the fact that each side can sleep soundly at
night convinced that the other side is totally wrong, that is itself a symptom
of how polluted our information space has become. It's a kind of
singularity on the landscape where everything is now falling into it, and it's happening based on
the dynamics you just sketched out, where, if you mingle lies of any size and consequence with enough truths and half-truths or, you know, background facts that
suggest a plausibility to these lies, or at least make it impossible to ascertain what's true,
it leads to a kind of epistemological breakdown and a cynicism that is the goal of this entire
enterprise. It's not merely to misinform people, which is to say have them
believe things that are false. It is to break people's commitment to being informed at all,
because they realize how hopeless it is. And so we all just tune out and go about our lives
being manipulated to who knows what end. So, you know, some of the history which you go through
in your book relates to the fact that, you know, long ago, long before they had any tools really to work with, you know, certainly didn't have social media, the Russians planted the story that AIDS was essentially a bioweapon cooked up in a U.S. lab, you know, with the purpose of performing a genocide on the black community. And they targeted the black community with this lie.
And to this day, a disproportionate number of people in the black community in the U.S.
believe that AIDS was made in a lab for the purpose of wiping out black people.
But the reason why that was so clever is because it has an air of plausibility to it given the history of the Tuskegee experiments,
the syphilis experiments, where African Americans who had syphilis were studied and not given the
cure even once the cure, penicillin, emerged. They were then, you know, studied to the end of their
lives with what amounted to the ethical equivalent of the Nazi cold water experiments, trying to see the effects of tertiary syphilis on people.
It was an absolutely appalling history.
And it's in the context of that history that you can make up new allegations that should seem patently insane, they're so evil, but they don't seem
patently insane given the points of contact to a surrounding reality that is fact-based.
And so it is with the current leveraging of identity politics in the U.S. where they create Black Lives Matter Facebook groups that are fake
and they can, you know, I think there was one protest in Times Square that had like 5,000 or
10,000 people show up and it was completely fake. I mean, the organizers were fake, you know,
they were Russians. There was no man on the ground who was actually a real leader of this thing. And people went to this protest, never realizing that they were characters in somebody's dreamscape.
Absolutely. This is why it is so dastardly. And as you pointed out, the Russians or even the
Soviets going back to the Cold War very quickly identified that race relations is a sore point for the United States. And they abused that to great effect. And Operation Infektion,
the lie that you already correctly pointed out, that the CIA invented the HIV virus as a way to
kill African Americans, was something that in the 1980s took about 10 years to go viral.
But when it did, oh boy, did it grab a hold of the imagination, to the extent that it still
poses a challenge when you're trying to deal with HIV public health policy today, where you have
communities, African American communities, who disproportionately believe that the HIV virus is somehow connected
to a government plan to commit a genocide. And in 2016, I suppose what happened is that
the strategy was the same, right? We want to play identity politics. We want to hit the United
States where it hurts. We know that race is the dividing factor. But in 2016, it became so much
more powerful, because Operation Infektion, the HIV lie, was a single lie. Whereas in 2016,
and what's happening in 2020, is numerous groups, communities, pages, where it's not only about
spreading one lie, but it's actually about entrenching tribal divisions, entrenching identity politics.
And in the context of what's happened in 2020, very interestingly, one of the other kinds of
information operations that has come out, that has been exposed, unsurprisingly,
given your interest, Sam, in kind of the culture wars and wokeness, is that a lot of unemployed American
journalists who had lost their jobs due to COVID were now working for a kind of social
justice-oriented left-wing news network in favor of BLM.
And it turned out that actually that entire network was fabricated and the Russians were
behind it. So these unwitting Americans who genuinely have good intentions are being co-opted into something
that is actually being run by Russian intelligence. And I suppose with our information ecosystem
right now, it's so much easier to actually infiltrate public life in the United States in a way that wouldn't have been possible in the 1980s.
So we don't even know.
Well, we're starting to see the impact of these operations on society.
That's not to say that, you know, the Russians created the problems with race.
Of course not.
But do they exploit them?
Absolutely. And are other countries, other
rogue and authoritarian nation-states, seeking to do the same? Absolutely. Russia is the best at this
kind of information warfare, but other countries are learning quickly. And what's been really
interesting for me to watch is, for example, how China has taken an aggressive new interest in pursuing similar disinformation
campaigns in Western information spaces. This was something that they didn't do until about
last year when the protests started in Hong Kong, and then obviously this year with COVID.
I think you say in your book that Russian television, RT, is the most watched news channel on YouTube.
Yes, it is.
So this is another example to me of how quick they were to recognize the architecture of this new information ecosystem, right, which developed around the turn of the millennium, that's characterized by the internet and social media.
If you'd like to continue listening to this podcast,
you'll need to subscribe at samharris.org.
You'll get access to all full-length episodes of the Making Sense podcast and to other subscriber-only content,
including bonus episodes and AMAs
and the conversations I've been having on the Waking Up app.
The Making Sense podcast is ad-free and relies entirely on listener support.
And you can subscribe now at SamHarris.org.