Angry Planet - Free Speech Vs. the Big Lie
Episode Date: January 13, 2021

It finally happened. Twitter finally suspended Trump's account. After the Capitol riots on January 6, tech companies such as Google, Apple, and Twitter took the unprecedented step of invoking their... First Amendment rights and removing several users from their various platforms. I just want to remind everyone that the 1958 Supreme Court decision in NAACP v. Alabama outlined an implied part of the First Amendment: that of freedom of association. Is this big tech censorship? Are private companies silencing conservative voices or merely deciding who they want to do business with? Is Twitter the new town square or just another site among a sea of websites?

Here to help us figure some of this out is Dr. Alexi Drew. Drew is a Postdoctoral Research Associate at the Policy Institute at King's College London. On Monday she published the policy paper "Disinformation Kills. Now what are we going to do about it?"

Recorded 1/12/21.

Angry Planet has a Substack! Join the Information War to get weekly insights into our angry planet and hear more conversations about a world in conflict: https://angryplanet.substack.com/subscribe

You can listen to Angry Planet on iTunes, Stitcher, Google Play, or follow our RSS directly. Our website is angryplanetpod.com. You can reach us on our Facebook page: https://www.facebook.com/angryplanetpodcast/; and on Twitter: @angryplanetpod.

Support this show: http://supporter.acast.com/warcollege
Transcript
The difference between how we do content moderation outside of China and how it's done in China is that in the US and outside, private companies can print what they want and they have ability to remove what they want. In China, they have to put on the content that they're told to.
One day, all of the facts in about 30 years' time will be published.
When genocide has been carried out in this country, almost with impunity,
and when it is near completion, people talk about intervention.
They will be met with fire, fury, and frankly, power,
the likes of which this world has never seen before.
Welcome to Angry Planet. I'm Matthew Gault.
And I'm Jason Fields.
Well, it finally happened. Twitter has finally suspended Trump's account. After the Capitol riots on January 6, tech companies such as Google, Apple, and Twitter took the unprecedented step of invoking their First Amendment rights and removing several users from their various platforms. And I just want to remind everyone that the 1958 Supreme Court decision in NAACP versus Alabama outlined an implied part of the First Amendment: that of freedom of association. But is this big tech censorship? Are private companies silencing conservative voices or merely deciding who they want to do business with? And what part does disinformation, and the very real consequences we saw play out on January 6th, play into that? Is Twitter the new town square or just another site among a sea of websites? Well, here to help us figure out
some of this is Dr. Alexi Drew. Drew is a postdoctoral research associate at the Policy Institute
at King's College London. And on Monday, she published the policy paper with the eye-grabbing headline "Disinformation Kills. Now what are we going to do about it?" Doctor, thank you so much for joining us.
It's a pleasure to be here, Matthew, and I'm hoping I can live up to that large promise of trying to answer what is a ridiculously complex set of questions.
I wanted to have you on because we've been having this conversation broadly as a culture for a while now. But I feel like there's been this fight between different, maybe not even political, camps in America specifically, where we have one side that I feel fundamentally doesn't think that online matters or translates into real-world issues. And I think that you've got another side that absolutely knows that it does. And we saw the fallout of those two different arguments. And one side, I think, was proved irrevocably to be in the right on Wednesday. This stuff, disinformation, leads to real-world consequences. That's what happened, right?
Yeah, I wrote it in, I think, the closing paragraph of the article that I published earlier in the week: my other main area of research is
cyber, and in the cyber field, we spent years trying to ascribe a human casualty to a cyber
attack, and there are varying arguments for and against the level that cyber can cause an actual
harm to a person. And the closest I think we've got was an incident in Germany last year, where a person who was in a hospital for medical treatment died while that hospital was under a ransomware attack. And that's the closest we've got. And yet on the sixth of this month, we have an
incident where, as of now, five people have died, arguably with a much closer causal chain to
disinformation and conspiracy theories online than any incident of cyber attack has caused physical harm
to a person. And I think we've had those same arguments of whether the digital can influence the physical in both of these spaces, but one of them now shows even more certainly that there is a crossover from the digital to the physical. And we should have answered the question beforehand, quite honestly, like, what do we do about it? And we haven't. I'm hoping that these awful circumstances and what we've seen in the last week finally hand us the impetus to really start asking what are complicated and difficult questions for a liberal democracy to ask, and then do something about it.
It's interesting to me, though, that we are dealing with this stuff now, after we had the Arab Spring, for example. And the Arab Spring, the whole concept was people were coming out in flash mobs, and it was thanks to Facebook, and I think Twitter was around then.
Oh yeah, it's a long time ago.
And I think it's interesting that we didn't come up with a policy till now.
See, I think one of the
problems of that is that there's a unique differentiation between the diverse coverage of these
platforms and the target of policy and action that these platforms take. And I try to always differentiate between different platforms because they are very different in approach, but this is a broadly accurate response. In almost every case, the most policed, the most well-thought-out, and the most enforced policy decisions and technical changes happen within the United States and Europe, not elsewhere. We can see the massive changes around the Arab Spring. There's a lot of academic
debate about to what extent social media and the internet influenced the Arab Spring. But setting that aside, there is undeniably a component. And even with those obvious lessons, there were almost no real changes to policy, platform technology, or user interface as a result of the lessons that we could have learned from that.
The only real big changes, technical, UI, and policy, have come from lessons that have been learned
within the European or the US context.
And it's pretty obvious to see why, because the US is the larger user base, but it's also the home of these companies.
They don't really do things outside their home base, and they haven't shown much sign of that recently. And that, again, is problematic. When you have people criticizing, you've removed Donald Trump for inciting violence, but what about Duterte in the Philippines? What about Ayatollah Khamenei? What about all these other people? Yeah, the people raising that critique are not inaccurate.
Yeah, it's spot on. Even when Donald Trump Jr. is like, how dare you remove such and such when these people are, yeah, he's right. I hate to say it, but he's not wrong.
Trump should have been removed earlier, in my opinion,
because, yes, you've been inciting violence on an international level for four years,
to a greater and lesser extent.
But you can't enact that policy in one place and not do it elsewhere.
That undermines your argument for doing it for free speech or for public good,
or content moderation for the greater good,
because it's really easy then for those who are already on the fence or further off it to go,
but why one rule for them and one rule for us?
And I just want to throw out some data points here since we're talking about them.
These are the things that I was thinking about after Wednesday: the Arab Spring, I think, is a good one, but also the Rohingya genocide, which has largely been fueled and organized in Facebook groups in Myanmar.
The 2013 attack on the Westgate Mall in Kenya, I believe, and there was another similar attack in 2008.
I can't remember exactly where.
But I think the striking thing about the 2008 and 2013 attacks is that you had people using Twitter to coordinate relief efforts, and also al-Shabaab watching those Twitter feeds and using them to pinpoint where to go.
Yeah.
So you have these communication platforms that, especially if you look outside of America, have real-world consequences.
Is there a way to responsibly police or moderate these services and still allow a free flow of information?
And do we have any idea what that looks like?
See, this is the thing. To answer the last question first, do we have any idea what that looks like? I would say that we have certainly not reached the ideal. There is no precedent set for the ideal body or institutional group that would do that existing right now. In terms of those who are further ahead in the advancement of governance and moderation,
Europe has certainly led the way, but then the European approach has always been that it should
be governments and institutions that decide the public good and what content is appropriate and
what is not, and the levels at which content moderation should be managed. It's ironic that you've now got people quoting Merkel saying she's not sure she agrees with the removal of Trump from the platform, but she did also state that it's the platform's responsibility to ensure that political speech doesn't cause harm and isn't full of lies. So there's a balance
there. I think the big issue we have is one of trust. And I think that's what it really boils down
to, is that there are really two parties to which we would normally ascribe trust in
moderating and deciding the balance between freedoms and responsibilities and rights and
responsibilities. Either we give it to government because they are the duly elected officials that
we have instilled the power of our democracies into to act on our behalf, or we give it to
private individuals because they understand the technology as their platform. It's a private
enterprise. And that almost summarizes perfectly the two varying approaches between Europe and the
United States, I point out as well. The problem, I think, is that there are significant divides in our society along the lines of who trusts whom. There are lots of people who would trust the government, but the government isn't the one doing it. There are lots of people who wouldn't trust the government but would trust companies, but the companies aren't necessarily the ones doing it either. I think what we actually need is a much more multilateral, joined-up approach where all parties involved should have a say, at the very least, in deciding the breadth of moderation that we will allow, and the topics and content that moderation is allowed to cover. And then we should really listen to these platforms about what is achievable
and how to technically do it. And we should have academics involved. We should have civil society
groups involved because trust is the fundamental thing that has to be in here, trust and
transparency. Without it, it's too easy to undermine.
In this country, and I don't know about Europe, I'm sure you do: when the airwaves first became available, when radio started like a hundred years ago or more, there was this whole concept of the public good. We're not going to give you airwaves to broadcast on unless you give over time to news and other services that would actually help the public. So I'm just thinking that regulation isn't new to free speech or working together. So I don't know, it doesn't seem so crazy to me that there'd be some regulation on Twitter. Or what do you think? Do you think that's free speech or do you think that's too limiting?
I think it's a really difficult one because as I used to teach my students,
and if we go back to the political philosophies that kind of underpin liberal democracies,
liberalism, or the standards that we operate from today, it's very easy to get hung up on purely
the freedoms and rights that come with it, but forget the responsibilities that come
on balance with those.
And in this discussion, so if you look at Europe, these platforms are held responsible,
legally, already. There are governance structures in place. Germany has a law in effect which means social media platforms have to remove flagged content within 24 hours. If they don't, they get fined. So it's not something that they don't face in other contexts. It's simply that, in the nature of the system in the US, that isn't applied. It's been left to the companies to do what they believe is right on their platforms, where in other settings, other governance structures have been applied and they've had to act on those. It's one of the reasons why, I believe, one of the larger outposts of Facebook moderators is actually in Germany: because of this very law, they've had to scale up to meet their obligations. And likewise, in the UK,
if we broaden further than social media in terms of governance and the balancing of freedom of speech, we have Ofcom, the Office of Communications, and this sort of thing. They are responsible for managing standards of media, of all types, that go out on public broadcast. They are the
means for arbitration of false stories or misprints in news media or standards and quality
of broadcast media on television or radio. And there are governance structures that are there
in place already to ensure that harm is minimized. And the public good component, again, is what's
here. And I think what's lacking, or what's the problematic component here is that what works for us
doesn't necessarily work for the United States. And what works, even within Europe, what works for
Germany does not work elsewhere. My students were often amazed when I said that in Germany, all Nazi content is illegal, anything. And they're like, what do you mean? That's freedom of speech. I'm like, no, that's not about freedom; it's about responsibility. The Germans simply accept the fact that they have a responsibility to manage speech in a manner that will not result in the harm that they've seen speech cause before in their own past. And that, I think, is a powerful lesson that the Germans have learned very painfully and are loath to fall back into.
But again, that same lesson is not being translated in the same governance and legal structures in other parts of Europe.
You can look at the resurgence of neo-Nazism in Russia.
Again, they don't have the same controls and rules, and arguably, if they had, they wouldn't be facing the same issue they do now.
And the concept of freedom of speech as an ultimate, inalienable, and unlimited right varies across Europe, and there are different governance platforms based upon that variance. Saying Europe does this, therefore the US can, is not necessarily the case,
which is why I think that actually a more joined-up approach across all sorts of institutions and platforms coming together to decide what is right within each regional context is a better approach.
Are you talking about like an international governing body of some kind or like an international moderators association?
I think there should be.
The problem with that, and this is important, is that there will be base standards.
Like I believe we have a human rights convention, right?
There are some things that we understand are a basic thing that should be protected.
And that's a starting point, but not all of the values or all of the breadth decisions
about what should and should not be moderated will be the same from one region to another.
And that's a hard thing for a proponent of liberalism and democracy and left-leaning
or left-of-center politics to grasp because it's a difficult decision.
It's recognizing that some people have different values, and therefore you can't enforce your values through an international moderation system on their social media platforms. There can be a base level. You're more than welcome to try and export your values through reasonable discussion and the standard means of soft power, but you can't enforce it. You can't just go, this is it, because that isn't liberalism. That isn't left-wing politics of any type. That's moderation bordering too far towards censorship, and it doesn't reflect the fact that we are not one society with no differences. There are
variations between us.
And I think, Matthew, just my thought is, can you imagine the United States signing up to an international accord of any kind? We won't even join the International Criminal Court.
No. If it's about posting, absolutely not. Yeah, or the landmine ban. God forbid we even look at the nuclear ban that was just passed.
You're right. That's just going to be a non-starter in this country. And yet we have an outsized, I think, probably an outsized influence on the way the rest of the world's internet works, right?
Definitely. Europe has long played its hand by going, look, if we create government structures here and legal frameworks here, we can change them elsewhere, because companies have to do business in this large bloc. And that has undeniably worked. Look at GDPR, which, you know, is it California that now has a kind of similar, or was it Florida, one of the others, I can't remember, a kind of similar data protection thing going on at the state level. And all of Europe now has this single concept of data privacy and data protection, of individual ownership. And that means that for any company, even a global company, sometimes it's cheaper for them to enact those standards across their entire business rather than have to have two different types of data management to work within this GDPR framework. Now, that's Europe. If the US did it, or anything in this kind of space, considering the outsized leverage they have on the kind of service-level layer of the internet, AWS being a great example in this particular context, that could have significant
global impact. All right, Angry Planet listeners, we're going to pause there for a break. We will be
right back.
Welcome back, Angry Planet listeners. Thank you for sticking with us. We do now have the whole concept of the private space hosting. You're talking about AWS, which has just banned Parler, which was the alternative, the conservative alternative, I would say the reality alternative, to Twitter. And the argument that's being made, I wonder what
you think about this, is that if a private company won't provide a platform for hate, somehow they're infringing on free speech. They've hurt someone's free speech rights. So if I'm right, the owner of Parler used the town square kind of example.
It's a really old concept of a public service good.
Like, people should be able to say what they want in the town square without repercussion or limitation, because it's a public space. It's a freedom. It's an implicit concept.
We have Speaker's Corner in the UK, for example, where you can just go and shout out whatever you want about politics. It's a long-standing tradition, and there are actually quite complex traditions around it.
And I can see the appeal of this, but the cynic in me says that's a great metaphor that doesn't work. Like, when I go and shout at Speaker's Corner, I'm not also starting a fire in the Parliament building or inciting people to do so. And it's not likely that they will, even if I do try and
convince them to do it. Whereas some users, and particularly those with the larger
followings and legitimacy outside of the platform, like political figures, them doing that
would result in that, even if they were in the town square in a physical space. If you really want to, look at Trump's speech prior to the actual invasion of Capitol Hill on the sixth, right? What gets me is that if we take that metaphor physically, as they're trying to say we should, I can guarantee you that if someone stood in the
middle of their town square and incited a mob to go and burn down the village, then someone
should probably have stopped them before they got there. And yes, of all the kinds of moderation that we've seen come out of the last four days, the AWS example, the censorship at the service level of the internet, is the one which bears the most problematic component to me. If there is any group or type of private industry that should probably have a closer relationship with government to decide when this is appropriate, it is at that level. Because it's on a different level even to Google and Apple going, look, you just can't download from our store anymore. Because in Google's case, you can sideload an APK, and that doesn't matter, you can still get it. You're not actually completely cutting it off.
But at the AWS level, that's it.
It's gone.
There's no recourse.
There's no alternative.
That's the equivalent of blackbagging the person on the town square and walking off
with them and kicking over the footstool and rubbing out all trace.
Now, that I think is something that should not be.
And this comes back to the trust component.
That is not something that should be a private company's sole decision.
Now, it's not illegal.
We're not denying access to the backbone of the internet.
We're denying them access to private servers.
I'm not sure I believe this, but if there's a business model for these websites, why can't they start up their own servers? Come on.
This is the problem with it: we've reached the point where these kinds of services require such a high level of technical investment to make them operate at the scale they need to that there is essentially nowhere else for them to go.
And that's the problem here.
So what's happened is that the censorship and moderation component of our problem has butted up against the problem of technical dominance and monopoly.
And what that has created is this space in the middle where, don't get me wrong, I completely agree with the removal of Parler by AWS.
But that does mean that because there is no other alternative,
Parler will never be able to compete on the same level.
Yeah, they could make their own servers.
Sure.
It will take them a long time.
It will be much more expensive.
Also, there are a lot of other components to the cost of setting up these kinds of things that people most often kind of brush over. The benefit of AWS as a service provider is that you don't actually need that great a level of technical skill to get that kind of service going, because a lot of it is workable straight out of the box. Video streaming can be provided on demand as well, without actually having to build that infrastructure yourself.
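[Editor's note: to make that "out of the box" point concrete, here is a minimal sketch, an editorial illustration rather than anything Parler actually ran, of storing a video and serving it on demand through AWS's S3 service with the boto3 Python library. The bucket and file names are hypothetical; replicating even this much from scratch means buying storage, building redundancy, and writing the delivery layer yourself, which is the cost being described here.]

```python
# Minimal sketch: hosting and serving a video clip with a managed provider.
# Hypothetical bucket/file names; assumes AWS credentials are already configured.
import boto3

s3 = boto3.client("s3")

# Upload the clip once; S3 handles storage, durability, and scaling.
s3.upload_file("clip.mp4", "example-video-bucket", "clip.mp4")

# Hand out a time-limited playback URL with no servers of your own to run.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "example-video-bucket", "Key": "clip.mp4"},
    ExpiresIn=3600,  # link valid for one hour
)
print(url)
```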
Now, however, if we take Parler, they're rebuilding from the ground up, which it appears they are, because so many of their service providers are cancelling on them, all these little components. They're going to have to find the expertise to do that.
Now, good luck competing financially with that
because the expertise can charge quite a lot of money for one.
But again, we've seen that some of these platforms can bring in a lot of cash.
Okay, maybe they do that. But they've got to find people ideologically
willing to work on that platform too. And tech, big tech, particularly, despite what they might
like to say or what the impression of their company as a whole looks like, has a very strong
ideological leaning away from this kind of position. And I would suggest that even if they can find
the money, they will struggle to find the expertise and the human expertise to actually build
this technology that they need to actually keep the platform going.
We've already seen the fallout from them not having the technical expertise.
Yeah, I've seen some great examples. I think my favorite was their phone verification service that they had a free trial on. But when it was pointed out to the owner of that company that Parler was using a free trial, they just canceled it. And suddenly you could verify a phone number of 1-2-3-4-5-6-7-8-9. And that's fine. You can have an account now. Amazing.
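[Editor's note: the failure described here is a classic "fail open" bug: when the verification provider disappears, the check passes instead of refusing. A hypothetical sketch of the pattern in Python; Parler's actual code was never public.]

```python
# Hypothetical sketch of a "fail open" verification bug; not Parler's real code.
def verify_phone(number: str, code: str, provider=None) -> bool:
    if provider is None:
        # The SMS provider is gone (e.g. a cancelled free trial).
        # Failing open means any number, even "123456789", now verifies.
        return True
    return provider.check(number, code)

# Safer: fail closed when verification cannot actually be performed.
def verify_phone_safely(number: str, code: str, provider=None) -> bool:
    if provider is None:
        raise RuntimeError("verification unavailable; rejecting signup")
    return provider.check(number, code)
```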
Yeah. And just to put a little bit more context on this: if you wanted to be verified on the service, in the way that you can get verified on Twitter, you had to provide them with a driver's license and, I believe, your social security number.
Yeah, front and back of the driver's license and your social security number, all of which is now part of the 80 terabytes or so of data that was logged.
Yep. That 80 terabytes of data was unsecured, has made its way onto the web, and has been, to my understanding, instrumental for the federal authorities scraping that data to find people who were at the Capitol on the 6th.
Yes, most of the videos automatically store GPS data. So that whole individual data privacy thing rears its ugly head again, and you go, hey, hang on a minute, guys, you were complaining about data ownership and that sort of thing.
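[Editor's note: for readers unfamiliar with how that tracking works, cameras and phones routinely embed GPS coordinates in a photo or video's metadata, and pulling them out takes only a few lines of code. A minimal sketch using Python's Pillow library; the file name is hypothetical.]

```python
# Minimal sketch: reading embedded GPS coordinates from a photo's EXIF metadata.
# The file name is hypothetical; Parler's media carried similar location tags.
from PIL import Image
from PIL.ExifTags import TAGS, GPSTAGS

def extract_gps(path):
    exif = Image.open(path)._getexif() or {}
    # Locate the GPSInfo block among the EXIF tags.
    gps = next((v for tag, v in exif.items() if TAGS.get(tag) == "GPSInfo"), None)
    if gps is None:
        return None
    # Translate numeric GPS sub-tag IDs into readable names.
    return {GPSTAGS.get(tag, tag): value for tag, value in gps.items()}

print(extract_gps("photo.jpg"))
# e.g. {'GPSLatitude': (...), 'GPSLongitude': (...), 'GPSLatitudeRef': 'N', ...}
```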
So I've been a long-time user of Telegram, and one of the great features about Telegram is you can preview a group before you join it. It's one of my favorite things. As a researcher, it's fantastic. So I previewed the Parler Lifeboat chat last night, and it was quite an experience. I did not need my Netflix subscription yesterday evening. All of my entertainment was in one place, scrolling past at a ridiculous pace. One of the long-running complaints was, this platform is so fast, I can't keep up with what's coming in, because you've got 7,000 members just throwing out mostly either conspiracy theories about the change to Trump's term-of-office dates on the Senate website, which was really driving them wild last night, or you were seeing people trying to advertise, you know, their own competing groups to the one that these people are in. And what really struck me
is that, you know, there's a lot of research been done into what happens when you de-platform radicalization pathways or accounts. And there's been a lot of talk about, oh, this is just platform whack-a-mole, what are we going to do next, they're just going to go somewhere else. Yeah, the diehard ones will, but every time you do that, there's a kind of opportunity cost, an engagement cost, for those who are on the edges to keep it up, and they'll fall off as you go, leaving a smaller and smaller amount of that hardcore group that sticks at it.
But it does have an impact.
It's funny you say that, because on the sixth, and then, was it the seventh that Twitter finally banned him? I think it was late on. I wanted to get on the website The Donald. Not the Reddit sub, the one that appeared after Reddit banned it.
Yes, exactly.
And I jumped on there, and like, I went through the Cloudflare thing, and then there was a captcha, and I was like, I don't care that much. I'm not going to sit here and click through all these. I'm not going to tell you how many boats I see just so I can go and monitor these people. I'm just not going to do it. So I'm sure there are other people, slightly more motivated than me, that are also turned off by that kind of thing.
Yeah, when it gets harder to use, people walk away.
The message for this chat was essentially how to depersonalize Telegram, like what settings you needed to change so you couldn't be personally identified and tracked down through Telegram. So this is a lot of people coming from Parler, who've seen this 80-terabyte thing, going, shit, what do we do? I can be tracked by my GPS coordinates. So they're all going, okay, I need to use Telegram.
It's more private, great.
And one of the very first things on this list is do not use your first name or photograph.
But the amount of accounts that are just first name, last name, here's a photograph of me and my girlfriend. And I'm there going, you are one reverse image search away from me knowing exactly where you live.
And what's the point?
And it's that level, that component, that I think really will, over time, mean that more and more accounts drop off, because there's an added cost. Every time you go for more privacy, you give up utility in terms of what you can get out of an app, but you also generally have to do more to be private. And I don't think the vast majority of these members are going to be willing to do that.
Something else I've been thinking about is, do the conservatives, are they starting to have the right of it? Because it feels like they're complaining about censorship. They say these big tech companies have too much power, which I actually do agree with. But are they then arguing themselves into the position where, as you're saying, we need to bring in more government regulation? Do you think that kind of naturally flows? Sorry, go ahead.
This is the thing that gets me. The irony here is that these are people that have always been about small government, generally speaking, talking Tea Party through its evolution to QAnon, et cetera. It's like, small government. We don't want government controlling all these minute parts of our life. It should just be the bare minimum, if that. It's difficult when you try to explain to them, where do you think roads come from? Like the amount of times I, living in the UK as a vague Labour supporter, have been told, oh, it's awful because Obama is a socialist. I'm like, what kind of socialism are you talking about? Because socialism over here is a real different beast, if that's what's going on.
But, yeah, the issue here, I think, is that the obvious answer is governance, is government involvement.
Now, I would argue it should be government involvement plus civil institutions and civil society groups, which I think is obviously a step beyond what most laymen would consider, because those groups, they're not even aware they exist, let alone the benefit they bring.
But the only other potential option is this continual lean towards the correct, and I say that in air quotes, the correct kind of private business. So Parler was the correct kind because they were conservative and already ideologically aligned with what these people wanted. So they were not only providing the service they wanted, but they were ideologically suited to what they wanted.
But the reality is there aren't many companies like that.
And there are certainly not many companies like that providing the services they want.
I just had a thought, and some people are going to think I'm arguing against myself, which is fair, but I don't know that I have an argument here. What we're talking about, what you're talking about, is also about defining reality. It's not simply a matter of your opinion, of what you want to say, but the information that you're surrounded by: the only information you get if you're on a Parler or, frankly, on a Twitter, if Twitter is going to move toward a really liberal point of view. Either one is going to define your reality. What an incredible power to give either the government or private industry.
This is why, in terms of
forms of content moderation, I've always leaned towards the idea that rather than the removal of content, particularly from political figures, it should be the correction of content. Not removal, and not a direct editorial line; it should be the presentation of: is this false or is this true? So when Twitter started labeling Donald Trump's tweets, I think that was a good thing. I don't think they were clear enough in their language. I think there's too much kind of sitting on the fence when you're going, oh, these election results are certified, et cetera, et cetera. No, call it a lie if it's a lie.
But the other thing to bear in mind is, I don't think it should be Twitter doing the fact-checking.
There are plenty of independent fact-checking organizations that have good practice,
that have experience, and that have legitimacy and connections to be able to carry out
the kind of research required to actually make sure that the information we're seeing,
whatever its ideological starting point, is accurate.
The facts behind it are accurate or not.
But different platforms have taken a different approach to how they
do fact-checking and who they get to do it. So Facebook does use independent fact-checkers,
but they can't fact-check political speech from politicians, which is ridiculous. Twitter
doesn't. They do it in-house. But I think independent fact-checking is an important way of
dealing with this. Censorship or de-platforming, as we've seen with Donald Trump, should be the very last step in a kind of escalation, a process towards content moderation of extreme danger or risk.
It should be there as an option, but it should not be the first one.
And if it's done, it should be done transparently and equally to everyone coming
into that same kind of remit of rules.
Otherwise, it has limited legitimacy.
And that's the problem again, trust.
But if we can no longer agree on what facts are, if we're dealing with the big lie, in the classical sense, in the Goebbels sense of the big lie, how do you find a fact-checker that all sides can agree is generating facts?
Yeah, I think that in this sense, one of the most important things I think to do is actually to try and educate on how fact checking works and why it's done.
I've seen a lot of people, when doing this kind of research and the work that I do, who are very quick to go, well, yeah, but how did you fact-check this? Did you just Google it? Or did you just copy this from here? Or who are you working for? Who paid you? So make it really clear who did what. There are plenty of examples. If you look outside of the US context, there's India Check, for example, in India. I think there's Africa Check as well. There are lots of small organizations in particular regions. And all of these, many of them in fact, have signed up to a single network of fact-checking organizations that has set standards: where does your money come from? What's your approach? How are you funded? How do you explain each thing? And I also
think perhaps one way around this problem would be not to have one single fact-checker checking each fact; it should be a series of them. And then, if you really want to question it, you can look at how those were scored in more detail. So Facebook, I believe, allows you to do this on some posts, where you can see how something was scored by the different fact-checking organizations that worked on this particular issue.
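[Editor's note: as an illustration of that "series of fact-checkers" idea, here is a toy sketch in Python of surfacing a consensus verdict alongside the per-checker detail. The organization names and ratings are hypothetical, not any platform's real scoring system.]

```python
# Toy sketch: aggregating verdicts from several independent fact-checkers
# while keeping the per-organization detail inspectable. Names are hypothetical.
from collections import Counter

verdicts = {
    "CheckerA": "false",
    "CheckerB": "false",
    "CheckerC": "misleading",
}

def aggregate(verdicts):
    tally = Counter(verdicts.values())
    consensus, votes = tally.most_common(1)[0]
    return {
        "consensus": consensus,              # most common rating
        "agreement": votes / len(verdicts),  # how unanimous the checkers were
        "detail": verdicts,                  # who scored it, and how
    }

print(aggregate(verdicts))
# {'consensus': 'false', 'agreement': 0.666..., 'detail': {...}}
```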
But fundamentally, I think the problem we have here is that there is no short-term solution. There's no UI change or quick change to an algorithm which is going to solve the big lie problem, because that, for me, comes down to education, sadly,
and that is not something you change overnight.
If you want to look at places that have done better,
these are places that have had digital literacy
and critical thinking skills as part of their education systems for decades.
And funnily enough, they don't find themselves in the same place that we do now.
Yeah, I think that's a big part of it.
I think we all have to take a step back sometimes and remember how new and revolutionary this new communication technology is. The amount of religious war and violence that followed the invention of the printing press, I think, is instructive. So it may take generations before we figure all this stuff out and figure
out what works and how to balance moderation and free speech.
I love the fact that you brought up the printing press, because it comes back to one of my favorite of my own personal research projects when I was a kid. I was given the option of writing a coursework piece for my A-levels, which is just before college level for your American listeners, where you could essentially write on whatever you wanted. And I had just read this book on a kind of alternative history of Dracula and Vlad Tepes, who was the basis for him. And I decided I wanted to do an actual history paper on the reality of the man behind the myth.
And there was so little history done on this guy.
But one of the things that 17-, 18-year-old me found really fascinating was one of the reasons why Vlad's myth had lasted for so long, even to the point where Bram Stoker came across him to be the eponymous Dracula. Yes, he'd done some awful things, but then most rulers in the 1400s did some pretty terrible stuff. But essentially one of his vassals decided they really didn't like Vlad and wanted to ruin his name for all of Europe. This printing press thing had come along, and so what he did is he printed thousands of pamphlets about the awful things that Vlad Tepes had been doing in Wallachia during his reign and spread them all through Europe, to the point that they were only matched or beaten in their commonality, among the public who could read, by the Bible. And it's therefore completely unsurprising that hundreds of years later, Bram Stoker comes along and goes,
this awful man, he'll be perfect. And I think that is a very similar problem we now face. We don't have hundreds of years to work out what damage or what issues can be caused by this new form of technology. The awful reality is that we have had time. We have been having these discussions
for quite some time, but it seems that the impetus has not been there. The realisation
of the actual risk posed has not actually percolated through to those who should be going,
yeah, these are difficult questions.
We should actually answer them and then do something.
What they've done is gone, yeah, these are difficult questions.
I'm sure we can put that off and just take our time and work on it.
We don't have time, quite frankly.
The time is now.
The time was 10 years ago.
But we didn't do it then.
We definitely need to do it now.
Well, the cynic in me tells me that Jack Dorsey and Mark Zuckerberg probably understood that Trump and people like Trump are good for business.
I had a lot of journalists ask me this week: did platforms do enough? And I come back to something I said earlier, that all platforms are not equal. I think some have done a lot more. Some have been better at recognizing the responsibility that comes with the platforms they have created for themselves and the wealth that has come from that. Others have realized it less so, or later on. But one of the things that
really struck me, particularly from Facebook's point of view, is that they released this statement of, these are the changes we are making to how our groups now operate as a result of what happened
on the sixth. So things like actively moderating hate content and incitement in group posts. Great, fantastic. But you've turned the dial that says how much is acceptable in our groups; you've had that capability all along, then. Why now? Why do it now? What changed? The answer is everyone else started going, we need to do something, and it became this kind of one-upmanship of who can show that they've done the best and the most. And don't get me wrong, I agree with the outcome. Like, I find it hilarious that places like Shopify have banned Trump's T-shirt stores, et cetera. That's great,
but it shouldn't have been done as a result of potentially attempted insurrection in the United States of America.
It should have been done far earlier or something should have been done far earlier.
And that, for me, more than anything else, says that this shouldn't be a series of independent decisions by vying platforms in a market space.
This should have been all platforms coming together and going, this is our problem.
It's not just Facebook.
It's not just Twitter. It's not just Reddit.
It's not just Wikipedia.
There's a range of different platforms here.
It's all of our problem.
And we should work together on a single effort, on a single drive, to do what is best for our users.
And I don't think they've done that.
All right.
I have three more questions.
An argument I often hear from my libertarian-leaning friends is that we can't moderate this stuff too heavily and we shouldn't de-platform people, in part because if you've got the Nazis out in the open, you can monitor the Nazis. What say you to that?
Yeah, there's a long
standing thing, from a law enforcement perspective, that the benefit of open social media platforms is that people do things in the open, and that means you can see what they're doing. It's easier to trace. But to take the opposite perspective from your libertarian friends: from my acquaintances in the security services, I can guarantee you that they can still find them. It's not anywhere near as difficult as you might think. Don't get me wrong, there are all sorts of complicated questions here between privacy and security, a longstanding issue that's definitely not going to go away. But I would not rank the concern of, you know, lack of exposure and the ability to secure against threat highly when compared to the other risks that I think disinformation and the non-moderation of content pose.
It's there, undeniably, but it wouldn't be up there for me.
As we are entering into this new world where we are talking about moderation more, how do we avoid ending up in a situation like China or Russia, where you can't talk about Tiananmen Square, where there are things that just aren't done, and where you have to make sure that your servers are located on government property, as in Russia, this kind of thing?
How do we avoid all of that?
This is where I come back to what I talked about earlier: this kind of conglomerate of different interest groups involved in not only how we do content moderation, but what its breadth is, what the goals are. This should be a joining of equals. It shouldn't be that government has final and total say to overrule the lot; this should be a balanced joining of equals.
The difference between how we do content moderation outside of China and how it's done in China
is that in the US and outside, private companies can print what they want
and they have ability to remove what they want.
In China, they have to put on the content that they're told to.
That's the difference.
We're talking content moderation here.
Our form of content moderation is the removal of stuff.
In China, content moderation is the production and spreading of it.
The censorship happens in your brain before your hands even touch the keyboard.
Unfortunately, this is a thing which is becoming increasingly exported.
There was a great piece by PEN America a little while ago, talking about the censorship of Hollywood as a result of Chinese involvement in that market.
And the thing they were talking about was not necessarily direct censorship from China,
but it was the self-censorship going on where studios were preemptively changing scripts
or not putting things in scripts and removing things for fear of them not being able to access
the markets in China, and paying for advisors from China to come in and go, look, what do you think? Is this okay? Or are we going to get... So there's a great example. I don't know if you saw the film Abominable. It was a DreamWorks production a little while ago. There's a really brief moment in it where there's a map shown that the kids are looking at, and it shows China's South China Sea claim extending far further than it's actually meant to, and the Philippines banned the film as a result.
Yes, there is a concern, and we do need to be aware that there are slippery slopes out there,
and we should avoid them.
But I think it's good that we're starting this discussion from a point of understanding
that there is a place that we do not want to end up,
but we also need to understand what that place looks like,
and it's not as simple as content moderation equals removal of content.
Content moderation also equals the production and creation of content.
And censorship doesn't necessarily start with the state.
It can start elsewhere.
It's really interesting. It fundamentally sounds like what we're talking about is this dream of the internet when it first started, that we saw this network that distributed power for the first time in generations. And we've seen an accrual of that power, as it tends to do, start moving towards the top.
And it feels like what you're talking about is redistributing some of that power to a bunch of different vested interests.
I quote in my article Aaron Swartz, the individual who attempted to, I would say, epitomize that concept of what we viewed the internet to be: to spread information freely, particularly academic information and research, to everyone who wanted it. And there's a younger me that hates, in a way, and is upset by, where we've reached, that the internet is not this ultimate democratizing force. It could have been. I've been studying the internet for a decade now, and I've seen things change, and the realist in me has to go, we might not be able to reach the ultimate Aaron Swartz goal of what the internet can achieve, or the Tim Berners-Lee goal, but we shouldn't give up because of that. We can achieve something, and it can do good, somewhere along that spectrum. And I think this is an opportunity to redress a direction of travel that we shouldn't have allowed in the first place. We won't be able to go all the way back. I think that's past. But we can push back on where we've come so far.
I think the anniversary of Swartz's death was yesterday, I believe, eerily.
Which is like a tragic irony.
Yeah. And I can't believe it's been eight years.
Yeah, that certainly rolled around quickly.
Yeah. So I want to read a quote from your piece and ask you the last question.
Jason, unless you've got something else.
I'm afraid of getting us off track, but I think one aspect of this is interesting to think about in the United States, because I don't know the situation in Europe, but on the airwaves, conservatism sells. And the airwaves are dominated by people who are in some cases very far to the right,
and I just wonder, if we left things as a free-for-all, is that where we would end up on the internet too, or is there something specific to broadcast?
I think we would definitely end up exactly there. And you need only look at the kind of difference. I quoted Ofcom earlier; look at the standard of news broadcasts in the UK versus the variance, and shall we say the conservative-leaning dominance, of news broadcasts in the US. There's a reason why the Murdoch attempt to get into the UK market with Fox News failed, and the broadcaster decided, actually, no, we don't want that here. And that, I think, is a demonstrable truth: yeah, populism sells. Populism is attractive, but it's the same way bad news sells too. And I think there's a similar dynamic going on here.
And I think that if we didn't do something to impart standards of not just the type of content and the connection to fact rather than fiction on social media,
but the content which is actually for the public good rather than for private interest,
I do think that social media and the internet would go in a very similar direction.
And we have already slipped that way in all sorts of other ways.
Yeah, I think it unfortunately would.
So, this quote from your paper: "We have a clear causal chain between online conspiracy theories and disinformation and the deaths of five people due to the riots on Capitol Hill. There is a lesson here. We need to learn it."
What is the lesson?
The lesson, I think, is that what we've done so far, the status quo, has not worked. And we need to recognize that and realize that if there is any time to correct the status quo and create a new way forward, it's now. Leaving it any longer is simply going to see us seeing these kinds of events again and again.
Dr. Alexi Drew, thank you so much
for coming on to Angry Planet and walking us through all of this. It's been a pleasure. Thank you,
Matthew.
That's it for this week, Angry Planet listeners. Angry Planet is myself, Matthew Gault, Jason Fields, and Kevin Knodell. It was created by me and Jason Fields. If you like the show, please subscribe to our Substack. It's at angryplanetpod.com; for just $9 a month, you get access to two premium episodes, two extra episodes. Another one of those premium episodes is going to drop this week: another conversation with Jason Stanley about the events at the Capitol. So be looking for that sometime on Friday or Saturday. As always, if you like the show, please follow us. We are on Twitter at @angryplanetpod. I'm @MJGault, and Kevin is @KJKnodell. We will be back next week with a conversation about conflict on an angry planet.
Stay safe until then.
