Front Burner - What ISIS can teach us about fighting far-right violence online
Episode Date: May 15, 2019
Today on Front Burner, professor Taylor Owen helps us understand the changing nature of online extremism and what we learned from dealing with ISIS....
Transcript
In the Dragon's Den, a simple pitch can lead to a life-changing connection.
Watch new episodes of Dragon's Den free on CBC Gem.
Brought to you in part by National Angel Capital Organization,
empowering Canada's entrepreneurs through angel investment and industry connections.
This is a CBC Podcast.
As I talk about it, I can feel and smell everything that I did back then.
And he looks down at me, I'm looking up at him, and he says, that's my little girl.
It's a 30-year-old homicide where we don't have anybody charged and convicted.
Felt like a murderer had gotten away with something.
Tell me now, did you have anything to do with the murder?
Someone Knows Something with David Ridgen, Season 5.
Now available. Go to cbc.ca slash sks.
Hello, I'm Jamie Poisson.
Today in Paris, Prime Minister Justin Trudeau is set to join what's being called the Christchurch Call.
Christchurch, as in Christchurch, New Zealand, where 51 people were massacred at two mosques by a white supremacist terrorist.
What happened in Christchurch was unique in one particular way.
A grotesque crime that the Prime Minister of New Zealand herself has said was... designed to go viral.
The Christchurch Call is a summit of politicians and
tech leaders with a serious and pressing goal to eradicate violent extremist content online.
Tech giants like Google, Facebook and Twitter are there. We could have simply sat back and, within New Zealand, formulated our own regulatory response. But social media companies, these platforms, they're global.
And so the response needs to be global. And while this problem of violence inspired by
extreme online rhetoric is growing, it's not unprecedented. A few years back,
big tech faced real pressure to curtail ISIS propaganda on the internet.
The hashtag #ISISMediaBlackout sprang up and immediately went viral. The CEO of
Twitter tweeted, you know, we've been and are actively suspending accounts as we discover
them related to this graphic imagery. And by many accounts, it worked. So can we follow the
same playbook used against ISIS with far-right extremism? That's today on Front Burner.
Taylor Owen is the chair of media ethics and communications at McGill University. Hi, Taylor.
Hi, thanks for having me.
Thanks so much for joining us today.
So I remember that there was a lot of talk around how to deal with ISIS content online after the video of U.S. journalist James Foley was posted in 2014. A video showing the horrific killing and its gruesome aftermath
was released on the internet a short while ago, along with a message to the United States to end
its intervention in Iraq. That awful video where he was beheaded. Am I right about that?
Yeah, I mean, that moment really was a turning point in how governments and I think society more broadly
decided to exert significant pressure on platform companies
and hold them, in a way, responsible for the kind of particularly egregious content that video represented,
and for the role they played in how it was disseminated.
There's already been much talk about how ISIS successfully uses social media to recruit foreign fighters.
It was part of a deliberate PR strategy on the part of his killers.
Twitter and YouTube were trying to pull this video down.
Twitter administrators played whack-a-mole with extremists who wanted the video to be seen.
Some standards need to be, you know, sort of rather quickly adopted to guide them as we go forward.
Leading up to then, there was really a lot of foot dragging from tech companies about taking on that responsibility.
Right, because I know at this point there was a lot of attention towards ISIS propaganda,
and they had all these Twitter accounts, message boards, Facebook pages,
to get their message out, as well as these sort of videos that they were posting.
Yeah, this had been a growing problem for a long time.
But I think there was something very particular about that video,
that it was so clearly beyond the pale.
And in some ways, like the Christchurch video is now, it was such a clear indication of something
that should not be allowed in the public sphere, that governments felt emboldened to force the companies to react.
And companies on the flip side felt the responsibility and the obligation,
both legal and ethical, to do something about it.
So it was this turning point.
There's no question about it.
So what were governments saying at the time after this video?
Well, they're saying, look, like you as platform companies are going to have to deal with this
in the same way you dealt with things like child pornography, other forms of content that are clearly in breach of existing laws.
ISIL has made extensive and sophisticated use of the various technological innovations that we have witnessed over the past decade taking full advantage of social media.
And you have to remember that the platform companies, the Facebooks, the Googles, the Twitters,
emerged in the U.S. in particular under a concept called Safe Harbor,
which meant that they weren't ultimately responsible and liable for individual pieces of content
that individuals posted using their services.
Right. This is the argument. We're just a platform. We're not actually a publisher like a newspaper.
Exactly. We're not a publisher. We're not a media company. We are a neutral platform that facilitates the speech of others.
What these moments did, though, around issues like child pornography, around issues like clear acts of terrorism and extreme violence, they shifted
that concept of safe harbor to a certain degree and said, look, for some things, we are going
to hold you responsible.
And faced with that kind of pressure from Western governments, and I think from society
writ large, they acted.
And they did deal with both of those problems fairly effectively.
So let's choose ISIS as the example here.
How did big tech deal with the content that ISIS was posting online?
The short answer is they used AI and machine learning to spot certain types of video and text content, and the accounts that were propagating this kind of content, in order to censor it before it was posted to the site, or, very soon after it was posted, to flag it and remove it.
And they could do that in the case of this content because the lines were so clear.
It was relatively clear to them what was an act of terrorism and what was not. And so it's much easier for these machine learning systems to deal with those kinds of binaries.
And in the case both of things like nudity on the platform and of extreme violence, those binaries are relatively clear. So it allowed them to scale this machine learning capacity to these sets of problems relatively effectively. And this is still ongoing, right? Facebook last year said that they pulled down 3 million pieces of ISIS and al-Qaeda content in just the third quarter of 2018. So the scale of this is massive and it's
still happening. I think that we can all agree that certain content like terrorist propaganda
should have no place on our network. The First Amendment, my understanding of it, is that that
kind of speech is allowed in the
world. I just don't think that it is the kind of thing that we want to allow to spread on the
internet. And also they seem to have been able, Twitter seems to have been able to eradicate or
almost eradicate this vast network of Twitter accounts that were connected to ISIS as well.
Yeah, that's my understanding as well. So how are they able to do all of that? Because I would imagine there are, like, clear images of violence in a lot of these cases,
or maybe iconography that they're able to see using all this AI technology.
But like, what about all of these other things that they were able to attack?
Yeah, I mean, part of what makes the challenge a little bit easier,
I mean, it's still not an easy thing they did, stopping this problem.
They devoted tremendous resources to pushing back against this.
But with the terrorist content, there are a number of things that are different about it.
I mean, one is, yes, the iconography, like you said, is quite clear.
People want to take responsibility for these types of images.
So when the, for example, when the Foley video came out,
that was not a hidden message.
That was something that the people who took it and posted it
wanted both to take credit for
and to be seen by as many people as possible.
And that is a very different problem
than sort of hidden ironic messages
that allude to white supremacist memes, for example.
I don't think anyone would have predicted that the Christchurch shooter would say subscribe to PewDiePie.
Music plays in the background.
Originally a piece of propaganda, it was later remixed online and became known as Remove Kebab,
a reference to wiping out Bosnian Turks.
Right.
They just exist in very different mediums.
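As a rough illustration of the kind of automated takedown Taylor describes, here is a minimal, hypothetical sketch; the hash database, thresholds and classifier stub are assumptions made for the example, not any platform's actual system.

# Illustrative sketch only -- not Facebook's, Twitter's or YouTube's real pipeline.
# It assumes two hypothetical inputs: a set of hashes of media already judged to be
# terrorist content, and a score from some upstream classifier (stubbed out here).

import hashlib

# Hypothetical database of hashes of known extremist media. Real systems use
# perceptual hashes shared across companies so re-encoded copies still match;
# an exact byte hash is used here only to keep the example self-contained.
KNOWN_EXTREMIST_HASHES = {
    "placeholder-hash-value",
}

REMOVE_THRESHOLD = 0.95   # hypothetical: auto-remove above this score
REVIEW_THRESHOLD = 0.70   # hypothetical: route to human moderators above this

def classifier_score(data: bytes) -> float:
    # Stub for an upstream ML model that estimates how likely the upload is
    # terrorist propaganda. A real system would run image, video and text models.
    return 0.0

def moderate_upload(data: bytes) -> str:
    # Return "remove", "human_review", or "allow" for an uploaded file.
    digest = hashlib.sha256(data).hexdigest()

    # 1. Known-media match: the clear-cut "binary" case Taylor describes.
    if digest in KNOWN_EXTREMIST_HASHES:
        return "remove"

    # 2. Otherwise fall back to a model score with two thresholds, so that
    #    ambiguous content goes to people rather than being removed automatically.
    score = classifier_score(data)
    if score >= REMOVE_THRESHOLD:
        return "remove"
    if score >= REVIEW_THRESHOLD:
        return "human_review"
    return "allow"

if __name__ == "__main__":
    print(moderate_upload(b"example upload bytes"))   # prints "allow" with the stub

The two-threshold structure is also why the human side Taylor mentions later never goes away: the clear-cut matches can be automated, but the ambiguous middle still lands in front of moderators.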
And I think that this is what gets us into the second part of this conversation that I want to
have with you today. So we've seen this relative success at tackling, you know, online hate and
extremism over the last several years when it comes to ISIS. But what we're dealing with now is a different kind of extremism as well, far right extremism. This is why we're seeing
all of these leaders meet in Paris this week. No one wants to see anything like the 15th of March
ever happen again. That includes the proliferation of this kind of content online.
We have a reluctant duty of care,
a responsibility that we've now found ourselves holding.
This is what the Christchurch Call is all about.
We've seen the Pittsburgh shooter who just killed 11 people in a synagogue.
He was active on a forum known for white supremacy and anti-Semitic posts.
The man who allegedly mailed this pipe bomb to high-profile Democrats seemed immersed in online right-wing conspiracy theories. And can you explain to me
how this stuff that they're steeped in is different than the ISIS propaganda and hateful
content that we see connected with that group? Yeah, I mean, one way to look at it is that for both governments
and private companies, platform companies, and for society at large,
how we deal with this kind of speech online is both a technical issue
and a political issue.
And I think in the case of the ISIS content,
the technical issue was certainly difficult, but I think it was easier.
It was clearer what they were looking for and taking down.
But perhaps more importantly, the political issue was much clearer too.
There was very little ambiguity about whether or not terrorist speech should be allowed in our society. Friends and allies around the world, we share a common security and a common set of
values that are rooted in the opposite of what we saw yesterday.
The moment we're in now is very different.
Both the technical problem is a lot more difficult, right?
There's a huge amount of nuance in the type of conversations that are happening in this
space, the types of memes that are being used, the subtleties of the content and the arguments that are being made in these spaces.
So on the technical side, it's just much harder to spot this stuff and pull it down at scale.
You know, when you say that there's way more nuance here, when you talk about
irony and memes, what are we talking about here?
So if someone, for example, in a photo does the white power signal.
Okay.
Doug Glanville, former Cubs outfielder and now a reporter for NBC Sports, had this happen right behind him when he was reporting live.
And that fan is now banned from any future Cubs game.
What used to just mean okay started as a joke by internet trolls on the 4chan message board,
but it was quickly adopted by white nationalists.
Is that something we are, are we sure that person's a white supremacist?
Why are they doing that? Are we sure that's even the signal they're making? And is that something that we should be pulling down and censoring? I mean, I think that's a very different problem than a video of a beheading.
We're in a place where technically finding these unacceptable forms of speech and then pulling them down is an incredibly difficult effort.
And we've tried training machine learning algorithms to do it, and they are getting better at doing that. There are hundreds of millions of takedowns happening on these platforms by machine learning
algorithms. But we also need, on the technical side, huge numbers of people. And that's the
other side of this: these companies have thousands and tens of thousands of people, often sitting in content moderation farms in developing countries, deciding what does and does not breach local laws in countries around the world.
So just technically, that is a huge problem.
And more expensive for these tech companies to take on, too.
Absolutely.
I mean, that being said, I would suggest that these companies also have massive profit margins. And it might be that responsibly operating in this space requires significantly greater investment in that human side and slightly smaller profit margins.
I want to get to how you think we could get there with some of these companies. But first, I want to elaborate a little bit on what you said earlier, that there doesn't seem to be the same sort of consensus around the far right.
Do you see white nationalism as a rising threat around the world?
I don't really.
I think it's a small group of people that have very, very serious problems.
I guess if you look at what happened in New Zealand, perhaps that's the case.
I don't know enough about it yet.
They're just learning about the person and the people involved.
You know, I would note the United States is not at this gathering in Paris this week.
Yeah. This is the harder piece of this problem. Yes, there is a technical aspect to it,
but the far bigger problem is a political one. It's that we as societies need to decide
what kind of speech we find acceptable and what kind of speech we don't.
And right now, we don't have a very good process for making that decision.
We have outsourced that decision, in most cases in the digital space,
to a small number of American-based companies who have embedded in their ideology
and in the design of their technology a very American notion of free speech,
that unless it is very clearly and explicitly illegal, basically anything can be said.
And my view is we need to bring that conversation into our institutions that actually have accountability in our societies, which are our governments.
So ultimately, this is a political challenge. Do you think that like the James Foley video and how that spurred governments into action all those years ago around dealing with ISIS online, do you think that Christchurch could be that moment for governments?
I mean, this was a mass murder that was live streamed on Facebook.
It is certainly having an effect on the urgency with which governments are engaging with this problem.
White supremacist movements are a very real, a very grave threat to Western liberal democracy.
I think they are a grave and real threat here in Canada,
and they are a grave threat in many other countries around the world. I'm a little bit concerned, though,
that by focusing on the most extreme, violent aspects of this problem, we're bucketing
this problem as one that's just about radicalization and violent extremism.
Whereas I actually think what we need to be having is a much broader conversation about how to govern these platforms.
And so what would you like to see, to see that system change?
You know, I guess what I'm getting from this conversation is that we cannot take the playbook used to curtail ISIS and use it to deal with the far right.
Yeah, I mean, I think to a certain degree, the terrorism framing and the extreme violence
framing is far too limited. So I think we need to look much more broadly at content issues.
So what do we as a society think should be allowed to be said? How do we think
content should be amplified in our society? And how are we going to hold the mechanisms that
amplify voices on these platforms accountable? Right. These algorithms that push you towards
more and more extreme content because they keep you on these platforms longer so that they can
sell you more ads. Absolutely. I mean, there's a very big difference between having a right to say something
and the right for that to be amplified to millions of people. Nobody has the right for an algorithm
to make my speech visible to millions of people and to torque it and to provide it to people who
are most susceptible to hearing the message. And we need to start thinking about holding those two acts of speech accountable in different
ways.
There's no single solution to this.
It's a package of policies that ultimately come together to represent governing a space
that we've left ungoverned for a very long time.
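To make the amplification point a bit more concrete, here is a purely hypothetical sketch of an engagement-optimized feed ranker; the field names, weights and scoring formula are invented for illustration and are not any platform's actual ranking system.

# Hypothetical sketch of an engagement-optimized ranker -- invented field names and
# weights, purely to illustrate why "the right to say something" and "the right to
# have it amplified" are different questions.

from dataclasses import dataclass
from typing import List

@dataclass
class Post:
    post_id: str
    predicted_click_prob: float     # hypothetical model outputs
    predicted_watch_seconds: float
    predicted_share_prob: float

def engagement_score(post: Post) -> float:
    # The objective is predicted engagement (clicks, watch time, shares), not
    # whether the content is accurate, healthy or extreme. Nothing in this
    # formula distinguishes a cooking video from a radicalizing one.
    return (2.0 * post.predicted_click_prob
            + 0.1 * post.predicted_watch_seconds
            + 5.0 * post.predicted_share_prob)

def rank_feed(posts: List[Post]) -> List[Post]:
    # Order a user's feed purely by predicted engagement.
    return sorted(posts, key=engagement_score, reverse=True)

if __name__ == "__main__":
    feed = rank_feed([
        Post("benign", 0.10, 30.0, 0.01),
        Post("outrage_bait", 0.25, 90.0, 0.10),
    ])
    print([p.post_id for p in feed])   # the higher-engagement post ranks first

One way to read Taylor's distinction is that regulation could target this ranking objective and its accountability, rather than only the individual posts it surfaces.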
You know, what I'm hearing from you is that in order to be able to deal
with this multifaceted problem,
governments can't just keep making these calls against the tech companies,
asking them to move. They have to intervene.
Democratic Institutions Minister Karina Gould.
We have had several discussions with all of the platforms
to varying degrees of success, I would say,
in terms of how they plan to protect Canadians
and our electoral process in the upcoming election.
And we have not really seen that much progress with them. I think that the platforms feel that
this is something that they should be doing on their own. What about the argument that we need
to give these companies the opportunity to police themselves in this new world? So, I mean, Facebook has taken steps to purge far-right voices
like Alex Jones and Canadian Faith Goldy.
They've kicked them off the platform.
I would sort of ask in return,
what other sector of society or industry in our society
do we allow to make that argument?
We don't do that with food safety. We don't do that with aviation safety. We don't do that with the pharmaceutical sector.
We don't allow industries that potentially have negative social and economic costs, as most industries do, to set their own rules.
What about people who are concerned that this could go too far, that if we start going down this road, then we really could infringe on people's free speech rights?
I think it's a legitimate concern that needs to be central to however we move forward on this.
There is real risk of overreach of governments in this space, and in particular in countries that are less democratic than ours. That being said, at the
moment, speech online is heavily regulated. It just happens to be regulated by private companies.
And I would rather these really difficult conversations about what we want to be said, about how abuse online gets dealt with,
about how people are censored from speaking because of the abuse they receive online,
about how nuanced hate speech is amplified by the algorithms and the design of this infrastructure.
I would like those really difficult conversations to happen within the one space in our society that has accountability and has democracy. And that's our governments. So yes, I'm uncomfortable with the risk of overreach here. But given the option of our democratic governments leading this process versus private companies that are outside of our country, I think we should be
having this conversation within our own society. Taylor, thank you so much for this conversation.
Thanks for having me.
At the summit this week, several countries and a few of the big tech companies are expected to sign a pledge.
Canada is among them.
So is Britain, Jordan, Senegal, Indonesia, Australia, Norway and Ireland, according to The New York Times, and of course, New Zealand and France.
That pledge is expected to ask the social media companies to examine the software that directs people to violent content
and to share more data
with governments and each other to help eradicate toxic online material. It is non-binding and it
does not contain enforcement or regulatory measures. Ultimately, countries and companies
will have to decide for themselves how to carry out the commitments.
That's all for today. I'm Jamie Poisson. Thanks for listening to Front Burner. For more CBC Podcasts, go to cbc.ca slash podcasts.
It's 2011 and the Arab Spring is raging.
A lesbian activist in Syria starts a blog.
She names it Gay Girl in Damascus.
Am I crazy? Maybe.
As her profile grows, so does the danger. The subject of the email was, please read this while sitting down.
It's like a genie came out of the bottle and you can't put it back.
Gay Girl Gone. Available now.