Your Undivided Attention - Your Nation's Attention for the Price of a Used Car — with Zahed Amanullah
Episode Date: October 6, 2020
Today’s extremists don’t need highly produced videos like ISIS. They don’t need deep pockets like Russia. With the right message, a fringe organization can reach the majority of a nation’s Facebook users for the price of a used car. Our guest, Zahed Amanullah, knows this firsthand. He’s a counter-terrorism expert at the Institute for Strategic Dialogue, and when his organization received $10,000 in ad credits from Facebook for an anti-extremism campaign, they were able to reach about two-thirds of Kenya’s Facebook users. It was a surprising win for Zahed, but it means nefarious groups all over the African continent have exactly the same broadcasting power. Last year, Facebook took down 66 accounts, 83 pages, 11 groups and 12 Instagram accounts related to Russian campaigns in African countries, and Russian networks spent more than $77,000 on Facebook ads in Africa. Today on the show, Zahed will explain how the very tools that extremists use to broadcast messages of hate can also be used to stop them in their tracks, and he’ll tell us what tech and government must do to systematically counter the problem. “If we don’t get in front of this,” he says, “this phenomenon is going to amplify beyond our reach.”
Transcript
Before we get to the show, we just wanted to welcome all the new listeners who discovered our podcast since watching the new Netflix documentary, The Social Dilemma.
The response to the film has been unbelievable.
It was briefly number one in India, number one in Canada, number two film in the United States on Netflix, and the result for this podcast has been astonishing.
As of this morning, we're actually the number one tech show on Apple Podcasts.
We're getting about 100 messages per day from all around the world in Brazil, Argentina, Sri Lanka, Indonesia, from people saying,
wow, you know, this film has really opened my eyes and is showing us this thing that I always thought was true,
but now I can see and understand why it's happening.
And in a way, there's a little bit of hope in knowing that everyone else is seeing the same problem at the same time.
As a friend of mine says, we're not alone, it's okay, and there's a way out.
And speaking of a way out, you know, how are we going
to change this massive system? Well, it's going to take change from all sides, just like climate
change. You know, you simultaneously need to get people inside of Exxon to be part of the solution and
reinvesting all their money into regenerative energy and carbon capture, just like we want
to get Facebook to be as good and healthy and regenerative as possible in the midst of the transition
to better and more humane technology. But we're also going to need new platforms and new experiments
in social design and how these things can be designed in a way that does not break down truth,
doesn't erode our mental health, that doesn't rely on social feedback loops of approval
that end up ruining the psychology of a generation.
Governments are not regulating this fast enough.
It took us something like six years to get GDPR, the European Privacy and Data Protection Regulations,
and it's taken more than a year or two to get the California privacy legislation enacted.
So given the fact that the legislative changes are going to take some time,
and these products are still emitting a kind of digital fallout of harms that we know about
from the film, one thing that can change fast enough
is global culture, what we think and what we do.
But the people in the film who help describe these problems
are not the only ones who are going to think of the solutions.
Not by far.
This is going to take a mass movement,
partnering with the powerful organizations
and civil society groups that represent marginalized populations
around the world who've been most harmed by these technologies.
So what can you do right now?
Well, the most valuable thing you can do
is to be in discussion with people around you
around this problem.
We asked you on our previous episode
to host a screening of The Social Dilemma,
especially with people who won't have seen it,
who may not have Netflix in their area.
And it's so much easier now because there's a real global conversation.
When heads of state, big celebrities or musicians like Pink
or the creators of Game of Thrones or Family Guy
are all speaking about it,
it's much easier for all of us to bring it up
with people who we'd like to see the film.
What are the practices and rituals that we could adopt now
to live better and in a more humane way
in an inhumane digital environment?
We just posted an update to our Take Control
page that has some suggestions for some things that people can do. But I think we need to co-create
a better kind of humane doctrine for how we all want to participate in inhumane social
platforms whose business models are not aligned with us. We can have this conversation together,
write us a review on Apple podcasts, and leave some of your ideas about what that would look like.
What does it mean for you to live in a more humane way in an inhumane internet? And what are the
things that you think we should be doing or covering more on this podcast so that we can help
accelerate the solutions. You could ask me or Aza a question, or tell us what you want to
hear on our next episode of Your Undivided Attention. We look forward to hearing from you, and we really
do read every review. So with that, onto the show. We were trying to find out how a targeted
message could push back against extremist recruiting. That's Zahed Amanullah, a counterterrorism
expert at the Institute for Strategic Dialogue. He's run a variety of campaigns that are meant to
counter the narratives of extremist groups worldwide.
We tested this out against ISIS propaganda.
We tried it with far right groups in the U.S.
Every once in a while, he's startled by his own success.
When Facebook gave Zahed's organization $10,000 in ad credits,
he started targeting messages in Kenya.
We weren't expecting to reach all the Facebook users in Kenya.
It just turned out that after we had spent that money,
every single person who had Facebook in Kenya
was at least exposed to the ads.
To be clear, Zahed didn't quite reach every single user in Kenya.
His ads reached 4.4 million people, which is about two-thirds of Kenya's Facebook users at the time,
but think about it, you can come close to capturing an entire nation's attention for as little as $10,000.
That may be a victory for Zahed, but that same victory is now within the reach of nearly any extremist group on a shoestring budget.
And surely this is happening all over the world where you have this sort of mix of grievance,
conflict, and extremist groups that know how to use these platforms.
They don't need highly produced videos like ISIS.
They don't need deep pockets like Russia.
With the right message, a fringe organization can reach the majority of a nation's Facebook users for the price of a used car.
All they had to do was to create accounts, create pages, and look out for those who were engaging with that content.
That was all they needed to reach potential recruits.
We thought our researchers in Kenya were mainly going to be observing.
What they didn't expect was the consistent and repetitive and
incessant drawing in through tagging of pictures, through personal messages toward invitations
to meet offline. And some of it was very personal stuff. I mean, these people were inviting
some of our researchers to sort of family functions offline. They wanted to sort of build that
rapport and that familiarity and that camaraderie. This global assault on democracy is not
theoretical. Last year, Facebook took down 66 accounts, 83 pages, 11 groups, and 12
Instagram accounts related to Russian campaigns in countries on the African continent.
Russian networks spent more than $77,000 on Facebook ads in Africa,
nearly eight times what Zahed talks about in his interview.
As I said in the film, The Social Dilemma,
this is happening at scale by state actors, by people with millions of dollars,
saying, I want to destabilize Kenya.
I want to destabilize Cameroon.
Oh, Angola, that only costs this much.
Well, hearing from Zahed, now you'll have the exact price tag.
Today on the show, we'll ask Zahed Amanullah,
head of the counter-narrative project at the Institute for Strategic Dialogue,
how he fought extremism in Somalia, Pakistan, the UK, and Kenya, among other countries.
He'll explain how the very tools that extremists use to broadcast messages of hate
can also be used to stop them in their tracks.
He'll also explain what tech and government must do to systematically counter the extremists.
If we don't get in front of this, this phenomenon is going to amplify beyond our reach.
I'm Tristan Harris, and this is Your Undivided Attention.
In 2014, 2015, I think the world was sort of looking at the dramatic propaganda that was coming out of ISIS in Syria.
For most people, when they think about extremist recruitment online, they think of those ISIS videos and the production value in all of that.
In developing countries, those resources aren't necessarily there.
But the recruitment is no less effective.
In Kenya, the first thing that we did when we went there to sort of analyze what was going on
was to map the sort of extremist landscape.
And part of that was done by getting researchers to explore some of the pages that were being put up on Facebook
that appeared to be recruiting for al-Shabaab.
Al-Shabaab had a long history, going back to the embassy bombings in 1998, when they were
influenced by Al-Qaeda and grew in Somalia and were pretty well known by 2014, 2015.
Al-Qaeda was still operating.
ISIS was relatively new at the time, but al-Shabaab was the real threat.
Al-Shabaab was explicitly recruiting from the Kenyan coastal regions because of the
grievances that people in the coastal regions had toward the Kenyan government.
They found many of the young people to be ready and willing recruits.
If you engage with those pages recruiting for al-Shabaab, if you like those pages, if you
share a comment from those pages, that was a window to get at vulnerable people.
And so our researchers, as soon as they would like a page, they would get inundated by friend requests, by being tagged in photos, being messaged and invited to online meetups and offline meetups.
And again, none of this was very explicit.
It wasn't, you know, we're al-Shabaab, come join us.
It was more of a slow grooming process, not unlike what you'd see with gangs or other groups around the world.
This mirrors a lot of what past guests on Your Undivided Attention have talked about.
Renée DiResta, the Russia disinformation researcher, talked about how in conspiracy theories, if you followed one conspiracy theory on Facebook, let's say PizzaGate or something like that, it would say, oh, people like you also tend to like the anti-vaccine group, the chemtrails group, the flat earth group, et cetera.
And so one thing that people underestimate is the power in terms of the human brain of surround sound.
If you join a few of those groups, then you start to get a surround sound effect where your news feed is gradually filled up more and more with more of those memes.
And I think that the non-linear impact of surround sound versus just hearing it once from one group is something we tend to underestimate.
What do you think about that?
And that's exactly how we were able to sort of be exposed to so many groups over a short period of time.
It was a truly immersive experience.
We were really worried that there was actual physical risk if we were to go too much further down that road.
So we actually had to come up with another way of monitoring content without putting our researchers and our partners in Kenya at risk.
We interviewed a lot of NGOs that themselves had a higher risk tolerance than we did.
And a lot of them were willing to meet people.
Some of them had worked with former al-Shabaab extremists.
We actually had to ask them to pull back and say,
okay, we're not going to do this engagement anymore
because it's clearly getting out of hand.
What we are going to do is focus on the messaging based on your experiences,
based on what we've learned from the online mapping,
and create something that will resonate with the target groups,
that are being influenced by these extremist groups.
We really just wanted to turn our focus on that.
We put together a really, really complex picture
of the languages that the groups used.
Most were in Swahili, also Arabic.
There was some English content,
but that English content appeared to be translated
through Google Translate from Arabic source text.
Some of that text came from Al-Qaeda.
So it was a bit of a haphazard mix of materials,
but all with the same purpose.
Basically, how do we spread this net out,
using this new tool that is having mass reach and get as many young people as we could to join
our movement. So we have this body of data that we sort of put in the lap of Facebook and said,
you know, you have to know that this is happening right under your nose in East Africa.
Can we talk to your subject matter experts who are supposed to be looking out for this stuff?
And they didn't have any. And it's not just, you know, again, not just people who speak the
language, but people who understand the nuances that groups use to sort of get noticed and the
trends and so forth. What was that like for you in that moment when you discover all these
patterns and then you bring it to Facebook? And this is in what year?
2017, before the elections took place in November. In fact, when we started the project,
the first thing we did was to reach out to their teams in East Africa. Google had just set up
and Facebook was just putting it together. They didn't even have an office in Nairobi for us to go
to. Their subject matter experts were in Palo Alto. We felt it was a huge vulnerability.
At the time, we were six months from the presidential election. And we knew that,
that this had a potential to destabilize society in a way that the election 10 years earlier
in 2008, which killed over a thousand people, didn't. So all of a sudden, this turned from a
project where we thought it was just purely academic to something where we thought that
lives were really at stake. What's an example of something that you would know if you were
on the ground with subject matter expertise, that you wouldn't know if you're
sitting in Menlo Park, having lunch at nice cafeterias and thinking about East Africa? Well, the
window for us was talking to the dozens of civil society organizations in Kenya who had
firsthand experience getting specific pictures of young people who disappeared overnight and were
later found with al-Shabaab in Somalia or had escaped from al-Shabaab and were captured
by the government and disappeared by the government. You know, piecing together stories
about the messaging that they had engaged with online and how that may have led to their
disappearance. It was putting that picture together. We're so used to sort of looking for
keywords in English, I might add, that we miss that picture. In English, we talk in memes and
coded language and dog whistles. Imagine a dog whistle in Swahili. So that's the kind of nuance that we're
missing in developing countries around the world, especially where you have languages that aren't
universally spoken like Swahili. I think people tend to underestimate the propaganda kind of power of
some of these organizations. Production value might look low to an outside audience. Could you talk a little
bit about what people might underestimate about the power of terrorist propaganda?
When we look at some of the ISIS propaganda that came out and the production value of it,
that was the big game changer, that this stuff could be done on a production level that was
equivalent to what Hollywood could produce. But you don't need to have high production values
to be effective in recruiting. A lot of the stuff that we saw was actually very crude
coming from al-Shabaab in the region. It didn't mean it was less effective. What's important
is that something appears genuine and authentic. And remember, al-Shabaab is building on grievances
that already exist. And to do that, they didn't need to be that sophisticated. Their recruitment
was and continues to be very effective. And what lets us know that it's so effective, even though
it has low production value? Because I think one of the themes that you're tracking here is just that
the cost of doing recruitment is going down over time. And the cost of doing highly personalized
recruitment is going down, the ability to reach exactly who you want, the youth in the specific
geographic areas that have the grievances that you want to target. And then the actual literal
dollar cost, I think you've said when we first met that it costs less than $10,000 to reach
the entire country of Kenya. And if you think about it, $10,000 might be hard to find in Kenya,
but it's not hard to find in just about any Western country where you want to basically say,
hey, I'd like to own the next election or I'd like to cause grievances or tribalist violence
and how cheap that is. It was free, really. It wasn't even low cost. It was, you know, all they had
to do is to create accounts, create pages, and look out for those who were engaging with that
content. That was all they needed to reach potential recruits. And surely this is happening
all over the world where you have the sort of mix of grievance, conflict, and extremist
groups that know how to use these platforms. So there you are in Kenya playing with your $10,000 in
credits. At the same time, you didn't realize there was actually a race to reach every Facebook
user from another group, which was at the time Cambridge Analytica. Do you want to talk a
bit about what we now know Cambridge Analytica was doing at the same time that you were, you know,
doing this work on the ground and with those ad credits? Yeah. So when we started, we built up the
capacity through a number of workshops in Kenya for organizations to develop impactful campaigns.
They're the ones who identified the issues. They're sort of like public service announcements. We worked
with local Kenyan videographers, the subject matter experts on the ground, and the storytellers.
You know, at the end of the day, we were telling stories. And we wanted these stories to resonate
across Kenyan society. There were about 20, 22 campaigns of varying quality, but all authentic
and all Kenyan made. And so the strategy was to disseminate these videos across all the social
media platforms in the run-up to the presidential election in 2017 and shortly afterwards,
sort of the zone in which electoral violence was likely to happen. And what were some of these
PSAs, when you say PSA, what kind of public service announcement was it? Well, the four issues that
the NGOs told us they felt were important to deal with in the atmosphere of an election
were combating extremist recruitment from al-Shabaab, combating the incitement to tribal violence,
gender violence, so violence against women, and religious bigotry.
Kenya is sort of roughly divided between Muslims and Christians, and those tensions often flare up
because there's also a socioeconomic divide between the two.
So they created campaigns based on those four issues.
During that same time, you know, we had people on the ground in Kenya who were alerting
us to other campaign ads that appeared to be supportive of the incumbent president Kenyatta
that were very sophisticated but unattributed. And they were wondering whether they were
some of ours, because some felt quite insightful. And we had no idea,
you know, who was running these ads. We had seen the ads from the official parties who were
running for president, but not these unattributed ads. It was only a few months later after the
project when there was a sting on Channel 4 News here in the UK, which showed Cambridge Analytica
admitting that they were behind the campaign for the incumbent president in Kenya and had delivered
ads on Facebook throughout. And when that ad surfaced, it appeared to be almost a carbon
copy of an ad made for Senator Ted Cruz by a Texas PR firm in terms of the visuals, but it was
sort of localized for a Kenyan context. Was it the same Texas PR firm that had been creating those
ads for Ted Cruz? Apparently it was. I think it was called Harris
Interactive or something like that, but they had
apparently been the ones to make the
ad for the Kenyan election.
Now, I don't know that, I don't think
that there was any data stolen
from Facebook. The Cambridge Analytica story
in the U.S. with regard to stolen data
was not applicable in the Kenyan context.
It was sort of your garden variety, dirty
political campaigning. But nevertheless,
they were leveraging the fact that
you could reach so many people
so cheap in a Kenyan context.
And that's what they did. And we were up against
that. So our 22 civil society organizations were putting their authentic peace building ads up
against Cambridge Analytica and their insightful ads on behalf of the president.
What's amazing to me is that you don't realize that you're competing for that until after
you discover these things and you're actually competing with opposite messages. And here you are
with 22 civil society groups locally made Kenyan produced content, right? And then you're
trying to reach all these people. And then suddenly there's this Texas PR firm out of nowhere working
for Cambridge Analytica based in the UK. And you realize there's a kind of a geopolitical
contest that you don't realize you're in. I think that's one of the illusions of the
internet is the felt sense of locality. If I go to my Facebook event today and see that, oh,
there's an event happening in New York or in San Francisco, and I see 20 friends that are going
to it. It feels very local to me. It feels like a very local experience. But, you know, little
do you know, even you with these 22 groups, that there's this geopolitical contest happening
underneath your nose. Absolutely. That was really eye-opening for us, because what's the
bulwark against that if the social media companies are sort of willing participants, if the
government is either unaware or semi-complicit? What's the bulwark against that? In Kenya,
it was us. That was all we knew. We had 22 organizations, scrappy organizations, sort of getting
their stories together and turning them into video content and putting it online. We weren't aware
of anything else that was pushing back against that. And also, you discovered this by accident, right?
I mean, I often give it the metaphor that, you know, if Russia tries to fly a plane in the United
States, there's a Pentagon whose job it is and will successfully shoot down that plane before it gets
anywhere near the United States. But if they try to fly an information plane into the United States
to sow division, you know, Facebook and Google's algorithms say, yeah, what zip code do you want to
target? And so the comparison between a protected zone and a protected stance versus one which
is completely open, that here you are the ones describing, you know, identifying this threat
that's coming in, as opposed to, you know, even the people in California didn't see this one
happening, it sounds like. Exactly. That was the big message that came out of this project, was that
this stuff could happen anywhere. If it wasn't for the sting, you know, where Cambridge Analytica
was caught, admitting it, no one would have ever known. So people could have gotten away with this
over and over again, you know, completely legally and completely without, you know, anyone
attempting any kind of regulation. I think that's one of the things that was also so interesting to me
about your work, because on the one hand, you had studied how these tools are used for terrorist
recruitment, but then on the other hand, you developed all these counter-narrative, positive powers,
like using the exact same tools, doing hyper-personalized anti-recruitment persuasion that says,
hey, like, here are these different ways to do it. But then if you map the growth rate of terrorism
or hate creation, and then the growth rate of counter-narrative solutions, on the one hand,
you know, you have these positive examples that I hope we get to about the mechanics of what
counter-narrative means and how you anti-recruit people from that. But on the other hand, those
things aren't scaling at the same growth rate. So I think if you're sitting
inside of a technology company right now, you run into this problem. Of course, like, there's many
goods that Facebook does. And then there's these things that we don't feel so good about. People
dying, tribalism, election destabilization, et cetera. But there's these two balance sheets, and it's
really hard to compare the goods to the bads. And I really want to get people to understand that
the growth rate means that there's an urgency to this that's not being addressed. We're still
acting totally reactively to something that is really destabilizing geopolitics.
And that's exactly it.
When we did our early research into counter-narratives,
I mean, we were trying to find out how a targeted message could push back against extremist recruiting.
We tested this out against ISIS propaganda and trying to reach the audience that they were trying to reach
and see if we could build some sort of resilience or get some sort of positive feedback from that audience
before they became susceptible to that propaganda.
And we tried it with far-right groups in the U.S.
And in a pilot study that we did in 2016 with groups like Exit USA,
we were able to actually get some really, really good results from targeted campaigns.
But the challenge with counter narratives, and we were very objective about it,
the truth is they don't always work unless a whole bunch of things line up.
You've got to have a really credible messenger.
You've got to have really well-crafted content.
And you've got to reach the right people.
And there's an algorithm that we would use to try to identify the target audience.
We work with organizations like Life After Hate to do that.
In their case, it worked really, really well.
I mean, we did a targeted ad campaign aiming at white nationalists and neo-Nazis in the U.S.
And out of that campaign, eight people responded to the campaign by approaching the organization
and saying that they were in a movement, an extremist group, and they wanted to leave.
And for us, that was like the gold standard, the most impactful that a counter-narrative campaign could be.
By the way, that actual video ad that we targeted ended up winning an Emmy Award,
that's how well it was crafted.
But most of the time, one of those channels sort of falls short, and you don't have the efficacy
that you want. And this is where we've sort of evolved from a strategy of countering the narratives
of extremist groups to allowing your sort of target groups to sort of build positive narratives
about who they are and about their identity. You know, a lot of this stuff is centered around
identity politics and the ones that are vulnerable to joining extremist groups, they're the ones
that are sort of searching for an identity and they just seem to find that in an extremist group.
So let's talk about that for a moment, because there are these different levels of persuasion.
You can change someone's environment,
you can change their behavior, you can alter their habits, you can alter their relationships,
you can alter their beliefs, or you can alter their identity.
And there's this famous study in behavioral science where, if you get people to rewrite the phrase,
I like chocolate, I like chocolate, I like chocolate, I like chocolate, that's one group.
And then you test them later on some, there's some way they do the control effectively,
because I know this probably sounds like a trite example.
But then they have another group that says, I am a chocolate lover, I am a chocolate lover,
I am a chocolate lover.
In other words, one is speaking to a belief, like I'd like chocolate,
and reinforcing that. The other is reinforcing an identity. And the one that reinforces identity is far
stronger. And so in general, when you're doing, you know, competition in this war for persuasion,
you have all these terrorist groups that are trying to, you know, win on the war for one identity,
which is I am a jihadist or I am a white nationalist or something like that. And then how do you
effectively compete with a positive alternate identity? I heard you also saying that there's a way
of letting them create their own identity. So it's not about dictating what that other identity
should be, but, you know, offering some space to define it on their own terms. Yeah. Our work
with young people generally sort of falls in that category of finding an identity that is
positive and constructive and meaningful. We base a lot of our work in the narrative space on how
people feel about their place in society and working with other groups and living amongst
different cultures. We measure how people feel about that on the back of it. And that to us is a win.
Could you talk about some of those programs? I know one
was called Average Mohamed. It was helping Somali youth. Yeah, Average Mohamed, I mean,
it's a great guy that we've worked with for a number of years who, you know, created
content on his own. He's a Somali American who saw what was happening with al-Shabaab and decided
to make, you know, sort of tongue-in-cheek content on YouTube about what it means to be a Muslim,
what it means to be Somali in particular. And we, in fact, we tested some of his content in
our study in 2016 to see if he could influence people in his target audience. He was very
interested in reaching young Somalis. At the time, there was a number of them who were joining
al-Shabaab from the U.S. So we worked with him on a number of campaigns to see if they could influence
that target audience. And the one thing that he had was authenticity. I mean, he's a very
authentic guy. The content is a bit tongue-in-cheek, but, you know,
you remember it, especially if you're from that background. And it can work. I mean, some of the other
content that we've worked with. We don't disclose publicly. We didn't want people to counter,
you know, the positive message explicitly. Well, this is the real paradox of all of this,
is that it's essentially down to what is the persuasion that you trust. So here we had Cambridge
Analytica being based in the UK or the United States or the Texas PR firm, doing stuff for
Kenya. And then we've also got this other guy, Average Muhammad, is doing pro-tolerance,
you know, respecting religious differences, social cohesion, pro-social, you know,
all that good stuff. But then again, operating on a remote basis,
the person on the other end of the wire doesn't necessarily know that there is this asymmetric campaign going on.
And so one of the uncomfortable things about this whole conversation is just can we get comfortable with forms of persuasion that are pro-social?
And again, pro-social on whose terms?
But between a world where you have runaway terrorist recruitment, which results in lethal effects in society and the destabilization of democracies and elections, versus things that are at least trying to create more
pro-democratic psychological mindsets of tolerance, diversity, egalitarianism, respect for
religious and gender differences, you know, we need to be able to declare that there's some
values we're preferring here. And we can't always be transparent about it, as you said, because
if you do, it's weird. It backs up the sort of, oh, there's this whole conspiracy theory
to get all of us to believe this one thing or another. And we have to have a calmer way to
recognize that persuasion is ubiquitous, is diffuse, is happening all the time. But that doesn't
mean that it's all evil or manipulative. This brings me to my next question, which is about
your counter-narrative work: tell me about the importance of the offline work, and how does
that balance out with the online? Like, I'm curious if there's a portfolio of online interventions,
of counter-narratives that are pro-egalitarian, pro-tolerance, and of offline work as well.
You know, what is the appropriate distribution? How much do you need both?
Online messaging will always reach more people than offline, but offline content is
much more impactful. We actually worked with Google.org two years ago to sort of put together a
one million pound fund in the UK to sort of get civil society organizations to give us their
ideas about what worked. And we had like 120 applications for the funding that had some
amazing ideas, most of which were offline. There was this one project, a boxing academy
in London, that said they were starting to see a lot of far-right sentiment among
the people who came there, and they felt that they were in a good place to bring
some messaging or some sort of training to the people who actually came to their boxing academy.
Purely offline stuff. And it was so successful that, you know, we were able to launch on the
back of that a 10 million euro fund through a Google Impact Challenge that's covering all of
Europe to do some of the same work, but also including child safety. Again, most of it
dealing with some sort of offline activity. So finding real world offline spaces, whether it's
boxing academies or theater performances, to somehow embed positive,
pro-tolerance messages and identities, and find those local models that can be replicated. The question is
how do you flesh that out and how do you make sure that the resources are there? The challenge then
what comes to my mind is what is the coefficient of growth on the offline counters? Because we can do
some online, but then the offline doesn't scale as easily. And so I'm just wondering, like, if the
growth rate of online harm is greater than the growth rate of offline counter positive narrative,
how do we deal with that? Well, this is where governments should be involved, from a city level to a
national level. The one thing that you wouldn't be short of is people on the ground who want to
participate in those things. It really just does come down to money. You have a Strong Cities Network,
is that right, that works with municipal government officials to do this? Yeah, I mean,
one of our strategies to sort of promote this sort of city-led approach, you know, it was the creation
of the Strong Cities Network, which we founded in 2015 at the UN General Assembly in New York.
And the idea was, when it comes to countering hate and countering extremism, you know,
usually they're driven on a national agenda, like from a national government.
But cities are the sort of front line where a lot of the impacts of extremism happen.
And cities are much more attuned to what's happening.
They're much more agile when it comes to responding.
It's their communities that are often pitted against each other or threatened by extremism.
And so we wanted to make sure that cities could actually coordinate between themselves
and, in a way, bypass their central governments and really just share solutions
with each other. And so part of our strategy was to take our research in counter-narratives,
take our policy research, take our networks of young people, and make them available to this
network of cities globally, which includes a number of American cities, but cities in Africa, Asia,
all over Europe, and so forth, and get them to coordinate between themselves, to sort of pick
and choose between them what works and implement them in different environments and learn from
that. And so we help facilitate that by making sure that everything that we've learned, and everything
that other organizations that do similar work have learned, is made available to this network of cities so that
they can just get on with it and not wait for their central governments to sort of create
legislation that may or may not be in tune with what's actually happening on the ground in those
cities. This reminds me of what's been very successful in the climate movement is the C40,
the top 40 cities collaborating on how to deal with
climate change at their local city level, and just because cities are such a large powerhouse
for both generating emissions and also responding to things like wildfires or flooding risks or
hurricanes, things like that. What exactly are you getting them to do? I mean, is this just like
a Zoom call with, you know, people once a month? No, no. Or how do you actually make that really
effective? Well, I mean, we started by having global summits. Actually, for the first three years,
we had global summits where the cities themselves would gather for a number of days to share what
they've done. A lot of the cities, almost every city that's joined has been the victim
of a terrorist attack and has learned something from it.
Those things can be shared either in person or they can be shared online.
The cities are in constant engagement with each other based on the experiences that they
want to learn from each other.
So our job is to facilitate that exchange.
In some cases, a lot of the cities do programs with each other where cities are
twinned, for example, between Europe and the Middle East or Europe and Africa.
We just provide the network for them to engage with each other.
So really what we're trying to do is spur innovation.
I want to make sure we cover this paradox that's often cited in the work against hate and extremism.
And there's this big debate going on about, obviously, in a world of free speech and the technology companies being anchored in a country that is free speech absolutist, really, above all else.
There's a notion of never taking down or de-platforming anyone.
Obviously, there's some shadow-banning, like lightweight down-regulation of certain folks.
YouTube has recently, I think, down-regulated some conspiracy theory recommendations by more than 50
percent as of this month, in March of 2020. But talk a little bit about this. There's this fear that
if we take them down, they're just going to go somewhere else and it's going to get even
worse because they're just going to show up on GAB or one of these other alternative social
networks or Discord and it's going to get a lot worse. So what should we do about this? How do we
think about this? I mean, social media has been the lifeblood of a lot of extremists who are
influential. Hope Not Hate came out with a study which showed that the deplatforming of Tommy
Robinson significantly reduced his influence and his ability to reach
the audience that he once had.
Sure, he can go to Gab, he can go to one of these other platforms,
but it's the reach that matters.
They don't have nearly the reach
that they would have had on a major platform, and they know it.
The ones who are going to step over the line
from just trying to influence
and then coordinate a terrorist attack,
they'll often go to an encrypted channel anyway.
That becomes a whole other challenge
for counterterrorist agencies,
and that's, of course, an intelligence issue.
But in terms of extremism
and promoting an extremist view,
we're catching extremist content on platforms like SoundCloud, places that, you know, you wouldn't normally expect extremists to flock to.
But that de-platforming does work.
And that de-platforming can continue.
Every time we find content like that, we talk to the platforms and we talk about how they could better monitor the content and remove it.
And there are platforms, as you're saying, that do take more aggressive approaches, whether it's the founder of Twitch who's been taking down, you know, some of the more aggressive, angry kind of hate-troll-type players.
And like you say, there's maybe SoundCloud or Spotify,
where if you say, hey, look, there's this music that's really extreme that you're creating and amplifying and doing a whole suggested-users flow,
here's more white nationalists you can listen to, here's more al-Qaeda music you can listen to,
then, you know, they take them down. And that actually doesn't mean that they all go rushing somewhere else in anger and want to take out their knives and guns.
There is a value in actually just having values and saying we don't serve that here.
There is huge value in chasing them off the platforms because they don't all migrate to the same alternative platform.
And they can't necessarily reach as many people when they land somewhere else.
Yeah.
They're going to go to any number of platforms.
Their ability to coordinate is limited, right?
You know, just because there's someone on Twitter with 300,000 followers,
those followers aren't going to go to Gab.
You know, every time someone is deplatformed,
their ability to influence goes down and down and down,
like to an incredible amount, an incredible scale.
Yeah.
In one of your earlier interviews, you talked about extreme speech mapping,
and it sounds like some kind of crazy project where you're doing trend
mapping on speech and getting kind of early warning systems. And this relates to another effort
that I think people don't know that much about, which is the Global Internet Forum to Counterterrorism,
the GIFCT. Do you want to talk a little bit about how do you do this sort of early warning
system for hate and the Global Internet Forum to Counterterrorism?
Yeah, I'll step back a little bit. I mean, the Global Internet Forum to Counterterrorism was
an effort that we tried to influence since its inception, really, working closely with all the
companies involved. Microsoft, Twitter, Google, and Facebook, you know, sort of launched it in a way to
create a collaborative body to coordinate resources and strategies to counter extremism
and terrorism online. And it sort of started with kind of a shared hash database of content,
hash database really meaning digital identifiers of audio and video content that could be shared
amongst platforms so that they could be removed before they were even uploaded. And that
hash database has grown year after year, which is a great thing. But, you know, GIFCT is an
industry body. It still relies on the cooperation and the input from all of the major social
media companies. It needs civil society organizations and governments to sort of all point out
where the holes are. GIFCT has been good in the sense that, for example, it can provide
resources to smaller platforms where extremist content has been hosted before, like JustPaste.it,
or platforms that are run by, like, a handful of people.
It's a boon for them because they can now benefit from some of the resources
that the bigger tech platforms can provide.
Are there ways that you'd like to see that expanded or, you know,
given the growth rate of hate and terrorism online that I think we were both,
I was actually at the Christchurch Call event in Paris with Jacinda Ardern from New Zealand.
And, you know, the thing that came out of the Christchurch Call with Jacinda was,
hey, we're going to expand the GIFCT.
We're going to expand the hashing that we're doing of
terrorist videos, terrorist recruitment text, you know, snippets, et cetera, because there's an
area of moral consensus: we agree that this kind of speech can actually lead to
people dying, so we're going to do, you know, an early hashing and warning system and not allow
people to run around and create hate everywhere else. And now there's sort of a rising-tide-lifts-all-boats
kind of phenomenon. But then what's interesting is we're not expanding that
set of categories to include the more gray areas of, well, what is sort of lethal speech versus
subtly divisive speech. And we have to be really careful here. But I think one of the
interesting things is that the growth rate of the harms and the risks that are posed keeps going up,
and we're not growing our counter-responses, offline and online, commensurately with
that. Like, what are we going to do to scale that? And is that going to work, or what else do we
need here? What resources do you need to be more successful? That's a really good question.
I'll just give you the Christchurch example. I got a call from one of our former employees,
actually, who joined Facebook, describing to me all the ways that the video was being re-uploaded
onto their platform to dodge the hash database entries that were being made over the original
video. And she told me it was in the hundreds, you know, slightly tweaking the videos,
slightly changing the aspect ratio, flipping it, adding a graphic here or there in order to
get that video up. And of course, that coupled onto the fact that it was live streamed,
which, you know, was a newer technology that no one has really been able to figure out how to intercept in real time.
There was an attempt to do it shortly after, I think in Poway in California, that failed.
But the fact that that was out there showed us the limitations of just relying on hashes.
So there's no harm in the hash database growing.
But one thing to remember is that hash databases consists of known content.
You know, content can be created on an infinite scale in a new way every day.
So just because you create new hashes for all the new content that's created, it doesn't prohibit new content with original identifiers from being created on an ongoing basis.
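The limitation Zahed describes can be sketched in a few lines. This is a hypothetical illustration, not GIFCT's actual implementation (real deployments typically use perceptual hashes designed to survive small edits, rather than exact cryptographic hashes); the function names and in-memory database here are invented for the example:

```python
import hashlib

# Hypothetical sketch of a shared hash database: platforms fingerprint
# known extremist content and block any upload whose fingerprint matches.
known_hashes = set()

def fingerprint(content: bytes) -> str:
    """Exact cryptographic fingerprint of a piece of content."""
    return hashlib.sha256(content).hexdigest()

def flag_known(content: bytes) -> None:
    """Add a piece of content to the shared database of known bad material."""
    known_hashes.add(fingerprint(content))

def is_blocked(content: bytes) -> bool:
    """Check an upload against the database before it goes live."""
    return fingerprint(content) in known_hashes

original = b"...bytes of a known extremist video..."
flag_known(original)

print(is_blocked(original))            # True: an exact re-upload is caught
print(is_blocked(original + b"\x00"))  # False: changing one byte evades it
```

The last line is the whole problem: flipping the aspect ratio or adding a graphic produces a file with a different fingerprint, so every tweaked copy has to be hashed and added individually, after it has already appeared.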
I'll say from personal experience, it was about a month ago that someone who studies this showed me a video that was the Christchurch video on Facebook.
The upload date was the same day as the Christchurch event, and it still had not been taken down.
And this is in March of 2020.
It happens all the time.
I mean, we find this stuff all the time, and sometimes we're not even sure how it bypassed or escaped, you know, the hash database.
But this is the new world we're living in.
This is why we work so hard on the resilient side of the equation, as much as we do also work on the takedown side, because there is a chance that we'll never be able to close the loop on this and that the new norm will be this stuff existing in some form online.
and the response has to be, how do we inoculate society from being adversely affected by it?
Well, so this brings me to, I think, our close, which is talking about solutions. Let's say it's 2030.
We've done it.
You know, extremism has ground to a halt over the last 10 years.
We have some version of a social media set of technologies that we still use and that empower our lives
in hopefully more humane, regenerative ways.
And we still have a world that relies, at least in part, on user-generated content.
Essentially, what we're talking about here, when you talk about a society that's more resilient,
is the fundamental uncontainability of a model where any of the three billion people on planet
earth can create and upload, live, brand-new content that's never been seen before, which has the
potential for harm and amplification at viral scale. That cannot be solved with technology.
It can't be. There is no solution to this problem. You need to have a more tolerant and skeptical and resilient
society that is more aware of this being done. But in a world where you cannot guarantee that each
user, each human mind, that is being exposed to this uncontrollable Frankenstein of long-tail
user-generated content in hundreds of languages, and, you know, not just Swahili, but the language
no one's ever heard about, and then that's sitting on social media, and there is no AI, and there is
no hash for it, and you cannot guarantee that the person watching that video or looking at that content
has even any amount of digital literacy, because all of our trust mechanisms
have been miscalibrated.
But so if we zoom forward 10 years where somehow we did solve this problem and we did
have a more resilient society and we have done what we needed to do with the tech platforms,
in your view, given what you know, what have we done?
I mean, that's the most existential of all questions.
It's something that I've thought about every year that I've worked at ISD.
Are we ready to aspire to that world, or are we doomed to live with it no matter what
happens, despite all of our efforts? I think the regulation issue is going to be a really,
really protracted challenge that requires a whole of society approach. You know, somehow we've
all got to get on the same page across borders, across cultures, across languages, like you
said. And I tend to put my faith again in the innate goodness of people and their ability to
build beautiful things in their communities and their cultures and their societies and try to
nurture that in some way so that the efforts to destabilize that are seen as futile.
Most people are very, very passive users of this technology.
It's just the very few active people that are able to destabilize.
To me, it's not a question of extremism disappearing.
It's a question of it being impotent.
Zahed, thank you so much for coming on the podcast.
It's been wonderful to have you.
Thanks for having me.
Your undivided attention is produced by the Center for Humane Technology.
Our executive producer is Dan Kedmi and our associate producer is Natalie Jones.
Noor Al-Samrrai helped with the fact-checking.
Original music and sound design by Ryan and Hays Holladay.
And a special thanks to the whole Center for Humane Technology team for making this podcast possible.
A very special thanks to the generous lead supporters of our work at the Center for Humane Technology,
including the Omidyar Network, the Gerald Schwartz and Heather Reisman Foundation,
the Patrick J. McGovern Foundation, the Evolve Foundation, Craig Newmark Philanthropies, and the Knight Foundation, among many others.
Huge thanks from all of us.