Front Burner - The 'algorithmic fog of war' with Israel and Hamas
Episode Date: October 18, 2023. Avi Asher-Schapiro, tech reporter with the Thomson Reuters Foundation, takes us through some of the reasons fake news or misleading content about the fight between Israel and Hamas is being amplified on social media feeds. For transcripts of Front Burner, please visit: https://www.cbc.ca/radio/frontburner/transcripts Transcripts of each episode will be made available by the next workday.
Transcript
In the Dragon's Den, a simple pitch can lead to a life-changing connection.
Watch new episodes of Dragon's Den free on CBC Gem. Brought to you in part by National
Angel Capital Organization, empowering Canada's entrepreneurs through angel
investment and industry connections. This is a CBC Podcast.
Hi, I'm Damon Fairless. Before we start this episode, a warning for you: it contains some graphic descriptions of violence, so please listen with care.
So I don't know about you, but I've been trying to keep up with every single update on what's going on in Israel and Gaza right now.
That's meant spending a lot of time online.
And it's meant sifting through a ton of information, pictures and videos, trying to figure out what's real and what's not.
In some cases, things that are widely reported or shared one day are disputed or debunked the next.
So what is it about this conflict right now that's bringing up so much misinformation and disinformation? And how much
worse is the fog of war when fake or sensational content can be pushed in front of you by an
algorithm? Here to make sense of it all with me today is Avi Asher-Schapiro. He's a tech reporter
with the Thomson Reuters Foundation. Hey Avi, thanks so much for coming on Front Burner. Thanks so much for having
me. Okay, so let's think about this last week of coverage of the war in Israel and Gaza. What are
some of the things you've seen or heard that stand out to
you and turned out not to be true? I think something that stood out to me,
just because it combines a couple of phenomena that I'm thinking about, was a verified account
on X, formerly known as Twitter, that was imitating the Israeli Mossad, which is Israel's
spy service, and had hundreds of thousands of followers and an authoritative-looking badge, and it showed images of Israeli weaponry.
I think it was the Iron Beam weapon to counter rockets, and it turned out to be an image from a video game. So on the one hand, perhaps not the most harmful image, but also emblematic of what you can do these days on social media, which is sort of take on the identity of a spy agency and rack up, you know, viral-level views on something that you've reskinned from a video game. There's obviously been much more sort of horrifying and potentially dangerous
things out there as well. There was this image that went viral of a girl being burned alive.
It was passed off as being from a video from Hamas, and it turned out to be from Guatemala.
We're seeing all sorts of things being thrown around, some of it from accounts that, at first glance, to someone who might not spend a whole lot of time parsing these things, look authoritative, which I think is a big and newer problem.
Right, yeah. So it's interesting because, as you say, this stuff is everywhere in different forms,
different kind of, I guess, for lack of a better term, quality and believability.
But there's one specific instance I want to ask you about, and I want to set up a little bit of context first, because it's a little complicated and maybe some people listening haven't heard or haven't followed it.
And one thing I really want to make clear before I launch into this is that Hamas murdered children in the October 7th attack. Then last week, there was also this claim that during its attack on a kibbutz, Hamas decapitated babies. That information was aired on CNN.
We have some really disturbing new information out of Israel. The Israeli prime minister's
spokesman just confirmed babies and toddlers were found with their heads
decapitated in Kfar Aza in southern Israel after Hamas attacks in the kibbutz over the weekend.
That has been confirmed by the prime minister's office. Let us go now to CNN.
U.S. President Biden said he saw those photos, and he said, I never really thought that I would see and have confirmed pictures of terrorists beheading children.
I never thought I'd ever. Anyway, I.
However, later, a U.S. administration official clarified that Biden actually hadn't seen the pictures and
they hadn't confirmed reports of children or infants being beheaded by Hamas. Israel has since
said, there have been cases of Hamas militants carrying out beheadings and other ISIS-style
atrocities. However, we cannot confirm if the victims were men or women, soldiers or civilians,
adults or children. That's the official quote there.
My point is that this is an unverified claim, but it made its way out there almost instantly at the highest levels. So my question for you is, what do you make of the way this specific
claim got amplified? I think when you have algorithmically driven feeds that are designed to capture our attention
as quickly and for as long as possible, and you have those be the engine of attention
in a violent conflict, you have a dangerous mix of elements, right?
And I think especially with some of the changes that have been made on Twitter, where you
have the algorithm sort of injecting accounts that you haven't decided to follow, but the algorithm has decided that are most likely to capture your attention, inject it in front of your eyes.
You have a situation where some of the more salacious and shocking and titillating information is sort of potentially supercharged right in front of your face as quickly as possible.
So I think that it's important to remember that in all conflicts, since the beginning
of time, you know, initial reports and initial accounts will be contradictory.
It's the job of human rights organizations, international, you know, monitors, careful
journalists to confirm accounts, you know, with witness testimony through verification,
through newer techniques of open source investigations. And I'm hopeful we'll get
to the bottom of all sorts of claims of horrific atrocities that have come out of the
region. But I think that what's new here is that, yes, you have the capacity for a single source
account to appear in a regional publication, let's say, and for it to immediately be picked up by hundreds of thousands of people online, who can potentially take it out of context, strip it of its sourcing, and then share it, and then have algorithms pick it up. That is unique to our era. As you say, it's the way that this information gets spread that is sort of new.
Beyond that specific incident, where is most of the misinformation or disinformation coming from?
It's a great question. And I think the motivations of people spreading information that's untrue are complicated. I think it's also important to keep in mind that we don't know, because we don't know what's true. Initially, it's hard to know if the viral tweet you see is someone who's been confused or someone who's part of a network of state-backed propagandists. And I think there's a mix of all of that out there.
You know, there's been some initial reporting from the types of monitoring groups that suggests there are some networks of accounts that have sort of sprung into action around this. And you can tell this if there's, let's say, an account that's been lying dormant forever, or just sharing basketball memes or something random, and then all of a sudden it and a bunch of other accounts start sharing very similar kinds of misleading information, suggesting there's some document linking, you know, the Ukrainians and the Mossad or something.
Right. And researchers do this: they try to parse, OK, maybe there's a state-backed actor that's coordinating this.
But I think, you know, there's been some reporting to suggest that a lot of the most damaging rumors and propaganda is being circulated in WhatsApp notes from the region, right? And this is much harder to study because they're encrypted audio messages or voice notes that are passed on, you know, Meta's WhatsApp platform, which can be generated locally, perhaps by people who
are partisans in the conflict who are trying to inflame tensions or
by people who are just, you know, confused and engaged in rumor mongering. So I think you really
see the full spectrum of motivations and also the full spectrum of actors here from people
on the ground to, you know, potentially, you know, foreign backed propaganda cells from across the
world who are, you know, getting their hands in this.
Yeah, well, it's really interesting because there are these reports of misinformation and disinformation being traced to users all over the world, including some places that, yeah, I guess I found surprising.
India, for example: there's a lot of accounts that have been linked back to users in India.
So I guess what I'm curious about is what reason would users who are seemingly not connected
to this conflict, what interest would they have in spreading fake news or content?
Yeah, I mean, in that case, I think some of the major Indian fact-checking websites or outlets like Alt News have noticed that the BJP, the ruling party in India, has a very sophisticated online propaganda arm, an IT cell that kind of springs into action to support the ruling party's line on various things.
And people have noticed, wow, like a lot of these accounts and networks seem to be activated around what's going on in Gaza and Israel.
And, you know, they're a Hindu nationalist party that often engages in the demonization of Muslims.
So you can imagine why they might want to be amplifying certain messages that might, you know, conflate Hamas's actions with the wider Muslim world.
Right.
All over the world, you could have motivated political groups that could see a way to
instrumentalize this terrible conflict for their own political purposes, the way they
could fight their own battles through what's happening.
I'm interested in other potential state actors here.
We've talked about India.
I guess the question that springs to my mind is, have we seen stuff that can be linked
back to Hamas, stuff that can be linked to Israel or any other of the peripheral state
parties around the conflict?
I mean, I think it's important to recognize that, you know, yeah, the Israelis and Hamas are going to
engage in propaganda. Obviously, they're going to spread their version of events, they're going to
shade things in the way that they see fit. Part of what I'm interested in this moment is, you know,
the way that the structures of the social media platforms and their business models, you know, contribute
to or don't contribute to a healthy media ecosystem.
I think one of the things that's changed most dramatically about, you know, Twitter in the
last year is it used to be sort of this mixed platform where you had a lot of editorial
functions being done by staffers, where they would curate information, they would make
carousels.
And then that would be alongside user-generated content. And it would sort of allow for people to choose their own adventure, to a certain extent: would you like to see the carousel of information that's been curated by our team, that has, like, Reuters and, you know, these kinds of standard news organizations? Or you could spend hours kind of digging into user-generated posts and theories about what's going on, or spend all your day on the IDF account, you know, that's fine. But there were sort of these different options. Now, you know, the curation is much less apparent. It's sort of all algorithmic, it's all out there. So I think it becomes more difficult for people to sift when that kind of approach is tried in the middle of a conflict with a bunch of confusing, propagandizing information flying around. You see, yeah, I mean, you see a real, like, algorithmically driven fog of war,
which is what we're sifting through right now.
Okay, I want to drill down into that a little more.
So these changes that you're talking about, that Elon Musk has been making over the last little bit on the site,
you mentioned this, it's a great turn of phrase, an algorithmically driven fog of war.
Tell me more about that algorithm specifically and what you think that's doing in terms of propagating this misinformation and disinformation.
Yeah, so I think I'd step back and say all of the platforms have, to some extent,
cut down on their capacity to handle this deluge of content.
We know that Meta has long underinvested or at least been accused of underinvesting in Arabic language content moderators.
They'll never disclose. None of these platforms will ever disclose how many speakers of a particular language they have. And we've seen cuts across the board in the tech industry on the sorts of teams that were tasked with curation, with tackling, you know, hate speech, with moderation. This is in line with a larger downturn in the tech industry. So that's sort of the big picture.
Twitter is its own unique case where, you know, when Musk took over, he basically fired half of the entire company and liquidated, you know, basically all of the people working on these issues.
He has since said that he's committed to rebuilding these teams and they have people working on these questions.
But he's also made some major structural changes to the way the platform works that do set it apart from some of these other platforms. I think the main
thing that he's done is he's allowed people to buy verification. And then in turn, that verification
allows them to be injected into the stream, algorithmically injected into the stream of
people's feed. Right. So it's like paying for position, right?
Right. Paying for position. Exactly. And then also, like, the other side of that is that
these accounts actually get, or at least they're eligible for getting an ad share so they can get
paid out for how popular their posts are.
So you create a potentially very uneasy, you know, incentive system here where you're paying to be in front of people's eyes.
And then the more you get in front of people's eyes, the more you get paid.
And when you have, you know, a situation like a war where everyone is reading and learning about this war on your
platform, you potentially set up a situation where people are trying to game this algorithm
to get in front of people's eyes. And maybe the easiest way they find to do that is to take a
video game image, or maybe take a fake image from a previous conflict, right? It definitely doesn't set up the right incentives to be sort of careful and considered. Now, I should say that X has said that accounts that chronically spread false information, or accounts that try to use the war to juice their revenue, will not be allowed to do so. But it takes a lot of staff, a lot of investment, to actually enforce that kind of rule structure. And it's unclear if such a, you know, scaffolding has been built.
It sounds like there's a real tension at play there. From what you're saying, you've got this reduced human staff whose job it is to, you know, look at the veracity of these things, working against an algorithm that essentially incentivizes, for lack of a better word, sensationalism as opposed to truth. And those things are not in balance, from the sounds of it.
Right. There's definitely a potential tension there. I think, you know, obviously, if you've been building a social media platform, you want people to share information that's of interest to other users. So I mean, you can understand why they try to create a feed that has popular information in it. But I think,
obviously, you're correct that in a moment where you have, you know, people who have taken to this platform to sort of digitally fight over this,
become partisans in this bloody conflict,
you create all sorts of incentives.
Getting back to X, one of the things that X has implemented, the idea being to moderate some of the stuff that is circulating on the site, is this Community Notes feature, right? Where users can debunk or add context or whatever to posts that have been shared.
I'm curious how that's been working for some of the posts that have to do with the Middle East.
Musk has made Community Notes a sort of hallmark of his content moderation strategy.
Community Notes is this process whereby users volunteer and sort of engage in public deliberation over whether something is truthful or not. And so it's an interesting feature. But I think, you know, it's volunteer
driven. It's not professionalized. And you've seen reporting. I think NBC had a really good
piece last week that got kind of behind the hood or under the hood of the Community Notes program
and showed that just they were inundated, right? There weren't enough volunteers to label
and annotate the viral posts that contained falsehoods in real time. And so these videos were racking up millions of views while the beleaguered volunteers were
sort of like going through their process of coming to a consensus about what a note they could
all agree upon would look like. So these are businesses and they try to cut costs and they
make investments in certain places. In content moderation, curation, having teams who study the
spread of information on their platforms is expensive. Community Notes, although it might have some interesting theoretical, experimental, you know, ideas behind it, also happens to be free, right? Which might not be a small part of the consideration for a company that is clearly, you know, struggling to turn a profit.
You've mentioned, you know, a few platforms. So it's not just X. There have been posts and videos pulled down from TikTok, Instagram. You've mentioned Telegram, WhatsApp, all, you know, carrying various information, disinformation, what have you, on the Israel-Gaza war. I'm curious what you make of the way people turn to this kind of user-generated content for news updates, especially now, at a time when trust in traditional media is kind of waning.
I don't like the cliche, but to some extent it's a double-edged sword. I think that having access to posts and images coming from the ground has a bunch of positive benefits.
I think it helps with human rights investigations.
It helps to not have the media have a total stranglehold on the narrative.
It is important to have checks and balances, to have people be able to learn for themselves.
You don't want to just learn everything from, you know, one or two media outlets.
But on the other hand, I think it generates some myths about what's knowable.
Yeah, it generates some myths. But it's not even worth fretting too much about, because it is the way that we will be experiencing conflict from now until forever.
Right.
And it is the way we've been experiencing conflict for many years, you know, going back a long time.
I think the one thing that we have to just really think hard about and be attuned to is that, you know, the platforms that mediate these experiences for us are advertising companies, and there are not a lot of them. And they are making business decisions that are based upon, you know, mitigating their own legal risk and maximizing their own revenue. And they don't necessarily have an ethos of being, like, balanced, or of spreading information in the public interest.
I mean, that's just not what they're set up to do.
These are often publicly traded companies that have an obligation to maximize their revenue, or they're fiefdoms like Twitter, run by the richest man in the world.
And I think that when you have the incentives of running an advertising and attention platform
collide with the imperatives of sharing accurate information in the public good about an ongoing
conflict, you often have just a real clash, right?
And I think that a lot of the stuff we've been talking about today is just the sort
of runoff of that misalignment.
Okay. So, I mean, I think that's an important takeaway alone, just being aware of what these platforms are. In addition to that, you know, people are going to be trying to keep up with things going on between Israel and Gaza, you know, for the foreseeable future. Is there anything else that they can be doing to make sure they're not falling for misinformation, disinformation, lies, fake news, fake content?
Yeah, I mean, I think that to the extent that people, you know, are able to take a beat and
recognize that the fact-finding process is iterative, right? You have, you know, someone
might interview a source that says one thing or see a video that suggests one thing, and then a
journalist, another journalist will come in and take another swing at it and broaden and deepen
our understanding of it. And that, you know, is the process of truth-seeking. And so, to the extent that people are willing to not jump to conclusions, to check multiple sources, to take some time before feeling 100% confident of any given thing,
I don't know, that might be too much to ask for a moment like this. I mean, it is an urgent moment,
right? I can imagine people saying we don't have time for that right now. We have a humanitarian crisis. We have a
water shortage. We have hundreds of thousands of people relocating all around the region.
It's good enough for you sitting in the US to say everyone should calm down and take a beat,
but that's not something that people who are on the front lines of this are able to do.
And I think that's fair enough. I think I wish that I had a sort of easy and simple way of
suggesting people metabolize this conflict, but there isn't one, you know, and there never has
been. Yeah. Great, Avi. Thank you so much. It's been great talking to you.
Yeah, it was a real pleasure chatting with you.
After we recorded the interview you just heard, Palestinian officials said that an Israeli airstrike hit a hospital in Gaza City, that hundreds of people were killed, and that thousands of civilians had been sheltering there. The independent medical humanitarian group MSF wrote on Twitter,
quote, nothing justifies this shocking attack on a hospital and its many patients and health workers,
as well as the people who sought shelter there. Hospitals are not a target. This bloodshed must
stop. Enough is enough.
The Israel Defense Forces blamed the militant group Palestinian Islamic Jihad,
saying a rocket they fired malfunctioned after launching.
U.S. President Joe Biden is set to meet with leaders in Israel on Wednesday.
Following the hospital attack, Biden's meeting with the leaders of the Palestinian Authority,
Egypt, and Jordan was canceled.
We'll be watching this story.
I'm Damon Fairless.
Thanks for listening to Front Burner.
I'll talk to you tomorrow.
For more CBC Podcasts, go to cbc.ca slash podcasts.