Angry Planet - ICYMI: Fake Journalists Are the Latest Disinformation Twist
Episode Date: December 24, 2021

Last week The Daily Beast broke some bizarre news. Several news outlets, including The Washington Examiner, RealClear Markets, and The National Interest, had been running op-eds by journalists that did not exist: AI-generated photos attached to profiles and credentials that, once scrutinized, collapsed. It was a massive effort at digital propaganda, and questions still remain about its provenance and purpose.

Here to explain just what is going on is Marc Owen Jones. Jones is an assistant professor in Middle East Studies and Digital Humanities at Hamad bin Khalifa University and an expert in social media disinformation who helped sound the alarm about this campaign.

Recorded 7/13/20

In this episode:
- Fake journalists have joined the fray
- Tracking the responses of the duped outlets
- The difference between misinformation and disinformation
- Media literacy in Estonia and Finland
- A website that generates people who don't exist

War College has a Substack! Join the Information War to get weekly insights into our angry planet and hear more conversations about a world in conflict: https://warcollege.substack.com/

You can listen to War College on iTunes, Stitcher, Google Play, or follow our RSS directly. Our website is warcollegepodcast.com. You can reach us on our Facebook page (https://www.facebook.com/warcollegepodcast/) and on Twitter (@War_College). Support this show: http://supporter.acast.com/warcollege. Hosted on Acast. See acast.com/privacy for more information.
Transcript
Love this podcast?
Support this show through the ACAST supporter feature.
It's up to you how much you give, and there's no regular commitment.
Just click the link in the show description to support now.
It appeared that at least another half of the accounts were using artificially generated images, images generated by artificial intelligence.
Now, this is very straightforward to do now.
You can go to the website, ThisPersonDoesNotExist.com,
and every two minutes it will generate a very, very believable fake face using artificial intelligence. And you could just keep clicking
until you find a face that you like and then use that as your profile picture. You're listening to
War College, a weekly podcast that brings you the stories from behind the front lines. Here are
your hosts. Hello, welcome to War College. I'm Matthew Gault. And I'm Jason Fields. Last week,
the Daily Beast broke some bizarre news. Several news outlets, including the Washington Examiner,
RealClear Markets, and The National Interest, had been running op-eds by journalists that didn't exist.
It wasn't just that their articles were fake or the information was bad;
the journalists themselves did not exist.
AI-generated photos attached to profiles and credentials that, when scrutinized, collapsed.
It was a massive effort at digital propaganda,
and questions still remain about its provenance and purpose.
Here to explain just what is going on is Marc Owen Jones.
Jones is an assistant professor in Middle East Studies and Digital Humanities at Hamad bin Khalifa University and an expert in social media disinformation.
He helped sound the alarm about this campaign.
Sir, thank you so much for joining us.
Thank you for inviting me.
It's great to be on the podcast.
So give us the broad strokes of what happened here.
Again, like I said earlier, this is not just fake news stories, but fake journalists as well.
Yeah, absolutely.
Yeah, I mean, if I was to try and sum this up as succinctly as possible, I'd say we have a network of approximately 20 fake journalists, i.e., people who do not exist, who successfully managed to submit around 80 articles to around almost 50 different internet news outlets in the space of about six months.
The majority of those articles are also propaganda.
So what we really have here is an elaborate large-scale influence campaign, as you said, perpetrated by unknown actors, although we could hypothesize who might be behind it quite easily, I would say.
One question I have is, when did people start to notice that things were going wrong, or that the journalists didn't exist?
That's a good question.
So actually, this, I mean, this was an interesting investigation because,
there were no real alarm bells raised by the content provided by the journalists themselves.
This came to be in a very strange way. I mean, I've been writing on disinformation for some time now.
So, you know, I've established a quite large network on Twitter, people who often get in touch with me if they see something unusual happening in a certain part of the world.
So what actually happened here is that I got a message from someone who's based in the United Arab Emirates.
He messaged me because he'd received a strange message from Raphael Badani.
Now, Raphael Badani is one of the fake journalists, or one of the journalists who later turned out to be fake.
And this message basically said, oh, it seems that we have something in common.
Tawakkol Karman, who's a Yemeni activist and who was recently appointed to Facebook's Oversight Board,
she is a Muslim Brotherhood sympathizer,
and we must do our best to try and make people aware of this.
And this was the message sent by Raphael Badani to my friend.
And my friend found this slightly odd.
So he just got in touch with me and said, hey, I got this message.
What do you think?
And I saw the message and I thought, that is very strange.
So I told my friend to play along.
I said, well, you know, play along and see if you can get any more information from this guy.
So I told my friend to ask this Raphael Badani to do a Zoom meeting or a Skype meeting, and so my friend did.
And the first thing Raphael Badani did was refuse.
He refused to do a Zoom meeting, and he cited security reasons.
And obviously, that's a red flag straight out the box.
You know, why is this guy cold calling people on Twitter refusing to appear on camera?
And that just kind of affirmed some of my suspicions.
And instead of, obviously, appearing on camera, Raphael Badani then sent a link to an article.
And the article itself was on the Asia Times, which is a Hong Kong-based news outlet.
And the article was about Tawakkol Karman's appointment to the Facebook Oversight Board and how this was dangerous for freedom of speech and democracy.
And this is an argument that I've heard coming a lot out of the United Arab Emirates and Saudi.
You know, there is a strong push now to kind of stigmatize the Muslim Brotherhood.
And there was a disinformation campaign, a separate one that I identified, attacking Tawakkol Karman.
So I was very suspicious about the content of this article.
And then, because I was suspicious, I also looked at the biography of the journalist, Lynn Nguyen, who had a very generic biography.
There was nothing specific about where she'd studied or where she'd worked before.
And it just sort of said South Asia analyst interested in analyzing markets.
And I found that very odd.
And so that really started this whole investigation.
And from that, we looked at Raphael Badani's account.
That's when I got in touch with Adam.
And we started to look into Raphael Badani's account, found that he'd appropriated someone
else's photograph.
And his name led to a few other websites, including The Arab Eye and Persia Now, which were set up specifically by this network, presumably to give them a portfolio of pieces that they could use to gain credibility when approaching other editors to provide content.
So that's how it started.
Sorry.
No, no, that's wonderful.
Explain the Persia Eye Network.
And what was the other one called?
So, yeah, The Arab Eye and Persia Now.
Okay.
So I got to, yeah.
Yeah.
I mean, they're the same anyway, so it doesn't really matter what we call them.
So The Arab Eye and Persia Now are both self-described news sites providing sort of more right-wing commentary.
And I say that because they describe themselves as wanting to provide or fill the gap in the market
for the absence of right-wing commentary on the region, the region being the Middle East,
the Arab world, Iran, and Turkey.
These were relatively new websites.
They were ostensibly separate.
So there was nothing on the web pages that linked them to one another.
However, the branding was entirely the same.
The color scheme was the same.
And what Adam found was that they shared SSL certificates and Google Analytics Code.
And also, it became clear that some of the journalists on the network were writing for both The Arab Eye and Persia Now.
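That analytics-fingerprint technique can be sketched in a few lines. This is only an illustration of the general idea, not the actual investigation: the HTML snippets and the tracking ID below are invented, and a real check would fetch the live or archived pages rather than hard-coded strings.

```python
import re

# Universal Analytics tracking IDs look like "UA-<account>-<property>".
# Two "separate" sites sharing one ID are billing to the same account.
GA_ID_RE = re.compile(r"UA-\d{4,10}-\d{1,4}")

def extract_ga_ids(html: str) -> set:
    """Return every Google Analytics tracking ID found in a page's HTML."""
    return set(GA_ID_RE.findall(html))

# Invented page fragments standing in for two ostensibly unrelated sites.
site_a = '<script>ga("create", "UA-12345678-1", "auto");</script>'
site_b = "<script>gtag('config', 'UA-12345678-1');</script>"

shared = extract_ga_ids(site_a) & extract_ga_ids(site_b)
print(shared)  # a non-empty intersection links the two sites
```

The same comparison works for any embedded identifier (ad network IDs, verification tokens); shared SSL certificates are checked the same way, by comparing certificate fingerprints instead of page strings.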
So The Arab Eye and Persia Now essentially published articles about the Middle East.
Again, with a very specific slant.
Most of them were critical of Iran, critical of Turkey, critical of Qatar, and critical of the Muslim Brotherhood.
And what was interesting about them, although the majority of the content was provided again by these fake journalists,
they had tricked some real people into providing content.
So that was quite interesting.
It was obviously set up as a fake enterprise, yet they were obviously trying to gain legitimacy and credibility by roping in real people.
So wait, does that mean that they were actually paying real journalists to submit content,
or were they grabbing it and stealing it from other locations?
So, none of the stuff that we found, none of the articles that were provided, and this is what's very interesting about this whole situation, none of it was plagiarized, or at least none of it was explicitly copy-pasted.
The level of English was generally outstanding.
You know, what happened since the release of the investigation,
we've had a couple of other editors who've come out and said they were approached at some point by these fake journalists.
There was some, I forget the name of the editor, I think it's Tom, at the Hong Kong Free Press.
He said he was approached by, I believe, Cynthia Chi and Lynn Nguyen, who wanted to provide articles.
And the reason he didn't accept those articles is he was slightly suspicious because the articles, the op-eds that they were suggesting, were so polished.
And that these journalists had sort of financial backgrounds.
He was kind of suspicious about their provenance because they were just so polished.
And that's what's very interesting about this operation.
It's not a kind of two-bit, let's-just-piece-together-some-plagiarized-content-and-hope-it-sticks operation.
The people providing or doing the writing, whoever they are, clearly are competent in English
and also have some experience of international relations or journalism.
So this would suggest that there is some sort of relatively well-resourced or at least
educated outfit behind it.
It's so funny to me that an editor would immediately be like,
the copy's too clean here. This can't be a real person. I think both Jason and I have been editors before and can say that that is probably, yeah, very true.
Yeah, it's an unusual problem to have, right? Right. So do we have any idea who's writing these articles at all?
The who is very tricky. I mean, the overall, the overarching arguments presented by all the authors in their entirety offer a very specific, I would say quite specific, foreign policy alignment that suggests that they are an entity perhaps sponsored by the United Arab Emirates or perhaps Saudi Arabia.
Because, you know, this whole anti-Turkey, anti-Turkish-intervention-in-Libya angle, the anti-Muslim Brotherhood tropes, the anti-Qatar tropes. The anti-Qatar one in particular is quite specific, and would suggest, again, that it was probably a PR company working on behalf of one of these entities.
There are some PR companies who I would think would potentially be involved in doing this.
But there's no smoking gun, you know.
When you look at who shared the articles on Twitter, you can make two suppositions.
You can sort of say, if someone shared this article on Twitter, maybe they just agreed with its argument and decided to share it.
or this person knows about who's behind these articles and is sharing it because they have a vested interest in doing it.
I'd like to think people wouldn't be so stupid as to be behind it and also share it, but then these days you never know.
I mean, this is such an audacious scheme.
I suppose anything is possible.
But in terms of smoking gun, there's no smoking gun as yet.
We know that Twitter are looking into the accounts to see if they can ascertain if there is a clear state-backed nature to this operation.
And if so, which state is it?
So we may know some more in the coming days or weeks.
All right.
Can you talk a little bit more about kind of the broad editorial message that was coming from these journalists?
Like what are some of the headlines for some of the pieces?
Like, what were they saying?
What was the message they're trying to push?
Right.
So I would sort of say there's two broad aspects of this.
There's generic analysis of financial markets.
And I think some of that is fluff.
Some of that is designed to give the journalists a portfolio, so they can then go to other outlets and say, listen, I'm a journalist, I've written for Newsmax, I've written for the Washington Examiner, can you publish this? But the dominant trope was specifically anti-Iranian influence in countries such as Iraq and Lebanon. So
some of the pieces talked about the role of Hezbollah, the negative role of Hezbollah in Lebanon.
And so that's, again, a way of critiquing Iranian expansionism in the Gulf. There were several pieces on Iraq, sort of blaming Iran and Iranian influence for the instability there. Similarly, with the conflict in Libya at the moment and the various sides backing different parties, the editorial stance of a number of the articles was specifically against Turkey's involvement in Libya, and sort of more pro-Haftar.
So Haftar is the kind of UAE-backed warlord in Libya.
So that was another very specific angle of it.
There was also a few articles that were praising the United Arab Emirates' response to the COVID-19 crisis,
which was kind of an unusual departure, I suppose, from some of the more kind of conflict-based and political articles.
But also it kind of would make sense, I suppose, if that entity was involved in it.
And some of the others were specifically about Facebook's appointment of Tawakkol Karman.
And that was, you know, again, unsurprising if you know the foreign policy of the Gulf states, because there was a big push following the Arab uprisings in 2011, or really the overthrow of Morsi in Egypt, to stigmatize the Muslim Brotherhood.
And there was a massive backlash on Arabic Twitter against the appointment of Tawakkol Karman to Facebook's Oversight Board.
So a number of the articles dealt specifically with that.
And others were about Qatar sponsoring terrorism in the world.
So these are the dominant tropes: mostly anti-Qatar, anti-Turkey, anti-Iran, and anti-Muslim Brotherhood.
And most of these articles, apart from being targeted at a few international outlets like Asia Times and South China Morning Post,
were generally targeted at right-wing news outlets, let's put it that way, such as Washington Examiner,
the British Spiked Online, Human Events, Newsmax, that kind of thing.
You mentioned PR agencies.
You know, the question I have is, who has such agencies?
Is this the kind of expertise one can actually find in the Gulf itself?
or would you have to look to a PR agency in the United States, Britain, some other place like that to actually do the writing?
Well, I mean, traditionally this type of work has always been outsourced to PR companies based in Washington and London. I mean, even up to about nine years ago, London had a reputation as being, you know, the reputation-laundering capital of the world, because so many of the big PR companies doing work whitewashing human rights abuses perpetrated by dictators around the world were based in London.
And we know more recently, for example, that Lynton Crosby's firm, a big PR outfit that has done work for the British Prime Minister, Boris Johnson, has been involved in creating fake Facebook profiles, pages, and accounts to burnish the reputation of Mohammed bin Salman, the Crown Prince of Saudi Arabia, following the killing of Jamal Khashoggi in 2018.
But we know that they exist, right?
So we know that they're doing work for these countries.
At the same time, we know that some of these companies, often headquartered in the UK or the US, then have branches out in the region.
And so, you know, they'll have offices in Abu Dhabi or Doha or Kuwait or wherever and then
operate from those places.
We can also look at FARA filings, you know, the Foreign Agents Registration Act, to see who's working for whom. I mean, this is the nice thing, I suppose, about the US: at least there is some attempt at transparency. We know there are some companies who have done work, for example, on behalf of the Saudi government. I think South Five Strategies signed a deal in 2018 with SAPRAC, the Saudi American Public Relations Council or something, and they did work basically, again, to burnish the reputation of Mohammed bin Salman after the murder of Jamal Khashoggi.
So we do know that there are American and British companies working for Gulf governments who are and have been found to be engaging in what I would say are certainly deceptive tactics and manipulative tactics or social media astroturfing.
So it's not surprising, I think, or controversial to suggest that whoever's behind this particular operation could,
be one of these PR companies.
The flip side of this is the organizations that are actually publishing this stuff, so Newsmax or the Washington Examiner.
Are the editors at these operations lazy, bad?
Or are they just short-staffed?
Do we have any idea of what their motives are?
I mean, I think this is a very good question.
I mean, after the investigation was published, I sort of tracked the responses of the various editors of these publications and created a rough typology of the reactions.
I mean, firstly, let's talk about how they responded.
So a number of the organizations, I say, responded more professionally.
And by that, I mean, they published a retraction, or they created some sort of note saying: we believe that this content was provided by a fake journalist. We've removed the content but left the headline. For example, that's what the South China Morning Post did.
Spiked Online, a British publication, acknowledged that there was an investigation suggesting that the journalist was fake. However, they decided to keep the articles up for quote-unquote transparency, which I find an incredibly bizarre response, but anyway.
And then one publication, Human Events, which again is a kind of Republican-leaning, right-wing publication. The editor there, or the managing director, I think his name is Will Chamberlain, basically went the other way. He said they agreed with the arguments being presented in the article, even though they acknowledged that there was an investigation into the veracity of the journalist. But because they agreed with the article, they, again, quote-unquote, adopted it as their own, which to me is the most ludicrous response to a...
That was my favorite of the batch.
Yeah, it was absolutely incredible.
And not only did it, you know, show no humility, this guy started to kind of ad hominem attack me on Twitter for basically being part of this investigation.
And, you know, one of the editors of the same publication told me to go and F myself.
So, I mean, I don't think it's possible to even generalize about the motives of the editors.
What we can see, and again, some of the ones we interacted with, I think a lot of editors are under pressure to publish.
I think the business model of a lot of publishing, a lot of news sites now is very much, you know, we need to drive advertising revenue.
We need to get more content out there.
And that, of course, leads to some pressure on editors to get more novel content.
obviously that exists. However, that doesn't negate the fact that editors need to carefully scrutinize
those who are writing for them. And at the same time, these accounts, these fake accounts,
behaved in such a way that, you know, even if an editor was to exercise a minimum amount of due diligence and Google who was submitting articles, they would find a LinkedIn profile, probably a Facebook profile. They would find a profile on Muck Rack, which journalists use to detail their portfolios.
I think editors would have had good reason to believe that those submitting the articles were real people, right? Because they had an established profile. But scratch beneath the surface; if any of those editors had scratched beneath the surface, I think they would probably have had a lot of questions about the kind of content that was being offered. So in some ways, I don't blame the editors, because I can understand how they could have been tricked easily. Having said that, there's so much news out there. There's so much information.
I think, you know, in this kind of post-truth age where deception is so prevalent, editors should pay more attention to who they commission to write articles.
And I think, I think that's lacking.
And I hope that this investigation has kind of reiterated the need for more verification from newspaper editors.
We should just mention, hold on, there's just one thing we should mention, Matthew.
Oh, I think I know exactly where you're going. Let me do it.
Go for it.
Okay, so as you were saying that, it really speaks to something that Jason and I have been talking about and thinking about a lot lately, which is that increasingly it feels like a good journalist is not just writing articles and processing the news and kind of telling you the story of the world, but helping you sort the signal from the noise.
Right.
Because there is so much information going on there.
And to that end, we have launched a substack, which you can get at warcollege.substack.com,
and we will help you sort the signal from the noise on a weekly basis every Monday in your inbox.
I think that's where we should always go, that and begging for money.
But actually, one other thing I want to say, in the interest of our own transparency: we actually started this podcast when I was the opinion editor at Reuters. And we had one incident where the person who was writing for us, it wasn't that she wasn't real,
but she didn't bother to mention that she was being paid by Armenia. And there was really only one
paragraph that was questionable in the entire piece, which sort of showed the brilliance of
the whole plan.
Do you know what I mean?
By having so much correct information, just a little bit of spin, she got what
she wanted, and it got through us.
I mean, that's, you know, again, that's when a talented writer is important.
And again, one thing that was evident in this campaign is that when I looked through, you know, every article written by every one of these fake journalists, most were pretty good.
I could tell the bias because I study the Gulf, I study propaganda in the Gulf, I probably know what to look for, right?
If you were a general editor looking maybe at an article, for example, about the appointment of someone to the Facebook Oversight Board, you might not know some of the nuances.
But I could see that in some of these articles a small amount of balance was evident, but generally it was stacked in such a way that it was clearly propaganda.
But I would say that these articles were well written, and that would also contribute to perhaps fooling even a very diligent editor.
All right.
We're going to pause there for a break.
You're listening to War College.
We are on with Marc Owen Jones talking about misinformation.
Welcome back, War College listeners.
You are on with Marc Owen Jones, and we are talking about misinformation.
Can we talk about a kind of tangent, a side story to this thing? I think it's very interesting,
and I think it's important for understanding
like why these things are believed
and how the information is created.
Can we talk about the profile pictures and images
for the journalists themselves?
Where did these contributors come from?
Because none of them were real, so to speak, right?
But they had real photographs.
Well, this is the interesting thing.
So half, I'd say roughly half, of the fake journalists had photos that were stolen from real people.
So, for example, Raphael Badani, the fake person: his photo was stolen from a guy called Barry Daydon who lives out in California.
And whoever had stolen it had gone to this guy's Facebook page,
gone to his wife's Facebook page and found this photo and edited it.
And what they had done, in all the cases where the photos were of real people,
the fake journalists had downloaded them and flipped the image, mirrored it along the vertical axis, in order to fool, or make it more difficult for, people using reverse image searches to find those pictures, right?
Not foolproof because we did find a lot of those images.
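That mirroring trick is trivial to picture in code. The sketch below is purely illustrative: a real fabricator would flip an actual photo with an image library, while here the "image" is a tiny invented grid of pixel values so the example stays dependency-free.

```python
# Mirror an "image" along its vertical axis: each row is reversed, so the
# left edge becomes the right edge. The grid below is an invented stand-in
# for a real photo's pixel array.
def mirror_horizontally(pixels):
    """Flip each row left-to-right."""
    return [list(reversed(row)) for row in pixels]

image = [
    [1, 2, 3],
    [4, 5, 6],
]
flipped = mirror_horizontally(image)
print(flipped)  # [[3, 2, 1], [6, 5, 4]]

# The flipped copy no longer matches the original byte-for-byte, which is
# enough to defeat a naive reverse-image lookup keyed on exact hashes,
# though (as noted above) not robust perceptual-hash matching.
assert mirror_horizontally(flipped) == image  # flipping twice restores it
```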
It appeared that at least another half of the accounts were using artificially generated images, images generated by artificial intelligence.
Now this is very straightforward to do now.
You can go to the website, ThisPersonDoesNotExist.com, and every two minutes it will generate a very, very believable fake face using artificial intelligence.
And you could just keep clicking until you find a face that you like and then use that as your profile picture.
And so what's very disturbing, I suppose, about this campaign is not just its audacity, but the use of new technologies that are making traditional methods of investigation somewhat more difficult.
I mean, the best thing about someone stealing someone else's photo and then using it is that if you find it, that's almost immediately grounds to say there's something suspicious with this. The problem with the artificially generated images is that they're unique. So there's no sort of trail back to an original source, you know?
No, but with those artificial-intelligence-created images, they use something called a generative adversarial network.
Yes.
My understanding is it basically takes in a lot of input and then kind of mixes and matches different aspects of people to create a realistic-looking photograph.
Yeah.
But there are, even within that, there are tells often, like you go to the website, you can kind of see some weird ones, but you'll get some that are pretty good.
But even within those images, there are tells, right?
Like, can you talk about the things that you saw in the photographs that let someone know that it's an AI-generated image?
Yeah.
So some of the tells that are quite clear, from at least the current iteration of these GAN-produced images, are what's called water-drop or teardrop effects. This looks like a blurring of aspects of the image; it will look like someone's distorted a part of the image, so it swirls in a small way. What you'll see on a lot of these pictures, when they generate an image of a person, is that often it looks like the image has been cropped from a group of people.
And to the side of the main image, you'll see someone else whose face is highly distorted,
which is a very clear giveaway.
And also the GAN technology, again, in its current iteration,
struggles to render ears.
Ears are quite difficult for some reason.
And so ears won't look 100% convincing.
Again, they'll either exhibit that tear or water drop effect
or appear not to be stitched together correctly.
There's also a certain element of symmetry.
The eyes and the mouth will appear in the same position across different images; even though they're different people, they'll appear at the same sort of position within the image.
You could stack a lot of these images on top of each other,
and you'd find that the eyes and the mouth are all in exactly the same place.
So that's another aspect that will do it.
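That stacking check can be sketched as a simple pixel-wise average. Everything below is invented stand-in data, not real face images; the point is just that a feature sitting at identical coordinates in every image survives averaging, while everything else washes out toward the middle.

```python
# Average several equally sized greyscale "images" pixel by pixel.
# For GAN faces, the eyes and mouth sit at near-identical coordinates,
# so those regions stay sharp (extreme) in the average while the rest blurs.
def average_images(images):
    """Pixel-wise mean across a list of same-length pixel sequences."""
    n = len(images)
    return [sum(pixels) / n for pixels in zip(*images)]

# Tiny invented 1-D "faces": position 1 plays the role of an aligned dark
# eye pixel (consistently low); the other positions vary from face to face.
faces = [
    [200, 10, 180, 90],
    [140, 12, 60, 220],
    [90, 8, 240, 130],
]
avg = average_images(faces)
print(avg)  # index 1 stays extreme (dark); the varying pixels wash out
```

In the average, the aligned "eye" pixel remains the clear minimum, which is the signal an investigator would look for when stacking suspected GAN portraits.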
But if you're savvy, all you need to do is regenerate the image
and find one that doesn't look like there's any artifacts on it.
In the case of the photos that we looked at, one of the fake journalists was called Joseph Labber.
And I know Adam ran the guy's picture past a dentist and also through this Tungstène technology,
which is designed to help spot manipulated images.
The Tungstène technology flagged that the mouth and the ears were suspicious.
And the dentist also said that there was definitely something unusual about the mouth and the smile
and that there was probably, it was either fake or there was a really sad dental story behind it.
So there are ways you can suss out these fakes, but it's important to bear in mind that this technology is still in its infancy, and it's already very good.
You know, the guys at NVIDIA are already working on the second iteration of this, where they're going to be fixing those problems quite soon.
So soon those problems probably won't exist.
And like you said, this is not something where anyone needs to build, you know, a bunch of computers and string them together. There's literally a website where you just hit refresh.
Absolutely.
And you can get GAN-generated images.
Yeah, unique images ad infinitum.
All right.
What do you think, if we kind of zoom out a little bit,
because disinformation and misinformation is your specialty, right?
So the game is changing every day.
It's changing rapidly.
Yeah.
What do you think are the biggest myths and misunderstandings?
about social media and disinformation in general right now?
What do people need to know and what are they scared of that they shouldn't be?
It's a good question.
I mean, I think one of the things that I see a lot now that I think is important to note is that,
and this is sad in a way, is disinformation has become not well understood per se,
but people are aware that there is fake news and disinformation.
And what this has led to in an age where we see a lot of polarization in politics
is people dismissing opinions that don't match their own as the work of either bots or, you know, malicious actors.
You know, and I see a lot of cases where people just dismiss something they don't like as a bot, you know, simply because that is an easy defense mechanism.
So I think one of the sad things about disinformation, and one thing that we need to understand, is that one of the points of disinformation, one of the goals of malicious actors, is to remove trust between people in communicative spaces such as social media.
And because disinformation has become such a scourge, I think it's really succeeded in eroding trust between networks of people, which obviously leaves people more isolated, leaves people more divided.
And I think people need to be aware, not just of disinformation, but that disinformation isn't just about the messages it spreads; it's about the attempt to kind of fragment people along different lines and create divisions.
So I think people need to be aware that that is an outcome and that is actually happening.
And that might be a more abstract approach to this, but I think it's a very important one.
No, I think that's true.
You know, Steve Bannon's famous line, "flood the zone with shit," right?
Yeah, absolutely.
And it's something I would argue that we've seen in Russia and the surrounding countries for a long time before it's kind of come to America and taken center stage.
I think you're right.
I think the problem is, to warn people about disinformation, you need to start thinking of imaginative ways to do it, because crying disinformation can itself be rendered meaningless. People will just say, oh, you're only saying that because you disagree with the message. People have become so tribal in their beliefs that they can throw the term disinformation, bot, or fake news at anyone they disagree with. So how do you then encourage people to become critical and actually take apart the information themselves?
Part of the whole point is to make it not just to divide people, but to create an environment where there is no truth.
Or at least it's so hard to find that people, what are they going to do, spend the rest of their afternoon chasing down some minor piece of information?
No, absolutely.
I mean, the time costs are impossible.
It's an unreasonable thing.
And I think that's why this case is interesting, because a lot of us, probably most of us, still rely on the gatekeepers of information, which are journalists and media outlets.
So if those media outlets can then be reproducing this kind of propaganda, that's an alarming
state of affairs, I think.
Right, because what's going on now broadly, I think (Jason and I are both in the journalism game, and again, it's something that's rapidly changing) is that it increasingly feels like audiences are developing relationships with either specific outlets or, especially, specific personalities, right?
That's in part also why you're seeing like the rise of the YouTuber as a trusted news source.
And so you kind of, you pick your people that you trust.
And this feels like an attempt to also co-opt that.
Like, okay, well, then we could create a personality that has a message backed by some sort of shady force, right, with an ulterior motive that is not necessarily about just getting you the information, but also about kind of injecting the agenda into you as well.
Yeah, absolutely. I mean, I think what scares me about this recent case is that some of the news outlets still haven't taken down the fake articles, and you could easily get to a point where a group of people are convinced that the fake journalists aren't actually fake, right? That the Lin Nguyens and the Raphael Badanis are probably actually real people, but some liberal conspiracy has tried to smear them.
I'm not saying it would be a majority of people,
but there'd be plenty of people I think
who'd be willing to believe that too.
Which, you know,
I find quite alarming
because, again, this ties in
with that Human Events editorial.
They knew that the person was fake,
but they decided to keep it up anyway,
just because they agreed with the argument.
Have you seen any responses to this
where people have outright rejected it
because it was the Daily Beast?
Like, oh, I don't trust the Daily Beast,
so.
Well, yeah, I mean, certainly Will Chamberlain, who heads Human Events, decided to launch an ad hominem attack on the Daily Beast.
Again, it was classic whataboutism.
It's like, how can the Daily Beast talk
when they've been guilty of X, Y, and Z?
Again, not refuting the arguments,
but saying, oh, it's the Daily Beast.
And I've seen one or two people perhaps mention it, not necessarily disputing the findings in the investigation, but using the investigation as an opportunity to criticise the Daily Beast, saying, oh yeah, but it's the Daily Beast.
So there is an element of that.
But I think this is what's interesting about the forensic nature of the investigation. If you present a certain amount of data in a way that's very hard to refute, even people who might be more inclined to be conspiratorial can be convinced. So I think there is a space for that kind of forensic journalism,
which I think is useful in this day and age
because, you know,
people like to often see the kind of ins and outs
of what's actually happening.
You know, I spend a lot of my time now
doing these big threads on disinformation
doing network analysis because I like to expose
the nuts and bolts of disinformation campaigns.
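The network analysis Jones describes typically means building a graph of accounts linked by shared behavior, such as co-sharing the same URLs, and looking for suspiciously dense clusters. Here is a minimal sketch of that core idea in Python; all the account names and sharing data are invented for illustration, and real analyses layer on timing and account-metadata signals:

```python
# Sketch: surface suspiciously coordinated accounts by clustering a
# co-sharing graph. All account names and URLs here are invented.
from collections import defaultdict
from itertools import combinations

# Which accounts shared which URLs (toy data)
shares = {
    "acct_a": {"url1", "url2", "url3"},
    "acct_b": {"url1", "url2", "url3"},
    "acct_c": {"url2", "url3"},
    "acct_d": {"url9"},
}

# Build an adjacency list: connect accounts that co-shared >= 2 URLs
graph = defaultdict(set)
for a, b in combinations(shares, 2):
    if len(shares[a] & shares[b]) >= 2:
        graph[a].add(b)
        graph[b].add(a)

def clusters(graph):
    """Connected components: each is a candidate coordinated cluster."""
    seen, out = set(), []
    for node in list(graph):
        if node in seen:
            continue
        comp, stack = set(), [node]
        while stack:
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(graph[n])
        seen |= comp
        out.append(comp)
    return out

# acct_a/b/c co-share heavily; acct_d never overlaps and drops out
print(clusters(graph))
```

The clusters this surfaces are only candidates for coordination; the forensic work Jones describes is then manually checking whether the accounts in a cluster are real people or fabricated personas.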
And I think one of the things we see with online journalism
is there's often this kind of slavery to format,
you know, you do an 800-word article,
it doesn't get too technical because, you know,
that's not what people want.
When actually there probably is a market for this kind of blow-by-blow account of why something might be fake or false.
Maybe we underestimate people, actually, in terms of the content we might provide them.
That's why Bellingcat exists, right?
And it's doing well.
Yeah, absolutely.
There's also one other thing about the news organizations that are publishing this stuff,
which is that they used to pay their op-ed writers,
and they used to pay them pretty well.
So now they can't pay, partly because the news industry is falling apart, and partly because they don't want to, at some of the places that are making money. And so they just want content.
And they want whatever content they can get for free.
And I think that makes them even more vulnerable to this kind of stuff.
No, absolutely. I mean, there's actually a labor exploitation element to all this. Let's forget fake journalists for a minute. I'm an academic, right? So it's very common for people, especially those doing PhDs or even their master's, to want to get their name out there. There's a lot of pressure on academics to get their name out there, and one of the ways to do this is to start writing op-eds. So there are a lot of hungry academics providing free content to these news sites, because, you know, they're told that you need to get your name out there, and academics aren't used to being compensated.
Not in the same way, not like per article, certainly.
So I think there's a kind of understanding within the journalism industry, certainly within the political analysis and foreign affairs elements of journalism, where you do have this kind of model where people are willing to work for free. And obviously, editors will certainly exploit that.
And what's interesting about this case,
we saw in some cases that editors were offering money. So, in the example I mentioned earlier, when one of the fake journalists approached the editor at the Hong Kong Free Press, the editor mentioned that they offer a fee, a sum of money, for articles. But the fake journalist said, actually, I'm willing to waive the fee.
That's fantastic.
I mean, I love the idea of the PR firm that's writing this stuff actually taking two bites at the apple, right? So they get paid to write it, and then they get paid from the other end, by the organization.
The sales commission, right?
You get your salary and then, you know, if you pitch a successful article, you get a bonus.
Oh, the future is awful.
I mean, you know, there was that article.
I think it was Microsoft replacing a lot of its journalists with AI or something.
I do wonder what the future of the industry is.
We've got fake journalists.
We've got robot journalists.
Well, I think at a certain point you'll end up, I think Adam Curtis said something to this effect once.
People will just flee these spaces.
Right.
At a certain point, if it becomes so, like if Twitter, say, just become so completely overrun by disinformation robots,
then that's all that'll be left is just these robots kind of talking back and forth to each other.
And the human conversation will have moved off site and gone somewhere else.
And I think you can always trust...
Sorry.
Go ahead.
I was just going to say, and you can always trust your friendly neighborhood podcaster.
Right, right.
Warcollege.substack.com.
And I believe he kind of had this vision of the future.
he said the future of the internet will look much more suburban,
meaning that there will be these kind of like blocks that are very highly regulated,
very peaceful,
but peaceful because they keep a lot of other people and things and information out.
Right.
And then you'll have kind of these wild environs that are filled with the chaos and noise
of bots and fake journalists and who knows what else we're going to end up coming up with
the next few years.
Absolutely.
I mean, to an extent, I suppose the internet's always been like that, or at least it's
been like that for some time.
I mean, you could even say the dark web is already the other side of that suburban kind of dystopia, I suppose.
But yeah, I think I can see it becoming more compartmentalized because you are seeing
pushes for regulation in various aspects of the way the internet works.
You know, even Trump's recent threats against Twitter, about Twitter branding itself as a publishing company and therefore being responsible for the content on its platform, would have a huge impact on the type of content produced on Twitter, right?
So that would be a good example of how you landscape the information terrain in a specific way.
And what happens to people, do they go elsewhere?
I mean, we're seeing this big exodus again from Twitter of right-wing voices who go to places like Parler. I went on Parler recently, and it's just a lot of angry people shouting into a void.
You know, so you do see the way the space has changed by regulation and policy.
And, you know, it's a very dynamic process.
Yeah, I mean, we're living through something right now.
I always like to remember that after the invention of the printing press, there were 200 years of horrifying religious conflict.
Right.
As we sorted out, like, how this new technology was going to affect our lives.
And we're living through that again right now.
It's not necessarily a religious conflict, although there are certainly religious aspects to certain parts of it, but it is us negotiating how information is going to work now.
Yeah.
Well, I suppose, to draw on that parallel more: with the printing press, you again had people with vested interests in preserving their monopoly on knowledge, I suppose the church or the clergy. And to some extent you have that now; there are people who perhaps wish to preserve a monopoly on knowledge, though not in a religious context. At the moment, we are seeing kind of positive changes in that.
You know, we see leaks.
Edward Snowden did it.
I mean, one could argue that WikiLeaks, whether it's positive and negative, is an aspect of transparency.
But at the same time, we're seeing attempts by, in the current case, often right-wing groups to try and utilize the information space to, I suppose, grab onto power or maintain power, right?
So there's all these kind of interest groups at stake trying to use the information space to their own advantage.
And it's alarming.
It's hard to know where it's going.
But I think in the past six years, we've definitely seen this tip in favor of populism.
And that's because we were underprepared, I think.
I say we, I mean, the royal we.
I think the world was unprepared for how this would play out.
So you mostly see this on the right at the moment. Do you see any kind of similar action or disinformation campaign coming from the left at all? Is it something that you see on the other side of the political spectrum?
Yeah, I mean, it's harder to say.
So, for example, often if we look at disinformation, it's perpetuated by, or at least we think it's perpetuated by known actors.
So if we use the Internet Research Agency, the Russian agency, as an example, we know that they were creating disinformation networks who were both left-wing and right-wing.
So they were playing both sides.
However, it was stacked more in favor of the right, right?
So they were creating left-wing propaganda, but they were creating more right-wing propaganda.
So we know that they're doing both of these things.
I mean, certainly in my experience, I tend to see it coming more from the right.
It doesn't mean it doesn't exist on the left.
I mean, it obviously exists on the left, but I just don't see it in the same scale.
And there could be reasons for that.
You know, there could be because a lot of it is coming from, for example, in the Russian case,
The argument is that it wanted to see a Trump presidency and Trump is a Republican, so it makes sense to have right-wing propaganda.
But then you have people discussing, you know, Steve Bannon's networks across Europe, him trying to contain China and trying to support the kind of election of right-wing leaders across Europe using these kind of methods.
Same with Brexit.
So a lot of the high-profile cases or political events we've seen in the past seven years tend to be events that have been supported by right-wing causes or right-wing parties. So that kind of makes sense. I mean, obviously there's nothing to stop the
left using propaganda. And we know that historically, that is very true, right? Well, let me toss this out at you. And this is me just kind of thinking out loud here, working things out on the podcast, so to speak, but I would make a distinction between misinformation and disinformation. And I would say that, broadly, very broadly speaking, the left tends to be more disorganized and more prone to misinformation as opposed to disinformation.
Yeah, I mean, the difference, or at least the accepted conceptual difference between dis- and misinformation, is the intent, right? With disinformation, the intent is deliberate: people spread the information deliberately, in the knowledge that it's not true. Misinformation is accidentally communicating information that is not true. I think that's the crucial difference.
And in that definition, I could definitely see that as being true.
I think there is an intent difference between the right and the left, loosely speaking.
There's also the fact that the right wing believes that the New York Times counts as left-wing propaganda.
And the Washington Post.
No, I mean, I think there's a seriousness to that, you know?
You're right.
It is all about kind of where you're sitting and what your attitude is, right?
Yeah.
But we have to really push back against this relativism, you know, this notion that you can't have outstanding journalism, or that you can't have journalists who are committed to integrity and this kind of thing. And I have no qualms in saying that I believe the New York Times is a very robust publication, and that to just dismiss it as left-wing propaganda is an affront to the truth, I would say.
Lefty.
Well, I mean, you know, like, I don't know.
I just sort of think, you know, again, with this investigation in mind, you have publications who published this stuff, and it really shows you where people's values lie. If you're an editor who published a fake journalist and stands by that, then really, what kind of publication are you? You can't put that editorial stance on the same level as a newspaper that would take down the article and apologize, right?
It's showing you that those two entities have a very different regard for truth, provenance, and the values of journalism.
Yeah, that's a really good reflection. The reactions to this story really tell you a lot about the values of the outlets, right?
Yeah, definitely.
What do you think you learned during this investigation?
What did it teach you about the nature of disinformation in the modern age?
Well, I mean, I've been doing this for a while now, so I'm very cynical.
I think if I were to say what I'm looking for now that I wasn't before, it's that, looking at what's coming, I would say artificial intelligence is going to increase the sophistication of the kind of propaganda we're seeing. I don't think we're just going to see artificially generated human images or videos; we'll also see artificially generated content, right? Actual written content.
So, you know, I think it probably won't be long before we get very plausible propaganda articles generated by computers. And if you can generate propaganda at scale and distribute it at scale, then dominating the information space with shit, to use the Bannon quote from before, is going to be very easy.
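The generated-text-at-scale worry doesn't require anything exotic. As a harmless illustration of how cheaply plausible-looking text can be churned out (this is a toy bigram Markov chain over an invented seed corpus, not anything the campaign actually used; modern operations would use far more capable language models):

```python
# Toy illustration: a bigram Markov chain emitting plausible-ish text
# from a seed corpus. Corpus and output are invented; the point is
# only that generation at scale is computationally trivial.
import random
from collections import defaultdict

corpus = (
    "the region needs stability and the region needs investment "
    "and investment brings stability and stability brings growth"
).split()

# Map each word to the words observed to follow it
chain = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    chain[prev].append(nxt)

def generate(start, length, seed=0):
    """Walk the chain to emit up to `length` words of generated text."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length - 1):
        followers = chain.get(words[-1])
        if not followers:
            break  # dead end: the last word never had a successor
        words.append(rng.choice(followers))
    return " ".join(words)

print(generate("the", 12))
```

Scaling this up is a loop; the hard part for defenders is that fluency alone stops being a useful signal of a human author.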
So I think what these little investigations do is give us an inkling of what's coming next.
Right. So we know that PR companies have always placed op-eds and various things like that. But this operation shows a slightly more industrial scale of that: you probably have a few people with multiple personas, or multiple fake accounts, generating dozens of articles.
The next step then is to have fewer and fewer people, ideally automation, generating even more and more articles and distributing them through more and more outlets, right?
There's always an element of scale here. So I think that's what we need to be on the lookout for.
It sounds very dystopian, but I think what looking into disinformation has taught me is, you know, you can't afford to be surprised, because it's amazing what innovation and technology are actually capable of producing.
Don't give in to astonishment.
Yes, exactly.
Or give in, but only momentarily.
Right.
And have a coffee and then regroup.
So do you think there are more networks like this that just haven't been uncovered?
Yeah, absolutely. I mean, the interesting thing about this network is, when I first saw it, the Lin Nguyen articles in the Asia Times were not ostensibly connected to the Arab Eye and Persia Now articles, right? So there wasn't a direct link. The only link that existed was the fact that this suspicious character had shared this link with a friend. So what this would suggest to me is that there are units, like cells. Let's think of the cell structure.
Right, they're related but operating independently in order to preserve their overall cohesiveness. So I think these absolutely exist, and I think what finding these networks takes is a combination of people who have good OSINT skills, but who also know a specific area. Like I said, I became suspicious primarily because I'm familiar with the kind of propaganda tropes, or PR tropes, to look for in the part of the world that I specialize in, right? So you really need a combination of good journalists, area studies experts, and people who are naturally suspicious, like myself, to look into these things.
So what can the normal person do, day to day, to protect themselves from this?
Honestly, I mean, it's a big question.
I think the normal person, if we were to take that literally, is neither right nor left and could be reading anything. I mean, honestly... yeah.
Oh, boy.
We're in a lot of trouble, aren't we?
No, I can give advice, but it's unrealistic. You can give advice, but it's not something you can imagine everyone just doing. It's easy for me to say: you read an article by someone, Google that person, see what else they've written. But that's the job of the editor, right? The editor vets that person, and it's great if the individual does that as well, but I just don't expect that to happen.
I'd love that to happen, and I obviously recommend that people do that: look into who's telling the story, look into the publication itself and what its agenda is. But I don't know if people will do this.
My plea is more to the editors out there, right?
Because at the end of the day, they're still gatekeepers for knowledge.
And media literacy is something that needs to be instilled from an earlier age.
We've seen in Finland, for example, the Finns have been adept,
and the Estonians have been adept at trying to encourage people from a young age
to adopt and look at sources and evaluate sources critically when it comes to information.
And the root of this problem really is one of education and digital literacy.
Well, Finland and Estonia are also on the front lines of something that we're not, right?
They're dealing with a whole different paradigm there.
But that's come to the U.S. now.
I don't think the U.S. or Europe can avoid it, you know; the whole point of the internet is that it's dissolved, in many ways, traditional borders.
You know, Estonia and Finland geographically have always been on the border with Russia, and because of that, as you said, have been very aware of this disinformation and subterfuge and infiltration.
But we know enough now to know that we do not have the luxury of geographical distance.
It doesn't exist.
So those kind of media literacy, information literacy endeavors need to actually be adopted by U.S. policymakers, European policymakers, African policymakers, everyone.
It's fundamental.
All right. I think that's a good place to end on.
And just Jason, do you have any follow-ups?
No. I think that we've once again managed to scare the public and leave them, you know, a little less happy than they were before.
It's not a war college episode unless it ends with a little misery.
I found my people then.
Yeah, no, this is usually how the episodes end: we put everything in context, and then everything gets really sad in the last couple minutes.
And there's no hope.
You need some sad music as well just to really push it home, get the melodrama going.
Marc Owen Jones, thank you so much for coming on the show.
Where can people find your work and start learning more about, and get educated, basically?
My Twitter feed's a good place to go.
So Marc Owen Jones is my Twitter handle, and there are links there to my blog and various other things.
I've got a book coming out early next year on disinformation in the Middle East.
So keep an eye out for that, I guess.
We would love to get an early copy and then have you back on the show.
I will send you one.
I will send you one.
Absolutely.
I'd love to.
Yeah.
I have to write it first, obviously.
Actually, all you need is an AI.
Yeah, exactly, right?
I got the contract.
You don't have to write it at all.
No, exactly.
I'll get my robot to do it.
All right.
Thank you so much.
Thank you guys. Enjoy the rest of the day.
That's it for this week, War College listeners. War College is me, Matthew Gault, with Jason Fields and Kevin Knodell, and the show was created by myself and Jason Fields. We have a Substack now.
Warcollege.substack.com. That's the Information War newsletter. Every week, Jason, Kevin, and I are putting together a newsletter that rounds up all of the defense news you need to be watching, with a little bit of light commentary and context from ourselves.
There are also going to be bonus episodes of the show coming to Substack, some rare interviews, things that are left on the cutting room floor from our day jobs, stuff that we think is important, that you need to know. It is hard out there on the internet, as you know from this episode, to find out just what the hell is going on, who's telling the truth, who are the journalists you can trust. You've been listening to us for years now. You can trust us. Go to warcollege.substack.com and sign up for the newsletter.
It's free right now.
Very soon, we're going to start charging a dollar a month for the newsletter,
and then $9 a month will get you the bonus episodes that we're going to be putting
out every month from War College.
Failing that, you can find us online at Warcollegepodcast.com. We're on Facebook at facebook.com/warcollegepodcast, and on Twitter at @War_College. We're on iTunes and everywhere else fine pods are casted. We will see you next week for another conversation about a world in conflict.
