Angry Planet - Fake Journalists Are the Latest Disinformation Twist

Episode Date: July 15, 2020

Last week The Daily Beast broke some bizarre news. Several news outlets, including The Washington Examiner, RealClearMarkets, and The National Interest, had been running op-eds by journalists that did not exist: AI-generated photos attached to profiles and credentials that, once scrutinized, collapsed. It was a massive effort at digital propaganda, and questions still remain about its provenance and purpose.

Here to explain just what is going on is Marc Owen Jones. Jones is an assistant professor in Middle East Studies and Digital Humanities at Hamad Bin Khalifa University and an expert in social media disinformation who helped sound the alarm about this campaign.

Recorded 7/13/20

- Fake journalists have joined the fray
- Tracking the responses of the duped outlets
- The difference between misinformation and disinformation
- Media literacy in Estonia and Finland
- A website that generates people who don't exist

War College has a Substack! Join the Information War to get weekly insights into our angry planet and hear more conversations about a world in conflict: https://warcollege.substack.com/

You can listen to War College on iTunes, Stitcher, Google Play or follow our RSS directly. Our website is warcollegepodcast.com. You can reach us on our Facebook page: https://www.facebook.com/warcollegepodcast/; and on Twitter: @War_College.

Support this show: http://supporter.acast.com/warcollege. Hosted on Acast. See acast.com/privacy for more information.

Transcript
Starting point is 00:00:00 Love this podcast? Support this show through the ACAST supporter feature. It's up to you how much you give, and there's no regular commitment. Just click the link in the show description to support now. It appeared that at least another half of the accounts were using artificially, or sort of images generated by artificial intelligence. Now, this is very straightforward to do now. You can go to the website, This Person Does Not Exist.com,
Starting point is 00:00:30 and every two minutes it will generate a very, very believable, uh, fake face using artificial intelligence. And you could just keep clicking until you find a face that you like and then use that as your profile picture. You're listening to War College, a weekly podcast that brings you the stories from behind the front lines. Here are your hosts. Hello, welcome to War College. I'm Matthew Gault. And I'm Jason Fields. Last week, the Daily Beast broke some bizarre news. Several news outlets, including the Washington Examiner, RealClearMarkets, and the National Interest, have been running op-eds by journalists that didn't exist. Not just that
Starting point is 00:01:31 their articles didn't exist, if the information was bad, the journalists themselves did not exist. AI-generated photos attached to profiles and credentials that, when scrutinized, collapsed. It was a massive effort at digital propaganda, and questions still remain about its providence and purpose. Here to explain just what is going on is Mark Owen Jones. Jones is an assistant professor in Middle East Studies and Digital Humanities at Hamad bin Khalifa University and an expert in social media disinformation. He helped sound the alarm about this campaign. Sir, thank you so much for joining us. Thank you for inviting me.
Starting point is 00:02:08 It's great to be on the podcast. So give us the broad strokes of what happened here. Again, like I said earlier, this is not just fake news stories, but fake news stories. journalists as well? Yeah, absolutely. Yeah, I mean, if I was to try and sum this up as succinctly as possible, I'd say we have a network of approximately 20 fake journalists, i.e. people who do not exist, who successfully managed to submit around 80 articles to around almost 50 different internet news outlets in
Starting point is 00:02:42 the space of about six months. The majority of those articles were also propaganda. So what we really have here is an elaborate large-scale influence campaign, as you said, perpetrated by unknown actors, although we could hypothesize who might be behind it quite easily, I would say. One question I have is, when did people start to notice that things were going wrong or that the journalist didn't exist? That's a good question. So actually this, I mean, this was an interesting investigation because there was no real alarm bells flagged from the content as provided by the journalists themselves. The reason this came to be in a very strange way, I mean, I've been writing on disinformation for some time now. So, you know, I've established
Starting point is 00:03:32 a quite a large network on Twitter, people who often get in touch with me if they see something unusual happening in a certain part of the world. So what actually happened here is that I got a message from someone who's based in the United Arab Emirates. He messaged me because he'd received a strange message from Rafael Badani. Now, Rafael Badani is one of the fake journalists or one of the journalists who later turned out to be fake. And this message basically said, oh, it seems that we have something in common. Tuakal Carmen, who's a Yemeni activist and who was appointed recently to the head of Facebook's oversight board. She is a Muslim Brotherhood sympathizer, and we must do our best to try and make people aware of this.
Starting point is 00:04:16 And this was the message sent by Rafael Badani to my friend. And my friend found this slightly odd. So he just got in touch with me and said, hey, I got this message. What do you think? And I saw the message and I thought, that is very strange. So I told my friend to play along. I said, well, you know, play along and see if you can get any more information from this guy. So I told my friend to ask this, this Rafael Badani to do a Zoom meeting or a Skype meeting.
Starting point is 00:04:40 And so my friend did. And the first thing Rafael Badani did was refuse. He refused to do a Zoom meeting. And he cited security reasons. And obviously, that's a red flag straight at the box. You know, why is this guy cold calling people on Twitter refusing to appear on camera? And that just kind of affirmed some of my suspicions. And instead of obviously appearing on camera, Rafael Badani then sent a link to an article.
Starting point is 00:05:03 And the article itself was on the Asia Times, which is a Hong Kong-based news outlet. And the article was about to Rackle Carmen's appointment to the Facebook Oversight Board and how this was dangerous for freedom of speech and democracy. And this is an argument that I've heard coming a lot out of the United Arab Emirates and Saudi. You know, there is a strong push now to kind of stigmatize the Muslim Brotherhood. And there was a disinformation campaign, a separate one that I identified attacking to Okul Kama. So I was very suspicious about the content of this article. And then because I was suspicious, I also looked at the biography of the journalist, Lynn Nguyen,
Starting point is 00:05:40 who had a very generic biography. You know, there's nothing specific about where she'd study. or where she'd worked before. And it just sort of said South Asia analysts interested in analyzing markets. And I found that very odd. And so that really started this whole kind of this kind of investigation. And from that, you know, we looked at Rafael Bidani's account. That's when I got in touch with Adam.
Starting point is 00:06:03 And we started to look into Raphael Bidani's account. Found that he'd appropriated someone else's photograph. And his name led to a few other websites, including the Arab Eye and Persia now, which was set up specifically by this network, presumably to give them a portfolio of pieces that they could use to gain credibility when approaching other editors to provide content. So that's how started. Sorry.
Starting point is 00:06:30 No, no, no, that's wonderful. Explain the Persia Eye Network and what was the other one called? So, yeah, the Arab Eye and Persia now. Okay. So I got to, yeah. Yeah, I mean, they're the same anyway, so it doesn't really matter what we call them. So the Arabi and Persian now both self-described new sites providing more right-wing commentary. And I say that because they describe themselves as wanting to provide or fill the gap in the market for the absence of right-wing commentary on the region, the region being the Middle East, the Arab world, Iran, and Turkey.
Starting point is 00:07:07 These were relatively new websites. They were ostensibly separate. So there was nothing on the web pages that linked them to one another. However, the branding was entirely the same. The color scheme was the same. And what Adam found was that they shared SSL certificates and Google Analytics Code. And also it became clear that some of the journalists on the network were writing for both the Arab Eye and Persia Now. So the Arabi and Persian Now, they essentially published articles about the Middle East.
Starting point is 00:07:40 again, with a very specific slant. Most of them were critical of Iran, critical of Turkey, critical of Qatar, and critical of the Muslim Brotherhood. And what's interesting about them, although the majority of the content was provided again by these fake journalists, they had tricked some real people into providing content. So that was quite interesting. It was obviously set up as a fake enterprise, yet they were obviously trying to gain legitimacy credibility by roping in kind of fake accounts by real people, sorry.
Starting point is 00:08:13 So wait, does that mean that they were actually paying real journalists to submit content? Or were they grabbing it and stealing it from other locations? So they were getting, none of the stuff that we found, not the articles that were provided. And this is also very interesting about this whole situation. None of it was plagiarized, or at least none of it was explicitly copy-pasted plagiarized. The level of English was generally outstanding. And, you know, what happened since the release of the investigation, we've had a couple of other editors who've come out and say they were approached at some point by these fake journalists. There was some, I forget the name of the editor, I think it's Tom, at the Hong Kong Free Press.
Starting point is 00:08:58 He said he was approached by, I believe, Cynthia Chi and Lynn Nguyen, who wanted to provide articles. And the reason he didn't accept those articles is he was slightly suspicious because the articles, the op-eds that they were suggesting, were so polished and that these journalists had sort of financial backgrounds. He was kind of suspicious about their provenance because they were just so polished. And that's what's very interesting about this operation. It's not a kind of two-bit, let's just piece together some plagiarized content and hope it sticks. the people providing or doing the writing, whoever they are, clearly are competent in English, and also have some experience of international relations or journalism. So this would suggest that there is some sort of relatively well-resourced or at least
Starting point is 00:09:44 educated outfit behind it. It's so funny to me that an editor would immediately be like, the copy's too clean here. This can't be a real person. I think both Jason and I have been editors before and can say that that is probably, yeah, very true. Yeah, it's an unusual problem to have, right? Right. So do we have any idea who's writing these articles at all?
Starting point is 00:10:15 The who is is very tricky. I mean, the discourse, the overall, all the overarching arguments presented by all the authors in their entirety. offer a very specific, or I would say quite a specific foreign policy alignment that puts them, or suggests that they are an entity perhaps sponsored by the United Arab Emirates or perhaps Saudi Arabia. I would, because, you know, this whole anti-Turkey, anti-Turkish intervention in Libya, the anti-Muslim Brotherhood tropes, the anti-Katah tropes, the Antikato one in particular is quite specific, would suggest, again, that it was probably, it's probably a PR company. working on behalf of one of these entities. There are some PR companies who I would think would potentially be involved in doing this. But there's no smoking gun, you know. What I tended to look, when you look at who shared the articles on Twitter,
Starting point is 00:11:11 you can make two summations. You can sort of say, if someone shared this article on Twitter, maybe they just agreed with its argument and decided to share it. Or this person knows about who's behind these articles and is sharing it because they have a vested interest in doing it. I'd like to think people wouldn't be so stupid as to be behind it and also share it, but then these days you never know. I mean, this is such an audacious scheme.
Starting point is 00:11:35 I suppose anything is possible. But in terms of smoking gun, there's no smoking gun as yet. We know that Twitter are looking into the accounts to see if they can ascertain if there is a clear state-back-linked nature to this operation, and if so, which state is it? So we may know some more in the coming days or weeks. All right. Can you talk a little bit more about kind of the broad editorial message that was coming from these, from these journalists?
Starting point is 00:12:00 Like, what are some of the headlines for some of the pieces? Like, what were they saying? What was the message they're trying to push? Right. So I would sort of say there's two broad aspects of this. There's generic analysis of financial markets. And I think some of that is fluff. Some of that is designed to create, give the journalists a portfolio that they can then go to other outlets and say, listen, I'm ex-journalists.
Starting point is 00:12:22 I've written for Newsmax. I've written to the Washington Examiner. Can you publish this? But the dominant trope was specifically anti-Iranian influence in countries such as Iraq and Lebanon. So some of the pieces talked about the role of Hezbollah, the negative role of Hezbollah and Lebanon. And so that's, again, it's a way of critiquing Iranian expansionism in the Gulf. There were several pieces on Iraq and sort of blaming Iran and Iranian influence for the instability in Iraq. Similarly, there was, you know, obviously with the conflict in Libya at the moment and the various sides backing different parties, the editorial stance of a number of the articles were specifically about against Turkey's involvement in Libya and sort of more pro-HFTA.
Starting point is 00:13:11 So Haftar is the kind of UE-backed warlord in Libya. So that was another very specific angle of it. There was also a few articles that were praising the United Arab Emirates' response to the COVID-19 crisis, which was kind of an unusual departure, I suppose, from some of the more kind of conflict-based and political articles. But also it kind of would make sense, I suppose, if that entity was involved in it. And some of the others was specifically about Facebook's appointment of Tuakl Khamun. and that that was you know that's again unsurprising if you know the foreign policy of the Gulf
Starting point is 00:13:51 states because again there's a big push following the Arab uprisings in 2011 or really the overthrow of Morsi in Egypt to stigmatize the Muslim Brotherhood and there was a massive backlash from on Arabic Twitter against the appointment to Swakle Carmen to Facebook's oversight board
Starting point is 00:14:08 so a number of the articles dealt specifically with that and another was about Qatar sponsoring terrorism in the world. So these are the dominant tropes, mostly anti-Kata, Turkey, anti-Iran, and anti-Muslim Brotherhood. And most of these articles, apart from being targeted at a few international outlets,
Starting point is 00:14:27 like Asia Times and South China Morning Post, were generally targeted at right-wing news outlets, let's put it that way, such as Washington Examiner, the British spiked online, human events, newsmax, that kind of thing. You mentioned PR agencies.
Starting point is 00:14:48 You know, the question I have is, who has such agencies? Is this the kind of expertise one can actually find in the Gulf itself? Or would you have to look to a PR agency in the United States, Britain, some other place like that to actually do the writing? Well, I mean, this is, traditionally, this type of work has always been outsourced to PR companies based in Washington and London. I mean, even up to about nine years ago, London had a reputation as being, you know, the reputation laundering capital of the world, because so many of the big PR companies doing work, whitewashing human rights abuses perpetuated by dictators around the world were based in London. And we know more recently, for example, we know that. Linton Crosby, which is a big PR firm that has done work for the British Prime Minister, Boris Johnson, has been involved, for example, in creating fake Facebook profiles and pages and accounts
Starting point is 00:15:53 to burnish the reputation of Mohammed bin Salman, the Crown Prince of Saudi, following the killing of Jamal Khashoggi in 2018. But we know that they exist, right? So we know that they're doing work for these countries. At the same time, we know that some of these companies that often have are headquartered the UK or the US then have branches out in the region. And so, you know, they'll have offices in Abu Dhabi or Doha or Kuwait or whatever and then operate from those places.
Starting point is 00:16:23 We can also look at FARA filings, you know, the Foreign Agents Registrations Act to see who's, I mean, this is the nice thing, I suppose, about the US is that at least there is some attempt at transparency. We know there are some companies who have done work, for example, you know, on behalf of the Saudi government. I think South Five Strategies signed a deal in 2018 with Saprak, which are the Saudi American I don't know, Public Relations Council or something. And they did work basically to, again, furnish the reputation of Mohammed bin Salman after
Starting point is 00:16:58 the murder of Jamal Khashoggi. So we do know that there are American and British companies working for Gulf governments who are and have been found to be engaging in what I would say are certainly deceptional tactics and manipulative tactics, or social media astroturfing. So it's not surprising, I think, or controversial to suggest that whoever's behind this particular operation could be one of these PR companies. The flip side of this is the, are the organizations that are actually publishing this stuff.
Starting point is 00:17:32 so Newsmax or the Washington Examiner Are the editors at these operations lazy, bad? Or are they just short-staffed? Do we have any idea of what their motives are? I mean, I think this is a very good question. I mean, after the investigation was published, I sort of tracked the responses
Starting point is 00:18:01 of the various editors of these publications and created a rough typology of the reactions. I mean, firstly, let's talk about how they responded. So a number of the organizations, I say, responded more professionally. And by that, I mean, they published a retraction or they created some sort of note saying, we believe that this content was provided by a fake journalist. We've removed the content, but left the headline.
Starting point is 00:18:28 For example, that's what the South China Morning posted. Spiked online a British publication, acknowledged that they, that was an investigation that suggested that the journalist was fake. However, they decided to keep the articles up for quote unquote transparency, which I find incredibly bizarre response. But anyway. And then one publication, Human Events, which again is a kind of Republican-leaning, right-wing leaning publication. the editor there basically went the other way and said the editor or the manager director
Starting point is 00:19:07 I think his name is Will Chamberlain he said they agreed with the arguments being presented in the article even though they acknowledged that there was an investigation into the veracity of the journalist but because they agreed with the article they again quote unquote adopted it as their own which to me is the most ludicrous response to a
Starting point is 00:19:27 That was my favorite of the batch. Yeah, it was absolutely incredible. And not only was it, you know, showed no humility, this guy started to kind of ad hominem attack me on Twitter for basically being part of this investigation. And, you know, one of the editors of the same publication told me to go and F myself. So, I mean, I don't think it's possible to even generalize about the motives of it editors. what we can see, and again, some of the ones we interacted with, I think a lot of editors are under pressure to publish. I think the business model of a lot of publishing,
Starting point is 00:20:06 a lot of new sites now is very much, you know, we need to drive advertising revenue, we need to get more content out there. And that, of course, leads to some pressure on editors to get more novel content. Obviously, that exists. However, that doesn't negate the fact that editors need to carefully scrutinize those who are writing for them.
Starting point is 00:20:26 And at the same time, these accounts, these fake accounts, behaved in such a way that, you know, even if an editor was to exercise a minimum amount of due diligence in Google who they were, who was submitting articles, them, they would find a LinkedIn profile, probably a Facebook profile. They would find a profile on muck rack, which is journalists used to detail their portfolio. So I think editors would have had a good reason to believe that those submitting the articles were real people, right? Because they'd had. an established profile. But scratch beneath the surface, if any of those editors are scratched beneath the surface, I think they would probably have had a lot of questions about the kind of content that was being asked. So in some ways,
Starting point is 00:21:07 I don't blame the editors because I can understand how they could have been tricked easily. Having said that, I think, if you are, I think there's so much news out there, there's so much information. I think, you know,
Starting point is 00:21:19 if, I think editors should, in this kind of post-truth age, where deception is so, prevalent, pay more attention to who they commissioned to write articles. And I think that's lacking. And I hope that this investigation has kind of reiterated the need for more verification, you know, the need for more verification from newspaper editors.
Starting point is 00:21:47 We should just mention, hold on. There's just one thing we should mention, Matthew, if you know. I think I know exactly where you're, I think I know exactly. Let me do it. You go for it. exactly what you're going to do. Okay, so as you were kind of saying that, it really, it really speaks to something that Jason and I have been talking about and thinking about a lot lately, which is that increasingly it feels like a good journalist is not just writing
Starting point is 00:22:11 articles and processing the news and kind of telling you the story of the world, but helping you sort the signal from the noise. Because there is so much information going on there. And to that end, we have launched a substack, which you can get at warcollege.com dot com and we will help you sort the signal from the noise on a weekly basis every Monday in your inbox. I think that's where we should always go, that and begging for money. But actually, one other thing I want to say is in the interest of our own transparency, we actually started this podcast when I was the opinion editor at Reuters.
Starting point is 00:22:51 and so I just, we had one incident where the person who was writing for us, it wasn't that she wasn't real, but she didn't bother to mention that she was being paid by Armenia. And there was really only one paragraph that was questionable in the entire piece, which sort of showed the brilliant. of the whole plan. Do you know what I mean? By having so much correct information, just a little bit of spin,
Starting point is 00:23:28 she got what she wanted, and it got through us. I mean, that's, you know, again, that's when a talented writer is important. And again, that's one thing that was evident in this campaign is that when I looked through, you know, every article written
Starting point is 00:23:44 by everyone of these fake journalists, most were pretty good. Most I could tell the bias because I study the Gulf, I study propaganda in the Gulf, I probably know what to look for, right? If you were a general editor looking maybe at an article, for example, about the appointment of someone to the Facebook Oversight Board, you might not know some of the nuances. But I could see that the way in some of these articles, a small amount of, I say a small amount of balance was evident, but generally it was stacked in such a way that it was clearly propaganda. But I would say that these articles were well written, and that would also concentrate. to perhaps fooling even a very diligent editor.
Starting point is 00:24:25 All right. We're going to pause there for a break. You're listening to War College. We are on with Mark Owen Jones talking about misinformation. Welcome back, War College listeners. You are on with Mark Owen Jones, and we are talking about misinformation. Can we talk about this kind of a tangent side story to this thing, but I think it's very interesting.
Starting point is 00:24:46 And I think it's important for understanding why these things are believed and how the information's created. Can we talk about the profile pictures and images for the journalists themselves? Where did these contributors come from? Because none of them were real, so to speak, right? But they had real photographs. Well, this is the interesting thing. So half, I say half, roughly half of the fake journalists had photos that were stolen from real people. So for example, Rafael Bidani, which is the fake person, his photo was stolen from a guy called Barry Derry. Daydon who lives out in California. And whoever had stole it had gone to this guy's Facebook page,
Starting point is 00:25:28 gone to his wife's Facebook page and found this photo and edited it. And what they had done, in all the cases where the photos were of real people, the people had downloaded, the fake journalists had downloaded them, flipped the image. So mirrored the image along the vertical axis in order to fool or make it more difficult for people using reverse images, searches, to find those pictures, right? Not foolproof because we did find a lot of those images.
Starting point is 00:25:56 It appeared that at least another half of the accounts were using artificially or sort of images generated by artificial intelligence. Now this is very straightforward to do now. You can go to the website, This Person Does Not Exist.com, and every two minutes it will generate a very, very believable fake face using artificial intelligence. And you could just keep clicking until you find a face that you like. and then use that as your profile picture.
Starting point is 00:26:25 And so what's very disturbing, I suppose, about this campaign is not just its audacity, but the use of new technologies that are making traditional methods of investigation somewhat more difficult. I mean, the best thing about someone stealing someone else's photo and then using it is that if you find it, that's almost immediately grounds to say, there's something suspicious with this. The problem with the artificially generated images that they're unique. So there's no sort of trail back to an original source, you know. No, but with those artificially intelligence-created images, they use this something called a generative adversarial network.
Starting point is 00:27:03 Yes. To kind of basically like compile. My understanding is that takes in a lot of input and then kind of mixes and matches different aspects of people to create a realistic looking photograph. Yeah. But there are, even within that, there are tell. often, like you go to the website, you can kind of see some weird ones, but you'll get some that are pretty good. But even within those images, there are tells, right?
Starting point is 00:27:28 Like, there is ways that you could, can you talk about like the things that you saw in the photographs that let someone know that it's an AI generated image? Yeah. So some of the tells that are quite clear from at least the current kind of iteration of this generative adversarial network produced images are what's called water drops or. or teardrop effects. So this looks like a blurring of aspects of the image. It will look someone's like someone's distorted a part of the image, so it swirls in a small way. What you'll see on a lot of these pictures is when they generate an image of a person, is that often it looks like the images has been cropped from a group of people. And to the side of the main image, you'll see someone else whose face is highly distorted, which is a very clear giveaway.
Starting point is 00:28:16 And also the GAN technology, again, in its current iteration, struggles to render ears. Ears are quite difficult for some reason. And so ears won't look 100% convincing. Again, they'll either exhibit that tear or water drop effect or appear not to be stitched together correctly. There's also a certain element of symmetry. Eyes and the mouth will appear in the same position across different images, even though they're different people, they'll appear at the same sort of position within the image. You could stack a lot of these images on top of each other, and you'd find that the eyes and the
Starting point is 00:28:57 mouth are all in exactly the same place. So that's another aspect that will do it. But if you're savvy, all you need to do is regenerate the image and find one that doesn't look like there's any artifacts on it. In the case, in the photos that we look like, one of the fake journalists was called Joseph Labber. And I know Adam, he ran the guy's picture past the dentist and also using this tungsten technology, which is designed to help spot manipulated images. The tungsten technology flagged that the mouth and the ears were suspicious. And the dentist also said that there was definitely something unusual about the mouth and the smile and that there was probably, it was either fake or there was a really sad dental story behind it.
Starting point is 00:29:42 So there are ways you can suss out these fakes, but it's important to bear in mind that this technology is done in its infancy, and it's already very good. They're already working on the second iteration of this. The guys at NVIDIA are working on the second iteration of this, where they're going to be fixing those problems quite soon. So soon those problems probably won't exist. And like you said, this is not something that anyone needs to build, you know, a bunch of computers and string the together to do, there's literally a website where you just hit refresh. Absolutely. And you can get generated images on the...
Starting point is 00:30:18 Yeah, unique images at infernalism. All right. What do you think... If we can kind of zoom out a little bit, because disinformation and misinformation is your specialty, right? So the game is changing every day. It's changing rapidly. Yeah.
Starting point is 00:30:39 What do you think are the biggest myths and misunderstands? understandings about social media and disinformation in general right now? What do people need to know and what are they scared of that they shouldn't be? It's a good question. I mean, I think one of the things that I see a lot now that I think is important to note is that, and this is sad in a way, is disinformation has become not well understood per se, but people are aware that there is fake news and disinformation. And what this has led to in an age where we see a lot of polarization in politics is people dismissing opinions that they do not find. matching their own as the work of either bots or, you know, malicious actors. You know, and I see a lot of cases where people just dismiss something they don't like as a bot, you know, simply because that is an easy, is an easy defense mechanism. So I think one of the sad things about this information, and one thing that we need to understand is one of the points of this information, one of the points of malicious actors,
Starting point is 00:31:38 is to remove trust between people in communicative spaces, such as, social media. And because this information has become such a scourge, I think it's succeeded really in creating a lot of eroding trust between networks of people, which obviously leaves people more isolated, leaves people more divided. And I think people need to be aware, not just of disinformation, but that disinformation isn't just about the message is spreading, but about the attempt to kind of fragment people in different lines and create divisions. So I think people need to be aware that that is an outcome and that is actually happening. And that might be a more abstract approach to this, but I think it's a very important one.
Starting point is 00:32:19 No, I think that's true. You know, Steve Bannon's famous line flood the zone with shit, right? Yeah, absolutely. And it's something I would argue that we'd see, that we've seen in Russia and the surrounding countries for a long time before it's kind of come to America and taken center stage. I think you're right. I think the problem is, is trying to warn people of disinformation becomes you need to try and start thinking imaginative ways to do it because to cry disinformation again is, it can be rendered meaningless because people would just say that, oh, you're just
Starting point is 00:32:52 saying that because you disagree with the message, you know? People have become so tribal in their beliefs that they can just throw the term disinformation bot or fake news at anyone they disagree with. So how do you then encourage people to become critical and actually take apart the information themselves. Part of the whole point is to make it, not just to divide people, but to create an environment where there is no truth. Or at least it's so hard
Starting point is 00:33:18 to find that people, what are they going to do, spend the rest of their afternoon chasing down, you know, some minor piece of information? No, absolutely. I mean, the time costs are impossible. It's an unreasonable thing. And I think that's why this case is interesting, because at least a lot of us still rely, and most of us probably still rely on the gatekeepers of information, which are journalists and media outlets.
Starting point is 00:33:44 So if those media outlets can then be reproducing this kind of propaganda, that's an alarming state of affairs, I think. Right, because you, what's going on now broadly, I think, Jason and I are both in the journalism game, and again, it's something that's rapidly changing. it's increasingly feeling like audiences are developing relationships with specific, either specific outlets or especially specific personalities, right? That's in part also why you're seeing the rise of the YouTuber as a trusted news source. And so you kind of, you pick your people that you trust, and this feels like an attempt to also co-opt that. Like, okay, well, then we could create a personality that then has a message that is backed by
Starting point is 00:34:30 some sort of shady force, right, that has an ulterior motive that is not necessarily about just getting you the information, but also about kind of injecting the agenda into you as well. Yeah, absolutely. I mean, I think what scares me about this recent case is imagine, I mean, some of the news outlets still haven't taken down the fake articles, but you could get to a point where you could easily have a group of people who are convinced that the fake journalists aren't actually fake, right? So the Lin-New and the reference. Rafael Bidani are probably actually real people, but, you know, some liberal conspiracy has sort of try and smear them. And there would be, I'm not saying it would be a majority of people, but there'd be plenty of people I think who'd be willing to believe that too. Which, you know, I find quite alarming because, again, this ties in with that human events editorial. They knew that the person was fake, but they decided to keep it up anyway, just because they agreed with the argument. have you seen any responses to this where people have outright rejected it because it was the Daily Beast? Like, oh, I don't trust the Daily Beast, so.
Starting point is 00:35:37 Well, yeah, I mean, certainly Will Chamberlain, the heads, human events decided to launch an ad hominonym attack on Daily Beast. Again, it was classic Whataboutism. It's like, how can the Daily Beast talk when they've been guilty of X, Y, and Z? Again, not refuting the arguments, but saying, oh, it's the Daily Beast. And I've seen one or two people perhaps mention it, not necessarily disputing or disputing the findings in the investigation, but using the investigation as an opportunity to criticize the Daily Beast, saying, oh, yeah, but it's the daily beast. So there is an element of that. But I think, and I think this is what's interesting with the forensic nature of the investigation.
Starting point is 00:36:14 I think once, if you present a certain amount of data in a very, in a way that's very hard to refute, even people who might be, you know, more inclined to be conspiratorial, can be. convinced, you know, so I think there is a, there is a space for that kind of forensic type journalism, which I think is useful in this day and age because, you know, people like to often see the kind of ins and outs of what's actually happening. You know, I spend a lot of my time now doing these big threads on disinformation, doing network analysis, because I like to expose the nuts and bolts of disinformation campaign. And I think one of the things we see with online journalism is there's such there's often this kind of slavery to format you know you do an 800 word article it doesn't get too technical because you know that's not what people want when actually
Starting point is 00:37:03 there probably is a market for this kind of blow-by-blow account of why something might be fake or false maybe we underestimate people actually in terms of the content we might provide them well that's why bellingcat exists right and is doing well yeah absolutely there's also one other thing about the news organizations that are publishing this stuff, which is that they used to pay. They used to pay their op-ed writers, and they used to pay them pretty well. So now they can't pay, partially because the way the news industry is falling apart, and partly because they don't want to at some of the places that are making money. And so you're getting, they just want content. And they want whatever content they can get for free.
Starting point is 00:37:56 And I think that makes them even more vulnerable to this kind of stuff. No, absolutely. I mean, there's actually a labor exploitation element to all this. You know, people will, I mean, forget a fake journalist for a minute, but, you know, I'm an academic, right? So it's very common for people, especially doing PhDs or even their masters to want to get their name out there. There's a lot of pressure on academics to get their name out there.
Starting point is 00:38:19 And one of the ways to do this is to start writing op-heads. So there is a lot of people, a lot of hungry academics providing free content to these news sites because they're told that you need to get your name out there and academics aren't used to being compensated. Not in the ways, not like per article, Trishol certainly. So there's certainly, I think there's a kind of understanding within the journalism industry. Certainly within the political analysis foreign affairs elements of journalism where you do have this kind of model. where people are willing to work for free. And obviously, editors will certainly exploit that. And what's interesting about this case, we saw in some cases where editors were offering money.
Starting point is 00:39:04 So I think it was the example I mentioned earlier, when one of the fake journalists approached the editor at the Hong Kong Free Press, the guy, the editor mentioned that they offer a fee, offer a sum of money for articles. but the fake journalist also said, actually, I'm willing to waive the fee. That's fantastic. I mean, I love the idea of the PR firm that's writing this stuff, actually taking two bites at the Apple, right? So they get paid to write it, and then they get paid, you know, from the other end, from the organization.
Starting point is 00:39:43 The sales commission, right? You get your salary, and then, you know, if you pitch a successful article, you get a bonus. Oh, the future is awful. I mean, there was that article recently. I think it was Microsoft, we're replacing a lot of its journalists with AI or something. I do wonder what the future of the industry is. We've got fake journalists. We've got robot journalists.
Starting point is 00:40:09 Well, I think at a certain point, you'll end up, I think Adam Curtis said something to this effect once. People will just flee these spaces. Right. At a certain point, if it becomes so, like if Twitter, say, just becomes so completely overrun by disinformation robots, then that's all that'll be left is just these robots kind of talking back and forth to each other. And the human conversation will have moved off site and gone somewhere else. And I think he can always trust. Sorry. Go ahead.
Starting point is 00:40:43 I was just going to say, and you can always trust your friendly neighborhood podcaster. Right, right. Warcollege.substack.com. And I believe he kind of had this vision of the future. He said the future of the internet will look much more suburban, meaning that there will be these kind of like blocks that are very highly regulated, very peaceful, but peaceful because they keep a lot of other people and things and information out. Right. And then you'll have kind of these wild environs that are filled with the chaos and noise of bots and fake journalists and who knows what else we're going to end up coming up with in the next few years. Absolutely. I mean, to an extent, I suppose the Internet's always been like that, or at least it's been like that for some time.
Starting point is 00:41:33 The different, I mean, you can even say the dark web is already the kind of the other side of that suburban kind of dystopia, I suppose. But yeah, I think I can see it becoming more compartmentalized because you are seeing pushes for regulation in various aspects of the way the internet works. You know, even Trump's recent threats against Twitter about Twitter, you know, branding itself as a publishing company and therefore being responsible for the content on its platform would have a huge impact on the type of content produced on Twitter, right? So that would be a good example of how you landscape the information terrain in a specific way and what happens to people do they go elsewhere. I mean, we're seeing this big exodus again from
Starting point is 00:42:16 Twitter of right-wing voices who go to places like parlor. I went on parlor recently, and it's just like a lot of angry people shouting into a void. You know, so you do see the way the space has changed by regulation and policy. And, you know, it's a very dynamic process. Yeah, I mean, we're living, we're living through something right now. I always like to remember that after the invention of the printing press, there was 200 years of horrifying religious conflict. As we sorted out how this new technology was going to affect our lives, and we're living through that again right now. It's not necessarily a religious conflict, although there's certainly religious aspects
Starting point is 00:42:53 to certain parts of it, but it is us negotiating how information is going to work now. Yeah. I think, you know, I suppose to draw on that parallel more. I mean, with the printing press, again, you had people with vested interests in in preserving their monopoly or knowledge, you know, I suppose the church or the clergy. And to some extent you have that now,
Starting point is 00:43:16 there are people who perhaps wish to preserve a monopoly or knowledge. It's not the religious context. And at the moment, we are seeing kind of positive changes in that. You know, we see leaks. Edward Snowden did it. I mean,
Starting point is 00:43:28 one could argue that WikiLeaks, whether it's positive and negative, is an aspect of transparency. But at the same time, we're seeing attempts by often, I mean, in the current guys, right-wing groups to try and utilize the information space to, you know, kind of, I suppose,
Starting point is 00:43:46 grab onto power or maintain power, right? So there's all these kind of interest groups at stake trying to use the information space to their own advantage. And it's alarming. It's hard to know where it's going. But I think recently in the past six years, we've definitely seen this tip in the favor of populism. And that's because we were underprepared, I think. I say we, I mean, the royal we. I think the world was unprepared for how this would play up. So is there any, you see you mostly see this on the right at the moment, is there any kind of similar action or disinformation campaign coming from the left at all? Is it something that you see on the other side of the political spectrum? Yeah, I mean, it's harder to say. So for example,
Starting point is 00:44:30 if often if we look at disinformation, it's perpetuated by, or at least we think it's perpetuated by, or at least we think it's perpetuated by, actors. So if we use the internet research agency, the Russian agency as an example, we know that they were creating disinformation networks who were both left wing and right wing, right? So they were playing both sides. However, it was stacked more in the favor of the right. So they were creating left wing propaganda, but they were creating more right wing propaganda. So we know that they're doing both of these things. I mean, certainly in my experience, I tend to see it coming more from the right. It doesn't mean it doesn't exist on the left. I mean, it obviously exists on the left, but I just don't see it in the same scale.
Starting point is 00:45:12 And there could be reasons for that. You know, there could be because a lot of it is coming from, for example, in the Russian case, you know, the argument is that it wanted to see a Trump presidency and Trump is a Republican. So it makes sense to have right-wing propaganda. But then you have people discussing, you know, Steve Bannon's networks across Europe, him trying to contain China and trying to support the kind of election of right-wing leaders across Europe using these kind of methods. same with Brexit. So a lot of the high-profile cases or political events we've seen in the past seven years are tend to be events that have been supported by right-wing causes or right-wing parties. So that kind of makes sense. I mean, obviously there's nothing to stop the left using propaganda. And we know that historically that is very true, right? Well, let me toss this out at you.
Starting point is 00:46:01 I would say, and this is me just kind of thinking out loud here, working things out on the podcast, so to speak. but I would make a distinction between misinformation and disinformation. I would say that broadly, very broadly speaking, the left tends to be more disorganized and more prone to misinformation as opposed to disinformation. Yeah, I mean, the difference obviously,
Starting point is 00:46:23 or at least the concept, to accept the difference in dis and misinformation is the intent, right? Disinformation is, the intent is deliberate. People spread that information deliberately, in the knowledge that it's not true. Misinformation is accidentally communicating information that is not true. I think that's the crucial difference.
Starting point is 00:46:43 And in that definition, I could definitely see that as being true. I think there is an intent difference between the right and the left, loosely speaking. There's also the fact that the right wing believes that the New York Times counts as left-wing propaganda. And The Washington Post. No, I mean, I think there's a seriousness to that, you know. You're right. It is all about kind of where you're sitting and what your attitude is, right? Yeah.
Starting point is 00:47:11 But we have to really push back again on this relativism, you know, this notion that you can't have, you know, outstanding journalism. You can't have journalists who are committed to, you know, integrity and this kind of thing. And I would, I mean, I have no qualms in saying that I believe the New York. Times is a very robust publication and that to just dismiss it as left-wing propaganda is, you know, it's an affront to the truth, I would say. Lefty. Well, I mean, you know, like, it's, I don't know. I just sort of think, you know, like, again, with this investigation in mind, you know,
Starting point is 00:47:52 you have publications who published, I mean, it really shows you where people's values lie. If you're an editor who published a fake journalist and stands by that, then really what kind of publication, you can't put that editorial comment on the same level as a newspaper who would take down that article and apologize, right? It's showing you that those two entities have a very different regard for truth, provenance, and the values of journalism. Yeah, that's a really good reflection on the reactions to this story really tell you a lot about the values of the outlets, right? Yeah, definitely. What do you think you learn?
Starting point is 00:48:32 during this investigation, what did it teach you about the nature of disinformation in the modern age? Well, I mean, I've always been, I've been doing this for a while now, so I'm very cynical. I think, I think if I was to say what I'm looking for now that I wasn't before, is, is again, I mean, looking at what's coming, I would say artificial intelligence is going to increase the sophistication of, the kind of propaganda of saying, I don't think we're just going to see artificially generated human images or videos. We'll even see artificially generated content, right? Actual written content. So, you know, I think it probably is not far up before we'll get very plausible articles generated by computers that are propaganda. And if you can generate propaganda at scale
Starting point is 00:49:25 and distribute it at scale, then dominating the information space with shit to use the the banning quote from before is going to actually be very easy. So I think what these little investigations show us, they give us an inkling into what's coming next. So if we know that, I mean, we know that PR companies have always positioned op-eds and kind of various things. But this operation shows a slightly more industrial scale of that.
Starting point is 00:49:50 You create, you probably have a few people with multiple personalities or multiple fake accounts generating dozens of articles. The next step then is to have fewer and a fewer people, ideally automation, generating even more and more articles and distributing them through more and more outlets, right? There's always a scale, an element of scale here. So I think that's what we need to be on the look at for. It sounds very dystopian, but I think what this, looking into disinformation has taught me is, you know, you can't afford to be surprised because, you know, it's amazing what innovation technology is actually capable of producing.
Starting point is 00:50:26 Don't give in to astonishment. Yes, exactly. Or give in, but only momentarily. Right. And have a coffee and then regroup. So do you think there are more networks like this that just haven't been uncovered? Yeah, absolutely. I mean, the interesting thing about this network is when I first came, when I first saw it,
Starting point is 00:50:48 the Lin-Newan articles in Asia Times was not actually ostensibly connected to the Arab Eye, the Persian Now articles, right? So there wasn't a direct link. The only link that existed was the fact that this, suspicious character had shared this link with a friend. So what this would suggest to me is that there are units like cells. Let's think of the cell structure, right? They're related, but operating independently in order to preserve their overall
Starting point is 00:51:15 cohesiveness. So I think absolutely exist. And I think what finding these networks takes is a combination of people, obviously, who have good OScent skills, but people who also know a specific area. Like I said, I became suspicious primarily because I knew I'm familiar with the kind of propaganda tropes or PR tropes to look for in this part of the world that I specialize in. So you really need a combination of good journalists, area studies experts and people who are naturally suspicious like myself to look into these things. What should, what can the normal person do day to day to protect themselves from this? Honestly, I mean, it's a big question.
Starting point is 00:52:02 I think the normal person, if we were to take that, literally a normal person is neither right nor left, and it could read anything. I mean, honestly, I would, yeah. Oh, boy. We're in a lot of trouble, aren't we? No, I can give advice, but it's unrealistic, you know? It's like, you know, when you sort of say something,
Starting point is 00:52:29 you give advice, but it's not something you can. can imagine everyone just doing, you know, I could easily, it's easy for me to say, you know, you read an article by someone, Google that person, see what else they've written. But, you know, that's the job with the editor, right? The editor of it's that person. And great if the individual does that as well, but I just don't expect that to happen. I'd love that to happen. And I obviously recommend that people do that, look into the kind of, look into who's telling the story, look into more of the publication itself of what their agenda is, but I don't know if people will do this.
Starting point is 00:53:05 My plea more is to the editors out there, right? Because at the end of the day, they're still gatekeepers for knowledge. And media literacy is something that needs to be instilled from an earlier age. We've seen in Finland, for example, the Finns have been adept, and the Estonians have been adept at trying to encourage people from a young age. to adopt and look at sources and evaluate sources critically when it comes to information. And that's really, the root of this problem is one of education and digital literacy. Well, Finland and Estonia are also on the front lines of something that we're not.
Starting point is 00:53:44 They're dealing with a whole different paradigm there. But that's come to the U.S. now. I don't think the U.S. or Europe can, you know, the whole point of the Internet is dissolved, or it's dissolved in many ways traditional borders. Estonia and Finland geographically have always been on the border with Russia
Starting point is 00:54:04 and have, and because of that, as you said, have been very kind of aware of this disinformation and subterfuge and infiltration. But people, we know enough now to know that
Starting point is 00:54:17 we do not have the luxury of geographical distance. It doesn't exist. So those kind of media literacy, information literacy, endeavors need to actually be adopted by U.S. policymakers, European policymakers, African policymakers, everyone. It's fundamental.
Starting point is 00:54:37 All right. I think that's a good place to end on. And just Jason, do you have any follow-ups? No. I think that we've once again managed to scare the public and leave them, you know, a little less happy than they were before. So it's not a, it's not a war college episode unless it ends with a little misery. I found my people then. Yeah, no, this is usually how the episodes in is we put everything in context, and then everything gets really sad in the last couple minutes. And there's no hope.
Starting point is 00:55:13 You need some sad music as well, just to really push it home, get the melodrama going. Mark Owen Jones, thank you so much for coming on the show. Where can people find your work and start learning more about, And get educated, basically. My Twitter feed's a good place to go. So Marco and Jones is my Twitter handle. And there's links there to my blog and various other things. I've got a book coming out early next year and disinformation in the Middle East.
Starting point is 00:55:39 So keep an eye up for that, I guess. We would love to get an early copy and then have you back on the show. I will send you one. I will send you one. Absolutely. I'd love to. Yeah. I have to write it first, obviously.
Starting point is 00:55:54 Excellent. But I've got a contract. Yeah, exactly, right? You don't have to write it as well. No, exactly. I'll get my robot to do it. All right. Thank you so much.
Starting point is 00:56:05 Thank you, guys. Enjoy the rest of your day. That's it for this week, War College listeners. War College is me, Matthew Galt, and Jason Fields, and Kevin Nodell, who's created by myself and Jason Fields. We have a substack now. Warcollege.substack.com. This is the Information Warfare News
Starting point is 00:56:25 Every week, Jason, Kevin and I are putting together a newsletter that rounds up all of the defense news that you need to be watching with a little bit of light commentary and context from ourselves. There is also going to be bonus episodes of the show coming to Substack, some rare interviews, things that are left in the cutting room floor from our day jobs, stuff that we think that is important that you need to know. It is hard out there, as you know from the episode on the internet to find out just out what the hell is going to be. going on, who's telling the truth, who were the journalists you can trust. You've been listening to us for years now. You can trust us. Go to warcollege.substack.com and sign up for the newsletter. It's free right now, very soon. We're going to start charging a dollar a month for the newsletter, and then $9 a month will get you the bonus episodes that we're going to be putting out every month from War College. Failing that, you can find us online at
Starting point is 00:57:21 warcollegepodcast.com. We're on Facebook at facebook.com forward slash war college podcast on Twitter at war underscore college. We're on iTunes and everywhere else fine pods are casted. We will see you next week for another conversation about a world in conflict.
