Media Storm - Fake news! The impact of AI on journalism
Episode Date: November 6, 2025
Like this episode? Support Media Storm on Patreon! Margaux Blanchard is a widely published journalist. She has written everything from essays about motherhood to investigations about disused mines. But what her editors didn’t realise? Margaux Blanchard doesn’t exist. At least, not as a human being. A company called Inception Point AI is using artificial intelligence to publish 3,000 podcast episodes a week at the cost of $1 a piece. Reviewers call it ‘AI sludge’ – is it coming for our jobs (and brains)? Big Tech firms are using journalists’ work without permission to train AI to do their jobs. The AI summaries often get the facts wrong while putting human news publishers out of business. Where does this leave us in an era of disinformation warfare? Can the mainstream media blame AI when it’s already churning out sensationalist clickbait and poorly fact-checked news? And could AI ever be used to improve chronic problems in our news, instead of exploiting them? Press Gazette editor Charlotte Tobitt and tech journalist Rob Waugh join Media Storm to break down the best and worst impacts of AI on the news. The episode is hosted and produced by Mathilda Mallinson (@mathildamall) and Helena Wadia (@helenawadia). The music is by @soundofsamfire. Follow us on Instagram, Bluesky, and TikTok.
Transcript
Mathilda, how much work does it take us to make one episode of Media Storm?
Our entire waking life, we don't sleep.
Literally.
Like, quite, I don't know, like quite a lot of time, probably too much time to also have two other jobs.
Exactly.
So imagine how I felt when I read about a new podcast network,
which is making 3,000 new episodes a week with just four members of staff.
No, this is really upsetting. Also, how? Is that real?
Well, that's a great question. I guess that depends on how you define real. A company called Inception Point AI is using artificial intelligence to flood podcast directories with shows. Shows are made for an apparent cost of just $1 an episode. There's no disclosure on the episodes that the shows are AI generated, though the shows' reviewers have worked that out, calling it 'AI sludge'. Thank God. Australian journalist Linton Besser researched the company
and here's how he described the shows.
In researching this and listening to so much AI audio sludge,
I can't begin to describe how unsettling it was.
A soundscape devoid of the most basic human identity and intelligence.
And if that's the future, an audio washing machine of nonsense ideas
and synthesized realities, we really are in more trouble than perhaps we know.
Yikes.
The shows this company churns out are very, very wide-ranging. There's one about knitting, for example. And there's also one on the shooting of Charlie Kirk, which was released within an hour of his shooting. You look so upset.
I'm a bit upset. Obviously, there's like the personal squirm about, am I going to be rendered obsolete as a podcaster and a journalist? And then the thought of how many people are going to be rendered obsolete; is this going to be mass economic displacement, thanks to AI? But, like, on a journalistic level, I have serious questions in my mind right now about disclosure.
You said they don't disclose that it's AI generated.
Quality, intellectual property.
I think this tells us that we're not dealing with these questions nearly as quickly as we need to be.
Also, do people want to listen to AI generated podcast episodes?
Well, that's the thing.
I would say no, because for me, what makes Media Storm so special is the humanness.
What is that?
That was a drill.
Machines!
Yeah!
We're recording in my flat.
But for me, what makes Media Storm so special is the humanness of it all,
the thought and the care that we put into our questions,
the ability that we have as humans and not robots,
to connect with our interviewees, to empathise with them.
Because a lot of them have been through really serious and dark things
and to ask them thoughtful and emotion-led questions.
It's also, like, very telling about the state of our information economy today.
Quality is so secondary to quantity. There are like a lot of hypothetical fears people have about
AI ending humanity. But like one risk of AI that is in no way hypothetical is the damage that it
does to truth in society. We have seen deepfakes and bots successfully weaponized in geopolitical
and radicalist warfare.
So this is my biggest immediate fear
when I hear this story.
The most maligned version of this
is, of course, disinformation.
It's like post-truth chaos.
But even at its mildest,
what we're looking at here
is tidal waves of slop,
of just shit information.
The replacement of investigative,
informative, indispensable content
with mind-numbing clickbait content.
We've seen that, yeah,
already, but this is happening now at a rate that makes the last four years of media storm
look like child's play. And I will say this here right now up front before we start this
episode, which is about AI and journalism. I don't like AI. Okay, it's pissing me the fuck
off. I'm not trying to be like a Luddite and nor am I trying to ignore advances in technology.
But I am getting increasingly pissed off at people using AI like chat GPT for the smallest, most
obvious things. Like I went to a restaurant with some friends and one of them asked chat GPT,
what's the best thing on the menu? It's like that's a question you'd usually ask the waiter,
right? Or it's just a pointless use of AI. Another friend who's fluent in Spanish said he was using
it to write emails to his clients in Spanish speaking countries. Like he literally went to four years
of university to learn Spanish. It's these kind of things that are just annoying me. It's changing
human behaviour, right? Like even putting aside the environmental impact of AI in killing the planet,
I also feel like it's slowly killing our human connections, our critical thinking skills,
the way we weave together words or sounds or visuals in a way that only lived experience can
inspire.
I don't know if I need to, like, explain what you just said about environmental impact, only because you're the person who told me that. But every ChatGPT search uses so much energy and water. Basically, a ChatGPT search has quite a big climate impact, and we all do it like we don't have brains,
but I'm definitely not as negative
as you are in my like instinctive
view I think that with
good distribution
and education about how to use AI
AI can hugely improve
people's capacity to work
and just to live like on an individual level
But on a social level, on a collective level, I am concerned, AI will have massive socioeconomic and political consequences.
I'm talking, like, post-truth anarchy, the mass economic displacement I touched on, wealth inequality, growing mental health and loneliness problems like you've touched on, copyright and privacy infringements.
I thought you weren't as negative in your view.
Shit.
Maybe I, I don't know, maybe, yeah. Because when I think, okay, there's the positives and there's the negatives, how do we make sure that the positives outweigh the negatives?
The thing that really scares me is the rate of change.
If change happens at a pace humans can adapt to,
then ultimately it does tend to be positive.
I'm a bit Whiggish, I think, like, yeah, progress is good.
But if the rate of change outstrips the rate of adaptation,
then the fallout can be catastrophic.
And we've seen that, like with the Luddites.
That's a historical movement that was basically completely economically displaced by the Industrial Revolution, the invention of certain machinery. Like the miners, those communities are still suffering because adaptation was not prioritised at the same rate as change. And with AI, the rate of change
is prolific. And I don't think we're keeping up. Okay, maybe I am as negative in my view of AI. Yeah,
I think this is not being handled well and I think we're heading for disaster. And there have been
multiple reports which confirm my worries about human connectedness and AI. A recent study from the
Massachusetts Institute of Technology has shown that over-reliance on AI may be slowly shrinking our
brainpower, killing creativity and damaging our memory. Researchers from Microsoft and Carnegie
Mellon University warn that the more you use AI, the more your cognitive abilities deteriorate.
And yet journalism relies on critical thinking, discerning fact from fiction.
and gaining the trust of your readers or your listeners.
Yeah, my AI lies to me when I have used it in the past for research purposes.
It manipulates facts.
It manipulates dates to give me what I'm looking for.
And it equips my biases with basically fake anecdotes and data to back them up.
So at first, I'll give AI a task and then I'll read it and I'll be like, wow, cool.
I knew I was right with my analysis.
And then I check the sources and they don't line up and the dates are wrong.
And I realized that actually, like, reality is more complicated than my preconception and
I need to critically research around my preconceptions.
AI didn't do that.
And then I spent more time fact-checking the work AI did than it would have taken me to
just do the research myself.
But in reality, most people won't do the fact-checking.
They'll take the free and quick research.
Yeah.
And it's interesting what you said about biases.
AI is trained on the information that is already on the internet, right?
Yet a lot of that information contains unconscious and conscious biases.
A recent study from a German university revealed that AI tools like ChatGPT
are recommending significantly lower salaries to women and people from minority backgrounds,
even when the qualifications are identical to those of white male counterparts.
No, I'm scandalised.
This isn't a tech glitch, though.
This is a reflection of the data we fed into these systems and a reminder that when we train machines based on a biased world, we get biased machines.
There is no doubt that AI poses significant problems to journalism. Will it continue to erode trust at such a critical time?
How can newsrooms keep up when AI summaries are crushing traffic to their sites?
Can we preserve creativity and humanity as storytelling becomes algorithmised?
And if AI isn't going anywhere, how can journalists use it for good?
AI is learning to escape human control.
And that's why so many people are worried about AI.
It's in the headlines pretty much every day.
These companies are much faster than our institutions.
There's so much potential for it to help us in all kinds of fields, including journalism.
Their plan is to use your data to replace you.
Welcome to Media Storm, the news podcast that starts with the people who are normally asked last.
I'm Helena Wadia.
And I'm Mathilda Mallinson.
This week's Media Storm: fake stories,
fake reporters, fake news, the impact of AI on journalism. Welcome to the Media Storm Studio. We are
very pleased to have two very special guests joining us today. Our first guest is the UK editor
of Press Gazette, the magazine dedicated to journalism and the press. She has done extensive reporting
on the media industry, including holding papers accountable for false or misleading headlines,
and has led recent investigations into the use of AI in news reporting.
Welcome to Media Storm, Charlotte Tobitt.
Thanks, guys. Great to be here.
Our second guest is one of Britain's leading technology and science journalists
and has written about business and emerging tech for dozens of papers, magazines and websites
over the past 25 years.
He is the author of NASA's B's, 50 Experiments that Revolutionized Robotics and AI.
Welcome to the podcast, Rob Waugh.
Hello, pleasure to be here.
So to kick off this discussion, let's start with accuracy.
In February this year, BBC research found that 9 out of 10 AI chatbot responses about news queries contained at least some issues.
In the research, the BBC asked the chatbots, such as Google's Gemini and OpenAI's ChatGPT, 100 questions about the news and asked the platforms to use BBC News sources
where possible. The answers were reviewed by BBC journalists who were experts in the relevant
topics, rating them on criteria like accuracy, source attribution and context. Now, Charlotte,
you reported on this research for Press Gazette. What were the biggest concerns in terms
of AI chatbot news responses? So I think we're basically most concerned about how
it affects trust in the news brands. So it's the fact that it can take completely accurate,
kind of well-sourced, well-considered journalism, mangle it. And then a lot of consumers
wouldn't know the difference and would take it from ChatGPT, when ChatGPT maybe says that's the BBC. And when people kind of start to realize that, I think it would
just degrade trust in the overall information ecosystem. The other big concern that ties into all
of this is the fact that kind of AI companies are essentially threatening the ability
for news businesses to make money out of their own content by taking it without compensation
and then some of the news companies might close or have to fire journalists and that would
degrade the quality of news on the internet and kind of the information that those same AI
chatbots can use and train on meaning it so it's kind of like chasing its own tail type of
situation where the information just gets worse overall. So they're kind of the two main prongs of
why this is all so important, to be honest. So trust is a big issue, but as Charlotte touched on,
so is click through. More and more people are relying on Google summaries and not clicking
through to the original news outlet. Rob, can you just explain this issue with click through
and why it is such a problem for journalism?
The problem with the AI summaries for publishers
is that when people do a search for a news item,
they're looking for information.
And an AI summary delivers that information.
So there's really little incentive for somebody
to then go on and click through to the publisher to get more.
I mean, you know, people are time poor.
They just want to know what's going on.
And if they get a paragraph of AI generated stuff,
which may or may not be right,
research has shown that they're much, much less likely to click through to the publishers.
Google is, of course, adamant that publishers will still be fine and everything's great.
But, I mean, research has shown that click-through rates can drop by as much as 50%,
which is obviously absolutely devastating for businesses that aren't exactly high profit margin anyway.
Okay, so stopping click-through is no small issue.
Can news organizations do anything to stop this?
Some have actually sued Google.
Will that work?
So unfortunately on the legal cases, we still don't know.
There's still no precedent.
The Hollywood Reporter and Variety just sued Google over the impact of AI Overviews.
That's the one.
That's kind of the real biggie.
Of course, many of the big publishers are signing licensing deals
for how their content appears in ChatGPT, Perplexity.
But that's only available to a small number of the bigger players, really.
And the deal signing has kind of slowed down.
So unfortunately for most publishers, that isn't very helpful.
At present, you've got a situation where, you know,
Google and publishers are driving towards each other at 100 miles an hour
and somebody needs to blink and negotiate a little bit
because I think we're, you know, heading towards a situation
that particularly for smaller publishers might be untenable.
And there's some areas that are particularly problematic.
Like I've spoken to people who are in kind of review journalism
and Google AI summaries tend to absolutely garble that and come up with completely inaccurate reviews of products that are often no longer on sale.
And meanwhile, the people who provide the accurate information are actually losing out.
And so I think that there needs to hopefully be some form of accommodation or change.
I guess often what we see with new technologies is the regulation doesn't progress as fast as the technology progresses.
So the legal protections are catching up.
What are the strategies that news outlets are using to get around this click-through problem?
People are just trying to do what they can, like basically trying to keep people on their websites for longer, even if it's fewer people because fewer people have clicked through.
If you've kind of got the right people, that's great.
Goal.com, for example, it's kind of a massive football site.
Traffic is down, and that can be largely attributed to the Google AI Overviews in particular.
But they've employed various techniques to get people to stay on the site for longer, so it's actually not hit revenues as badly as you would think.
And so it's things like new widgets and actually using AI in a useful way for like all the sports data and things like that and kind of do all the easy stuff with the AI.
And their CEO says they haven't fired any journalists, but what they have done is been able to get them to do the better stuff because the AI is doing the easy stuff.
And it's all lots of kind of equations like that at the moment.
So we're evolving.
Yeah.
You know, I mean, the media industry is always evolving.
Like I've been at Press Gazette more than seven years, and the reason it's not boring is that every year there's kind of different trends; people are having to do different things to compete or to survive an advertising downturn, whatever it is. And this is kind of the latest wave, but it is very much here
because of these AI challenges.
Journalists are very used to being bombarded with press releases.
We can speak to that.
In June, Rob, you revealed a campaign by a PR agency called Signal the News, which featured
lottery winners who accidentally binned their lottery ticket, a catchy hook, a story that
would get millions of people reading. But there was just one problem. After some searching,
it seems like these lottery winners never even existed. Rob, can you tell us a bit more about
this story? Yeah, it's a, I mean, this is an absolutely fascinating one because it just, it speaks to
the kind of content that goes viral online and that sadly a lot of editors run without question
because they know it works. What you had in this situation was an unscrupulous PR agency that
seems basically not to really exist called Signal the News that sent out images of the lottery
winners. Two of them seem to be the same bloke. It's a bit of a nervy one when you're thinking,
okay, how do I say these people don't exist in case, you know, as soon as we published, we get a phone
call going, hi, I'm Mark. So it's obviously a bit of a nervy one. But one of the big problems
with this story is that most of them could have claimed their money anyway. So public information
alert: in most cases, if you bin your lottery ticket or your kid throws it in the bin, you can actually
get your money. You just contact the National Lottery and they'll give you your money. But the
weird thing in this case was there was an email for one of these guys. We contacted him and said,
you can claim your money and it's all okay. And we got no reply. There's not many people in the
world, they're going to ignore an email saying you've won the lottery. So at that point, we were
pretty confident these guys have been made up. And again, as with a lot of these stories, it was just
to secure links for a company called Play Casino. And it seems in this case that the PR agency
was made up specifically to target lots of publications with this campaign. Could you tell us about
how, was it you got like threatened by the PR agency? Yeah, yeah, absolutely. They filed a quite
convincing legal letter
which obviously
at any publisher
you're pretty nervous
when you get one of those through
but I mean it was just nonsense
it's all just fake
Just how many publications printed this story? How many times did we see this in the mainstream press?
It was certainly dozens
you know it will have come through
as a release to journalists
and you know it's easy clicks
that kind of stuff
you know that slightly anxiety-inducing thing: oh, I lost my lottery ticket, you know, this could happen to you. It's the kind of clickbait headline that pulls people in. And unfortunately,
you know, in journalism 2025, people don't seem to have the time to be checking stuff.
I can see why, as depressing as it is, editors are going for this content. But why is someone
putting this content out there, right? It's not just signal the news PR agency, Rob. You found
that three linked PR agencies are bombarding British journalists with what seem to be AI-written
press releases, like you've described, featuring fake people.
Former police officer Pete Nelson and chef Daniel Harris,
who both allegedly have decades of experience in their respective fields,
but they're extremely hard to find online.
They probably don't exist.
My question is, why would someone do this?
Why send out press releases promoting fake people?
I would say that calling them PR agencies is to dignify what they're doing.
They're mostly what are known as black hat search engine optimisation, or SEO, guys. For SEO guys, getting a link on a site with a high reputation means your site, no matter how dodgy it is, no matter who it is, will rank higher in Google search. And obviously, getting into
that top 10 Google search results can mean the difference between you becoming a millionaire
and you going bust. And basically, these guys have identified this sort of weak spot in the
media. You know, the media has changed a lot in 10 years. They've got rid of a lot of the older
people who might have made judgments like,
oh, this looks dodgy, can you check?
For these guys who are just after money,
it's a soft target.
Let's take this a step further, though.
Fake AI stories are one thing,
but what if the person behind the stories is also fake?
Charlotte, tell us,
who the hell is Margaux Blanchard?
My new best friend.
So Margaux came to light after a journalist named Jacob got in touch with us about a pitch he'd received from a freelance journalist, quote-unquote, called Margaux Blanchard, for his publication, Dispatch.
So basically, she was pitching something about a supposed decommissioned mining town
in Colorado, saying it had been repurposed for a training ground for death investigations,
but it was all very secret, no one really knew about it.
Jacob was rightly suspicious that this had just been made up by AI.
Like, surely there'd still be some trace of this thing somewhere on the internet. He had kind of other flags in the writing, and then it worked out that an article by her for Wired had already been taken down recently as well. So I looked into it, found like several of the
other articles by her. And kind of the big red flag to me, that feels like a very easy thing to check, is that most of them contained named people, like quoted people usually, that I just couldn't find any trace of online. So I got in touch with all those publications, kind of sharing
others later after we published a story about it. The weird bit was like trying to, as you say,
work out exactly who this Margo is, who had got all these articles published. So I obviously
tried to get in touch with her. There was the email address that Jacob shared and got nothing
back before we published. And then after publication, a Twitter account appeared. She was like
trying to claim, no, I am real. And I've been on holiday off the grid and suddenly
everyone is saying I'm not real
Were you worried for a second? Were you like
oh my God we've told the world that this woman isn't real
and she is real? I mean yeah
definitely. It's in my nature to just
immediately be anxious
but as I say so we're like
DMing this person saying it would be
very easy for you to prove if you're real
essentially and they like wouldn't do it
it seems like just someone
trying to get some quick
money by doing some freelance journalism
which isn't a particularly lucrative
area. This is wild, like, the thing you're having to chase now is: do you exist? Prove that you exist. Because
actually what this bot has done, fake sources, right? A human could do that. The thing that's
actually wild to me is that Business Insider and these other outlets had published all these
stories with fabricated sources and not done the fact checking themselves. Like, yes, the fact that
this was an AI behind it means that that can be done at a more prolific rate and more damaging
rate. But the root problem is to do with the media sector there. That tells us a lot about
the media landscape we're operating in and its approach to content and its prioritisation
of quality versus quantity. Yeah, no, I completely agree with all of that. I should just say
for Business Insider, because I mentioned them, theirs was probably the hardest to flag because
it was like two kind of op-eds about being a mum or whatever, but it didn't quote people in the
same way. Whereas to me, with all the other publications, the real immediate, easy thing to
figure out was the fact that these quoted people didn't even seem to exist. Business Insider has actually since taken down a load of other articles. A similar thing has happened essentially with other people; it's not like this is just a one-off. But I completely agree: there's like processes of fact-checking and editing that just aren't happening in every case anymore.
That's so dangerous, though, because while whoever is behind Margaux Blanchard may be doing it for a quick bit of cash, there are people who would exploit that to publish fake stories, fake narratives, harmful narratives, with the fake legitimacy of it being a news article.
And apparently, that's not that difficult to do.
And let me ask you, Charlotte, at least six publications were found by Press Gazette
to have previously published articles by Margaux Blanchard.
And four of those, I believe, have now removed her work.
And a fifth has confirmed they're investigating.
but has anything come out of those investigations
and really the question is if publications were found
to have posted fake information,
would there be consequences for those publications?
No is the basic answer.
Yeah, beyond damaging trust in your brand,
you really can't do these things many times.
People do remember,
but there's no like standards investigations about any of this.
Another example: the Times was the first of several publications to publish supposed interviews with a royal cleaner called Anne Simmons.
So supposedly she'd worked at Buckingham Palace and now she was sharing her tips.
But yeah, we don't think Anne Simmons exists.
And then similarly, last week, Bill de Blasio, the former New York mayor supposedly did an interview
with the Times and then he was like, no, that wasn't me.
And in that case, it seems to be that the journalists had emailed an email address for Bill de Blasio.
It was a different Bill de Blasio, but they just gave their opinion on Zohran Mamdani instead, and then the Times just published it.
I mean, both of these things are examples of, like, getting something via email and not checking it further.
But I mean, this comes back to what we talk about on Media Storm all the time, which is that there are many ways that news outlets can print false or misleading headlines.
And then if they are found to be false or misleading,
they at some points have to publish a correction.
But often these corrections are hidden in the back pages.
Nobody sees them.
Nobody knows that this happened
and that the original article was false or misleading.
And therefore, the damage by that original article is done.
It's already out there in the world.
And it's interesting that these issues that journalism is facing with AI,
basically just comes back to that same issue
that we always speak about on MediaStorm.
Which is where is the accountability.
Exactly.
Let's talk about money.
Rob, you wrote an article which showed
that fake news stories have been viewed
tens of millions of times a week
on Google's Discover News Aggregation platform.
Google promoted the fake stories
despite the fact they came from publishers
who had emerged from nowhere overnight.
An example of one top-performing untrue story that was promoted thousands of times
was headlined, goodbye to retiring at 67, UK government officially announces new state pension age.
Is it relevant that the spammers were targeting pension age readers here?
Yeah, they're targeting pensioners who might be looking on their phones because just to explain
a little, the Google Discover feed is an automated feed which is personalized to each user on a phone.
and it delivers stories that you might be interested in.
Writing basically lies, which are designed to make pensioners feel anxious,
is a proven way to get millions of clicks on Google Discover.
And I spoke to a French journalist called Jean-Marc Manach,
who has researched this in France.
He says that this hacking of Google Discover is much more advanced there,
and a high percentage of the top-ranking stories in France are already fakes.
And they use tactics that make pensioners anxious as well.
like you're not going to be able to transfer money to your grandchildren and stuff like that.
And he says that in France, some of these spammers have become millionaires off the back
of running these fake stories.
And as well as lying to pensioners, they're also stealing traffic from honest publishers
who are doing their best to rank on Google Discover.
But Google isn't policing the feed enough.
So these spammers are getting through.
Google really needs to pull its socks up.
It needs to take the job of publishers seriously.
Sorry. Also, you know, the scammers are making millions, but Google is making money off that too. By promoting the more clickable, less accurate stories, Google's also getting rich. Right. So why would it pull its socks up?
Because, well, hopefully, by shining a light on this, you know, they will feel ashamed. You know, there's old people here who are being made to feel afraid. And like you say, Google is making money off every single click. You know, perhaps we can shame Google into actually taking action, although that's not always a guarantee with big tech.
We don't want to just talk about necessarily the negatives of AI.
We want to look towards the future because whether we like it or not,
AI is not going anywhere.
So how can news platforms and journalists use AI in a way that helps and not hinders?
The Financial Times has increased subscription conversion rates by almost 300%
by personalising the paywall messaging using AI.
Look, you reported on this.
What can you tell us about how news companies are using AI to positively impact subscription rates?
Yeah, so I think personalisation, both for this and other ways of connecting with readers,
is the best way that AI can be used, to be honest.
With the FT, that's like the AI knowing who's looking at the page and changing the messaging
around signing up for subscriptions or renewing.
Some news organisations, not the FT, but some have kind of even changed the price to match how much certain demographics are likely to be willing to pay.
Although, yeah, it might not be popular if people work out that that's how things are done
nowadays. And if it's not transparent, I feel like people might be upset if they think
they're constantly being charged more than other people. But yeah, that's definitely
kind of an increasing thing we're hearing about more and more, and is a bit more of a positive
use in terms of helping the business bottom line.
Do you guys use AI as journalists?
Do you use AI? I constantly use it, yeah. I never use it on live copy, but I use it a lot for research. What AI is very good at is that sort of top-line summary that gets you into the middle of a story. And so I, you know, use Perplexity and ChatGPT all the time. The thing I've never, ever done is use it to create live copy, or even dummy copy that I then turned into live copy. Because I think the problem is that once you start to lean on it for that, the tendency is you just lean all the way. And as soon as you start doing that, big mistakes start creeping in.
And Charlotte, do you use it?
I don't know. I'm still quite skeptical of it in many ways. And yeah, like Rob, I don't touch it for actual copy. It can be helpful with, like, reformatting an annoying data set. But to be honest, I barely use it.
I mean, I use it occasionally. If I have to file like a boring document to somebody like,
here's a plan for what we're going to do in the next eight weeks.
That's interesting. You're more willing to use it like for output going to
colleagues, then output going to the public as a journalist.
I'd never use it for output going to the public.
Right. I mean, many publishers and journalists have for years used AI and similar technologies
for behind-the-scenes tasks, which is what I hear both of you saying: transcribing interviews,
for example, or monitoring trending topics online. Do you think, though, that we can
use generative AI in a more public-facing role in journalism without it affecting truth
and trust?
There are some ways, I think. So pages with, like, weather reports or forecasts, or an AI-generated football match report. You know, you've got all the scores, or you've got the team sheets. Things like that are pretty simple for the AI to do. There'd still have to be a disclosure on it,
but I think people aren't necessarily against that. I mean, I can understand that if you're using it
for like a light rewrite of something to go to a different publication or something like that,
then potentially it's okay. I am nervous whenever I look at anything from AI. When I use it
myself, I always say, please produce this with links, because then I want to double-check
everything that it says. I also think that there's a danger
in using, for instance, A, B testing and AI in headline writing, that it sort of drives a tendency
towards really clickbaity and slightly dishonest headlines that perform well rather than
headlines that actually sort of convey the truth. But we're already doing that. I don't know,
like one of the main issues we take on at Media Storm is that the headline is often really
unrepresentative of the article, because it's been written by, I don't know, an SEO specialist
and not the journalist. So again, it's maybe just speeding up issues already existing
in our industry.
Yeah. I once had this story that was performing extraordinarily well.
I mean, just thousands of shares and whatever. And I was like, my God, why is it doing so well?
And the headline I'd filed was the six asteroids that might hit Earth in the next two centuries.
And somewhere in the SEO process, it had been changed to the six asteroids that will hit Earth.
And well, no wonder people are clicking on that.
Look, we've talked a lot in this episode about how AI is going to change journalism.
We haven't talked as much about the truly insane ways that AI is going to change society.
A lot is going to change and it's going to change really fast.
And for me, this is a sign that journalists will be needed more than ever.
We will be needed to navigate fragmented truth and rapid social change, and maybe mass economic
displacement and to report on all of the inevitable human collateral damage that an AI revolution
will bring. So the job that we do will look different, but it will prevail because you simply
cannot rely on AI to track the human fallout of AI. So my final question is a question for the
imagination. How do you both envision the job that we do will look
in 10, 20, 50 years, at the rate that AI is changing society?
In some ways, I mean, I think that the broader changes AI is going to wreak on society
are almost unimaginable within our business.
I think that we'll see maybe, I mean, I'm quite old.
So I've seen a sort of shift away from the kind of print world I was very used to
towards a world where everything is optimized for search.
So human journalists are often essentially writing for machines. And I think that hopefully what AI will mean is that honesty becomes more prized, that the human element becomes more prized. And that rather than every publisher producing copies of the same story, optimized for search and to be read by machines, the, you know, sort of slightly forgotten arts of actually picking up a phone and doing an interview with people, and actually getting information that way,
and having a unique piece of information, will become more valued.
I mean, at least that's my hopeful scenario.
What do you reckon, Charlotte?
Yeah, I do agree.
I was going to say, no matter what, AI won't be able to be the one going to a scene and speaking
to sources, or calling people up to understand things. I think the way that we put out
news will change, but it's hard to picture exactly how, because people are still comparing
this current time to the arrival of the internet.
And obviously that was, like, incomprehensible at the time. So if it really is as game-changing as that,
then, like, of course it's hard for us to picture it.
Thank you both so much for joining us on Media Storm. Just before we all go, could you please
tell us where people can follow you or read your work, or if you have anything to plug?
Just pressgazette.co.uk. Social media-wise, probably
LinkedIn's the best place to find me nowadays.
Likewise, you can find a lot of my AI reporting
and particularly on the issues around media and AI,
at PressGazette.co.uk.
And yeah, I mean, like a lot of journalists,
I've kind of given up on Twitter recently.
So the best place to find me is LinkedIn.
Thank you for listening.
If you want to support MediaStorm,
you can do so on Patreon
for less than a cup of coffee a month.
The link is in the show notes
and a special shout-out to everyone
in our Patreon community already.
We appreciate you so much.
And if you enjoyed this episode,
please send it to someone.
Word of mouth is still the best way
to grow a podcast,
so please do tell your friends.
You can follow us on social media
at MathildaMall,
at HelenaWadia,
and follow the show at MediaStorm Pod.
MediaStorm is an award-winning podcast
produced by Helena Wadia and Mathilda Mallinson.
The music is by Samfire.
Thank you.
