Consider This from NPR - The cream of the slop: this year's AI highlights
Episode Date: December 19, 2025

2025 has proved that artificial intelligence is rapidly reshaping online reality and that the "slop" is here to stay. NPR's Geoff Brumfiel and Shannon Bond have spent much of the year rolling around in that slop and join host Scott Detrow to break down some of the highlights and how to sort the real from the fake.

For sponsor-free episodes of Consider This, sign up for Consider This+ via Apple Podcasts or at plus.npr.org. Email us at considerthis@npr.org.

This episode was produced by Elena Burnett and Daniel Ofman. It was edited by Brett Neely, John Ketchum and Courtney Dorning. Our executive producer is Sami Yenigun.

Learn more about sponsor message choices: podcastchoices.com/adchoices

NPR Privacy Policy
Transcript
We are going to kick off today's episode with a little bit of a cliche, a dictionary definition.
But stick with us. It's relevant. And Merriam-Webster forced our hand by making slop its word of the year.
Their definition? Digital content of low quality that is usually produced in quantity by means of artificial intelligence.
You know it. You've seen it. It's weird. It's clunky. And it is everywhere.
Blue cooler car. Wait, son. Let me have some.
Dad check this pancake blimp.
Nuts.
There he is officer.
He stole my pancakes.
The YouTube channel, Fantastic YT, has been cranking out short videos featuring cartoon cats.
It's racked up hundreds of millions of views.
An AI-generated summer reading list was published by major newspapers and featured 10 made-up titles from real best-selling authors, like The Last Algorithm by Andy Weir or The Rainmakers by Percival Everett.
Playlists were also slopped.
This summer, sing the lie they made you learn.
Let it burn, let it burn.
This summer, music from a band called The Velvet Sundown
flooded Spotify users' Discover Weekly feeds.
And within a few weeks, the band's music had been streamed millions of times
before anybody noticed the group wasn't actually real.
In September, Spotify announced it had removed 75 million spammy tracks
from its platform.
But other companies began to wallow in the slop.
In the same week, Meta released a feed for users to create and share AI-generated videos.
And OpenAI launched a new version of an app to generate video and audio.
Even one of the world's biggest record labels, Warner Music Group,
signed a licensing deal last month with two AI companies
that it was previously suing for copyright infringement,
allowing paid users of their platforms to create songs
with the voices and compositions of artists who agreed to participate.
And now AI Slop has managed to find its way into the pages of the country's oldest dictionary.
Consider this. The year has proved that AI is rapidly reshaping online reality and the slop is here to stay.
Coming up, we will take a look at the cream of the slop from 2025 and how to sort the real from the fake.
From NPR, I'm Scott Detrow.
It's Consider This from NPR. 2025 has been a big year for artificial intelligence,
especially for short AI-generated videos that people keep posting online.
The kids, and now Merriam-Webster, call it AI Slop.
And NPR's Geoff Brumfiel and Shannon Bond have spent much of the year rolling around in the slop.
They are here with the highlights, so to speak, and to talk about what it means.
How are you both?
Hey, Scott.
Oink-oink.
So we're going to, we're going to talk about three fake videos today, all of which have been widely shared online. Shannon, let's start with you. What's our first slop entry?
Well, back in October, this was a Saturday. It was one of the days of those big No Kings protests, right, against the Trump administration. President Trump posted this AI-generated video of him flying a fighter jet. The jet says King Trump on the side. He's wearing a crown. He flies over a city full of protesters and dumps what looks like
poop all over them. You've probably seen it. Sure have.
And the video is set to the song Danger Zone by Kenny Loggins. I should say, as an aside here,
Loggins is not happy about his music being used in this video. He asked for it to be taken down back
in October. It has not since been taken down. Now, this video is, of course, obviously a fake, right?
Like, I don't think anyone's watching it thinking Trump is really flying a fighter jet. And we have
seen the president share AI videos and imagery before. He did actually a lot during the
2024 campaign. He and his supporters seem to really love these kind of memes. But since he's
taken office this year, this kind of strategy, it's really something we've seen not just the
president, but his whole administration embrace. The White House and the Department of Homeland Security,
their social media accounts, post these sort of meme videos and images often made with AI. And I think
what this tells us, given we're heading into midterm elections next year, we should
expect to see even more AI-generated political content all over our feeds.
I mean, Shannon, this is so prevalent. You're seeing Trump and his allies turn to this more and
more and more. What has the White House said about their use of, you know, I feel like it's fair to say
there are almost, like, propaganda videos that they're creating with AI. Yeah. And I should be clear,
it's not always clear that the White House itself or White House staff are making these.
In some cases, like this fighter jet video, we're seeing, you know, the president or administration
accounts resharing content that's been made by other people online. You know, the White House doesn't
tend to comment on these specific videos, but what they have said in the past, in general,
about this social media approach, you know, they've said things like, you know, the memes
will continue. It's clearly a form of messaging that, I think, they think resonates with their
audience. And, you know, look, this is a very online administration. This is a very online
president. And this is, you know, they're very much engaging in the language of what is online
at the moment, and it's increasingly becoming AI.
So that's the first trend we're talking about.
Let's move on to video number two.
It came from a little company.
Nobody's really heard about it.
We haven't talked about it that much.
It's called OpenAI.
Google it.
So it actually showed the company's CEO committing crimes.
Geoff, what's going on here?
Yeah, this second video came from an app.
OpenAI rolled out earlier this fall called Sora,
and that app has made AI slop super easy to generate.
One of the features of Sora is you can put other people's faces and voices into your video with their permission.
One of the first people to grant permission was Sam Altman, OpenAI CEO.
He let people make videos with his likeness,
and an OpenAI employee created this video of Altman in what appears to be a Target.
It's surveillance video, and Altman seems to be shoplifting computer chips for his AI company.
Please, I really need this for Sora inference. This video is too good.
Now, this is an inside joke about AI's endless need for computing, but it's really notable for a couple of reasons.
First, it shows that AI videos can now put real people into completely fake situations.
You can make the CEO of a company commit fake crimes and make it look pretty real.
But that's not the only fake stuff that Sora is capable of producing.
So, you know, we've also seen news stories about Sora producing fake videos of people stuffing ballot boxes, fake local news
interviews. And this is creating a lot of concerns, especially, you know, as Shannon just
said, we're going into an election year next year, and Sora has basically lowered the bar for
slop to zero. So that's two different extremes here. Geoff, you're talking about ways that
these fake videos can get into the real news cycle very quickly. Shannon, you're talking about
just totally farcical propaganda, for lack of a better word. Let's move on to our final
video. This is one maybe our listeners have seen. It's racked up like 200 million views on
TikTok. Tell us about the bunnies, Shannon.
Yeah, so this video, it looks also pretty realistic.
It looks like Ring camera footage of some very cute bunnies bouncing around on a backyard
trampoline at night. And you can imagine, right? People post these kinds of videos,
right, from their actual Ring cameras. And so when this was posted on TikTok this summer,
a lot of people were fooled into thinking it was real. There was no watermark on the video
itself disclosing that it was made with AI. TikTok has since put its own AI label
on the post, but, you know, judging from the comments at the time, you know, lots and lots of people just totally thought it was real.
And I think what's interesting about this one is, you know, it's quite different than the other examples we've talked about.
But this is the kind of, you know, mindless, cute engagement bait. It's animals, right? That's so prevalent on the internet. It's always been prevalent on the internet, right?
And so in some ways, it's not surprising that now we're seeing AI versions of this.
But what strikes me is this is the kind of stuff I am seeing all over my social media feeds at this point. And whether or not they are
like clearly labeled as AI, it really does start to blur the boundaries. And it makes people feel,
I think, like, this AI slop is inescapable if you are going to be online. And if it is inescapable,
I'm just wondering, Geoff, is there anything we could do about that? Well, I mean, the first thing is,
you know, until there's some sort of regulation and labeling, you're probably just going to have
to accept, Scott, that you're going to be duped sooner or later. I mean, I think all of us at this
point have seen videos that are AI. But that being said, there are some things to watch
out for. AI videos tend to be very short because it takes a lot of computing to make them,
and they often contain scenarios that if you take a second, you'll realize are kind of unrealistic.
Like, all those bunnies aren't going to bounce on a trampoline all at once. A reverse image
search can help, too, or searching a news story on the event you're seeing. But interestingly,
Scott, you know, one of the things researchers I spoke to
about this say is they actually don't want people to become cynical and just assume everything
is fake. Because when that happens, it makes it really hard to hold bad actors to account.
You know, people can say, oh, that's just fake. I didn't really do that thing. And so we've got to
try to cling to reality, even for the cute animal videos. You know, that raccoon that passed out
next to the toilet, I thought that was AI. But I did my homework and I was relieved. And it brought
a little joy into my life to see that a raccoon really can still get drunk in a liquor store in
2025. And that's a real thing. We can always count on raccoons to give us realistic, entertaining
internet content, I think. Let's hope. Let's hope the raccoons aren't put out of work by all this
AI image generation. That was NPR's Shannon Bond and Geoff Brumfiel. Deep, deep, deep into the
slop. Thank you and my condolences to both of you. Thanks, Scott. Thank you.
This episode was produced by Elena Burnett and Daniel Ofman and edited by John Ketchum, Brett Neely, and Courtney Dorning.
Our executive producer is Sami Yenigun.
Thank you to our Consider This Plus listeners who support the work of NPR journalists and help keep public radio strong.
Supporters also hear every episode without messages from sponsors and unlock bonus episodes of Consider This.
You can learn more at plus.npr.org.
It's Consider This from NPR. I'm Scott Detrow.
Want to hear this podcast without sponsor breaks? Amazon Prime members can listen to Consider This sponsor-free through Amazon Music. Or you can also support NPR's vital journalism and get Consider This+ at plus.npr.org. That's plus.npr.org.
