Science Vs - Misinformation: What Should Our Tech Overlords Do?
Episode Date: February 25, 2022
After Joe Rogan was accused of spreading Covid-19 vaccine misinformation on his podcast, Spotify landed in the hot seat. People (including us!) wanted to know what the platform was doing to stop it. In this episode, we look into how tech platforms are fighting misinformation — and find out what actually works. We speak to Professor David Rand, Professor Hany Farid, Laura Edelson and evelyn douek. Find our transcript here: https://bit.ly/3BOEsOo This episode was produced by Michelle Dang, Rose Rimler, and Wendy Zukerman with help from Meryl Horn, Ekedi Fausther-Keeys, and Rasha Aridi. We're edited by Blythe Terrell, with help from Caitlin Kenney. Fact checking by Nick DelRose. Thanks to the researchers we got in touch with for this episode, including Dr David Broniatowski, Dr. Alice Marwick, Dr. Anna Zaitsev, Dr. Homa Hosseinmardi, Dr. Kevin Munger, Manoel Ribeiro, Dr. Rachel Kuo, Jessica Ann Mitchell Aiwuyor, and Nick Nguyen. Very special thanks to Max Green, Casey Newton, Courtney Gilbert, Dr Karl, the Zukerman family and Joseph Lavelle Wilson. Learn more about your ad choices. Visit podcastchoices.com/adchoices
Transcript
Hi, I'm Wendy Zukerman and you're listening to Science Versus.
This is from Gimlet.
On today's ep, we're tackling misinformation and asking how can tech companies stop crap
spreading on their platforms?
For us, this started because of a big blow up involving Spotify.
They own this show and they also have an exclusive deal with Joe Rogan's podcast.
If you listened to our last episode, you'll know all about this.
You'll know that Joe Rogan got into hot water
over this interview he did with Robert Malone,
which spread COVID vaccine misinformation.
And people were up in arms about it,
like Neil Young pushed Spotify to take Rogan off the platform.
They can have Rogan or Young, not both.
And yet, our bosses said no.
Well, the CEO of Spotify says it will not take Joe Rogan's podcast off of its platform.
He does not believe silencing Rogan is the answer.
Instead of removing Rogan, here's what they did.
Spotify released its platform rules, which they say had
been around for a while. These rules are supposed to spell out what's allowed on Spotify and what's
not. They also put this label on content that's about COVID, with a link that sends people to a
page with a bunch of COVID content that they consider trustworthy. But the academics that
we spoke to said it's not enough. What Spotify did is the bare minimum.
I mean, really, truly the bare minimum.
It's not strong enough.
Are they enforcing it at all?
There really is this thing of like, we put a label on it.
What else do you want?
Like, here's our rules.
This can't be it.
So today on the show, what else should Spotify be doing here?
What can actually stop this stuff from spreading online?
Now, the good news is we're not totally in the dark because a lot of tech companies are reckoning with this right now.
In fact, the pandemic has been a kind of watershed moment for tech companies dealing with misinformation.
So we are going to look at what other companies have done to tamp down on this,
to see if it works. And if maybe it could work for Spotify too. When it comes to misinformation,
there's a lot of, we put a label on it. What else do you want? But then there's science.
Science versus misinformation is coming up.
No, sir. I'm keeping it simple. Starting small. That's trading on Kraken.
Pick from over 190 assets and start with the 10 bucks in your pocket.
Easy.
Go to kraken.com and see what crypto can be.
Not investment advice.
Crypto trading involves risk of loss.
See kraken.com slash legal slash ca dash pru dash disclaimer for info on Kraken's undertaking
to register in Canada.
What does the AI revolution mean?
For jobs.
For getting things done?
Who are the people creating this technology and what do they think?
I'm Rana el Kaliouby, an AI scientist, entrepreneur, investor, and now host of the new podcast, Pioneers of AI.
Think of it as your guide for all things AI, with the most human issues at the center.
Join me every Wednesday for Pioneers of AI.
And don't forget to subscribe wherever you tune in.
It's season three of The Joy of Why, and I still have a lot of questions.
Like, what is this thing we call time?
Why does altruism exist?
And where is Janna Levin?
I'm here, astrophysicist and co-host, ready for anything. That's right. I'm bringing in the A-team. So brace yourselves.
Get ready to learn. I'm Janna Levin. I'm Steve Strogatz. And this is Quanta Magazine's podcast,
The Joy of Why. New episodes drop every other Thursday, starting February 1st.
Welcome back.
Today, we're digging into what Spotify and other tech platforms should do to help fight misinformation.
Because people do get sucked in by this stuff.
And it can affect what they do in real life.
So, for example, studies have found that when people have recently seen misinformation about the COVID vaccines,
they're less likely to get the shot and more likely to discourage others from getting jabbed.
So this, it matters.
And people are starting to call out podcasts for being the wild, wild west of misinformation.
People say wild things on podcasts. Like I think they think, hey, we're just sitting here chatting with my mates. I can
just say anything. So people just like, you know, shoot the s**t. Evelyn Douek is a fellow at the
Knight First Amendment Institute at Columbia University, but she's originally from Australia.
G'day. So back to podcasts. But it's been this weird blind spot
where we haven't really been talking about this total vector for misinformation for a long time.
I never had Neil Young on my bingo card for the reason that we finally had this conversation,
but here we are. And now that we're here, what should we do about it? Well, we're going to start with the biggest, bluntest tool that Spotify has.
Kicking people off the platform.
And there's actually this debate about what might happen if Spotify did decide to boot Rogan off.
With some people saying, yeah, doy, if he's gone, you can't hear him.
But others are worried that this would actually backfire.
They're saying,
no, if you remove him, he'll just go elsewhere and become like a martyr,
more exciting and get an even bigger following.
Now, away from Rogan, Evelyn says that when you look at the data,
by forcing this content to a smaller space, you're reducing some people's access to it.
But we do see a lot of the people then just moving to other platforms and moving to other spaces.
And then in some ways, they are then sort of in bigger echo chambers.
So, for example, take what happened with Alex Jones.
Alex Jones was this guy that said the Sandy Hook shooting was a hoax.
Vaccines cause autism.
And that isn't even the most bonkers stuff he's put out there.
So I never expected Trump charging into a goblin's nest
to not get some goblin vomit and slopping blood on him.
So finally...
I just don't want to catch him in bed with a goblin.
Finally...
I don't want to see him kissing goblins,
having political succubus with goblins. So then Twitter...
I don't want to see him ingratiating goblins.
Finally, Twitter, Facebook, YouTube, and Spotify kicked him off.
And we actually have two studies on what happened next.
You see, Jones had to go elsewhere.
So, for example, he joined an alternative video platform.
And the researchers said that his total audience plummeted.
Like, he had over 2.4 million subscribers on YouTube.
But these days, his subscribers on that new platform are just over 150,000.
Now, when you look at Reddit, you see this kind of pattern too. So, for example, when some
grotty subreddits have gotten shut down, some people move the content to other websites. And
while those sites can get even more toxic, they tend to be way smaller. So, removing people from
the big platforms, it means that while they probably won't disappear, on average,
they'll get less eyeballs and ear balls on them. So should Spotify do the same thing to Rogan?
Well, it's tricky, right? According to reports, Spotify negotiated a deal worth at least $200
million to get exclusive licensing for Rogan's show. And unlike with Alex Jones,
where a lot of the big tech companies piffed him off, if Spotify broke its deal,
it's possible that Rogan could just go back to being available in more places. Here's Evelyn.
I mean, the whole part of the Spotify deal was he's powerful enough that he can bring
his audience with him wherever he goes. He has fans. Okay, so the impact of booting Rogan off Spotify is unclear. And what
muddies all of this even more is that making these kinds of decisions to remove someone,
it's actually really complicated. Like, just generally speaking, how do you decide who stays and who goes?
Or whether one episode should be taken down or an entire show?
It's tough, right?
If you're too trigger-happy with the eject button, people say you're heading in the direction of censorship and shutting down the healthy exchange of ideas.
If you're too loosey-goosey, you could be letting misinformation spread.
This is not a simple problem.
I think everyone kind of feels like if you put them in the decision-making chair,
they would obviously know which content should stay and which should go.
I mean, simple, right?
Just give me the button.
And then it turns out everyone has a different idea of what the line should be.
The major tech companies have rules about where their line is.
Cross that line, and according to them,
you risk getting kicked off the platform
or at least having the offending content pulled down.
So, for example, over at YouTube,
a video of Rogan's Malone interview had been uploaded onto the site
and YouTube pulled it down.
We asked them why and they told us, quote,
we remove content that suggests hydroxychloroquine
is an effective treatment for the virus,
or content that associates vaccines with a high risk of death, end quote.
And yet, that episode is still up on Spotify.
So what's up with that?
Well, over at Spotify, they have different rules.
And when we asked them about this,
they told us that a group of people helped make the final call
on whether a podcast has violated those rules.
They wouldn't tell us who these people were.
So to figure out how this all might work,
we're going to cosplay as Spotify's content moderation team
with Laura Edelson.
She used to be a software engineer in industry,
but decided to leave to pursue research into misinformation.
And she's currently at NYU.
And so I asked her whether Rogan's episode
with Robert Malone violated Spotify's rules.
Hold on a second.
I just want to like actually bring up their policy.
She read this one section where Spotify says that you can't have
content that promotes dangerous false or dangerous deceptive medical information
that may cause offline harm or poses a direct threat to public health.
I wonder if the Malone interview is captured in it.
It's sort of on the line for me.
Well, let's just go through it.
Let's just go through it, right?
So first, it has to promote false or deceptive medical information.
Now, having done a real deep dive on some of the claims in that interview in our last episode,
we at Science Versus think that yes, that interview did promote false medical
information. Next thing, are those false or deceptive claims dangerous? So are those claims
dangerous? Well, how can we understand danger? Well, are those claims going to dissuade someone from getting a vaccine for a virus that yesterday killed 2,500 Americans?
To me, that would qualify as dangerous.
Okay.
But I could see someone else coming to a different conclusion.
If it were me, I would call that dangerous.
Right.
Now let's move to the other half of the statement that may cause offline harm or pose
a direct threat to public health. So do we think that this may cause offline harm? Well, if a
person listens to this and makes a decision not to get a vaccine, they could get ill or die. I would call that offline harm.
Would you call that offline harm?
I would.
I would, yes.
Done.
But obviously not done, right?
Because whoever made this call at Spotify disagreed.
We asked Spotify explicitly why didn't they think this episode broke the rules?
Was it not considered
dangerous or deceptive medical information? And they just ignored our question. Now, we do know
they take some content down. Spotify has said they already removed 20,000 other podcasts related to
COVID. But the fact that the Malone interview is still on Spotify now, it does give us a clue as
to how hard it is to break the platform rules. So now we know that a three-hour interview
misrepresenting vaccines in the middle of a pandemic isn't considered dangerous medical
misinformation. And that's not all. There have been lots of concerns over what's been said
about transgender people on Rogan's show.
And he said the N-word over and over and over again on his podcast.
And yet that all lived happily on Spotify.
No apparent breach of the rules.
Which does make us wonder,
well, how are they enforcing that policy?
And are they enforcing it at all?
Now, Laura says there are a bunch of ways that companies can enforce their rules.
Like you can use algorithms that say detect and pull down content that breaches stuff.
Or you can wait for people to complain.
And Laura says that some companies do this. Zoom, for example, they actually
explicitly have a policy against adult entertainment. Like you cannot, according to Zoom's
policies, you cannot do sex work over Zoom. They have a policy against this. However, in order to understand how that policy works in practice, you need to remember
that Zoom is encrypted end-to-end, and Zoom doesn't have any way of proactively enforcing
that policy. They don't monitor all the Zoom streams and check to see if anybody is naked.
Half the audience just breathed a sigh of relief when you said that.
Those policies only come into force if someone reports,
hey, someone is doing sex work over Zoom.
We asked Spotify,
are they proactively listening to podcasts
or waiting for listeners to make complaints?
And they didn't answer the question specifically,
but they did say,
quote,
Spotify uses a variety of algorithmic and human detection measures, end quote.
Now, because they apparently removed those 20,000 episodes that we talked about before, this does make us think that maybe there is some level of proactive listening going on? Now, we just really want to understand how Spotify makes these decisions,
and we're not the only ones who think this is important.
A few years ago, a group of academics and advocates got together at a conference and came up with transparency guidelines for tech companies.
These are called the Santa Clara Principles.
Basically, these boffins argue that tech companies should explain
when and why they take down stuff. They should even have an appeals process in case people have
stuff that's taken down unfairly. Here's Evelyn again. These companies are making hugely significant
decisions about what speech is or is not in the public sphere. And they should at least tell us
a bit more about what they're doing
and the rules that they're making and how much content they're taking down.
About a dozen companies have endorsed these principles, at least in theory.
Reddit, for example, has begun to release transparency reports every year
with numbers on what kind of content they've removed and why.
Spotify has not agreed to do this.
In fact, we've only found out the bits and pieces
of what they're doing in response to this Joe Rogan controversy.
Spotify so far has been like a fantastic example
of how not to personify the Santa Clara principles, right? Like during this process,
we didn't know until there was outcry about it, what their content moderation rules were.
They were released once they were scooped by The Verge. And we didn't know, for example,
that they had already taken down 20,000 podcasts for COVID misinformation, and we still don't know
what they were, you know, why did they breach the rules and Joe Rogan's podcast
didn't. But we basically know nothing except for a couple of press releases.
So where does this leave us? Well, when Spotify releases its platform rules,
but then tells us basically nothing about how they enforce them, they're asking us to trust them.
To trust that they're enforcing these rules fairly and not, say, giving special passes to the podcasts that reportedly make them lots of money.
It would be great if they were just a little more transparent here.
We don't need to look through their undies or anything.
But this level of cloak and dagger?
Come on.
Okay, so it's pretty clear that Spotify isn't going to boot this episode off the platform
or break up with Joe Rogan anytime soon.
But the good news is, that's not the only option we've got here. For a really
long time, we were stuck in this binary of the only options really are either you take it down
or you leave it up. And that's all we've got. That's all that we have in our toolbox. And it's
really only in the past couple of years that we've seen platforms start to experiment with other
solutions. Other solutions. The hot new
thing in tech is coming up just after the break. Welcome back. We've just talked about how removing content, it can work.
It can mean that less people see it and hear it.
But it only works if you actually remove the content.
So enter the next big idea.
Labels.
Look, it doesn't sound that sexy,
but it's one of the latest tools that big tech is using to solve this problem.
So, for example, Facebook has worked with people who fact check sketchy content.
And when something's not right, they'll pop a label on it to warn people that this is false.
And Evelyn Douek at Columbia says that labels have been a game changer in how we're thinking about this misinformation problem. Like I remember,
I think it was May 2020 when Twitter stuck the first label on a tweet of President Trump, right?
You'll have to fact check that, which I know you will. We did. That's right. And everyone was like,
oh my God, whoa, can you believe they did that? Like this is censorship. Like, oh, this is so
crazy. Right. And now like fast forward a couple of years later,
there's labels freaking everywhere, right?
Like, Trump's Twitter feed by the time the election
was just, like, wallpapered with labels.
Labels are the new hot thing, and do they work?
Yeah.
Does slapping a label on something actually help
to stop the spread of misinformation?
To find out, we called up David Rand.
He's a professor at MIT who studies
how good people are at detecting real news and fake news and whether labels can help here.
Have you ever shared misinformation online? Yes.
What happened? What did you share? So it was this tweet that was like a screenshot of a Ted Cruz tweet that said, I'll believe in climate change when Texas freezes over.
During that time that there was, you know, all this like snow and craziness happening in Texas.
And I saw it and I was like, oh, man, that's so good.
And I retweeted it.
And then maybe an hour later, some other academic replied and said, is that real?
Like, where's the and I was like, oh, I did it.
I did exactly the thing that I'm always talking about.
Wait, so Ted Cruz never said that?
Yeah, that was just a joke.
I've done this on accident too.
It was this tweet about the world's oldest breakup letter, and it was just too good to be true. Literally.
Anyway, to study whether labels could be the antidote to fake news, one thing that David does is collect a bunch of real headlines, which have been fact-checked by places like Snopes, and then he'll see if people can tell what's real.
Can I play? Can I play the game?
Uh, all right. Well, let me get our sort of up-to-date one so I can give headlines that feel more immediately relevant.
Let me just jack into the matrix here.
Yes.
So, Republican senators question Biden's fitness for office amid Afghan debacle.
Accurate. Accurate.
Accurate.
All right.
Great.
Taylor Swift humiliates fan wearing Trump t-shirt on stage.
False.
She would never.
You're right.
She did not do that.
Yeah.
She's not into the humiliation game, I don't think.
Yeah.
It's not her angle.
Yeah.
That's right.
So that one's not true.
Okay.
Report.
FBI investigating whether
Trump spent $3 million in
Wisconsin buying votes.
Huh.
Huh.
I don't think so directly.
That's not his style, right? False.
That's right. So that one's not true.
Okay. So this gives you the flavor.
Okay. Okay. All right. So my bulls**t meter's not too bad.
You're pretty good.
Okay.
You're pretty good.
Turns out that most of us have a pretty good bull meter.
Like one study that David did found that when you ask people to tell if a headline is fake
or not, they'll get it right most of the time, even when it doesn't fit their political opinion.
The problem is that sometimes we share the wrong stuff anyway.
It could be that we're not thinking about whether it's true or false,
but rather how many likes we might get.
After all, it's often the flashiest, craziest,
most sensational posts that get shared the most.
Kind of like David and that fake Ted Cruz tweet.
A lot of the times you forget to even think about is it true or not before you click retweet. And I
completely had exactly that experience as someone who spends all my time thinking about misinformation.
So he's looked into what happens here if you label the baloney with fact-checking labels.
You know, pop a big false
stamp on it. Can that help stop us from sharing this stuff? To find out, David and his team took
some headlines, kind of like the ones we heard before, and he made them look as if they were
posted on Facebook. But on some of the false headlines, we put a big label that says they're false. Yeah, they had these huge red false stamps on them.
And then his team recruited around 1,500 people and split them into groups.
There were people who didn't see any of the labels and people who saw articles with those
false stamps on them.
And then they asked them, if you were to see the above article on Facebook, would you consider
sharing it?
Yes or no?
And so what we found is that very consistently,
people are less likely to share the headlines that are marked as false.
And this is actually as true or even more true
for headlines that they want to believe.
So it's not this kind of thing where you ignore the fact checks
on things you want to believe or share,
but actually you're even more influenced by fact checks on false claims that are aligned with your politics compared to ones that aren't. Oh, this is exciting news.
This is not the narrative of people are hardwired in their holes of bias and can't be pulled out.
That's right. That's right.
People who got the false labels were about half as likely
to say they would share the post compared to people
who didn't get any of the labels.
So this tells us that this can work,
particularly when the labels are big and noticeable.
Plus, David found that they can even work on people who say that they don't
trust fact-checkers. And so even though they, in general, say they don't trust
fact-checkers, in a concrete instance where they see a specific headline that's labeled as false,
they're like, oh yeah, well, okay, fine. Maybe that's, you know, maybe I should adjust.
So we know that big labels saying stuff like this is false can work and slow
the spread of misinformation. And this is the kind of thing that some platforms are now doing.
But now let's look at what Spotify has done here. Now, we know that it's not exactly the same kind
of social media platform as Facebook, but still, there are tons of podcasts that get uploaded onto that
platform every day, and a ton of information and misinformation bouncing around that needs to be
fact-checked. So to help with all this, Spotify added a label to episodes that discuss COVID-19.
Now, their labels don't say that something is false or misleading. Instead, if you go to an episode of a podcast that deals with COVID,
there'll be a button that points you to more info that they think is trustworthy.
Do you think that would work?
So if you were going to Joe Rogan now, on the Malone interview,
you'd see that learn about COVID-19, learn more.
Is that the kind of label that's going to help here?
It's not clear.
What did you just do with your head then since we are audio?
I was shaking my head back and forth, not in a no way,
but in a like, eh.
Not something like, oh, that's terrible, stupid idea,
definitely shouldn't do it.
But my concern is that it's not strong enough. So we've done a bunch of work suggesting that
if you just get people to think about the concept of accuracy, you know, that makes them more
discerning in, in their social media use. And so you might hope that this label of saying,
learn more about COVID makes people think, oh yeah, this is an important thing that I should be thinking critically about.
But it's not clear whether that language is really enough to do that versus like, oh, yeah, great.
I just learned about COVID.
Like here I'm learning about COVID from Malone.
Yeah.
Okay, great.
Maybe I could learn some more, but I'm probably good.
I just learned this thing.
I just learned three hours.
Yeah.
Yeah, exactly. So my feeling is that the labeling should be more explicit than that. For
example, you know, you could have a label that says sort of like many doctors disagree with this.
We also know that just fact-checking itself
can actually change people's minds
and push them away from believing bunk.
One big review that looked at more than 100 papers on this
actually found this very thing.
And experiments on this find
that when you show people the correct information
after they've just seen hogwash,
it can also help.
All right, so it looks like we're getting closer to getting some solutions here. And the final thing we want to look at to see
what Spotify and other tech platforms could be doing here is looking at the algorithm,
the secret formula on these platforms that recommends the stuff we like.
Anybody who spent time on Facebook knows that the recommendation algorithm is sort of the whole ballgame because your news feed is highly curated, highly personalized.
You don't really have much control over it.
This is Hany Farid. He's a professor of computer science at UC Berkeley.
And several years ago, Hany and some colleagues did a study on YouTube. Now, we've all stumbled onto a creepy video on YouTube late at night,
thanks to the old algorithm. But Hany wanted to find out just how creepy things were. That is,
how often was YouTube recommending this kind of thing? And so they decided to focus on the really
out there conspiracy theories.
We did not land on the moon.
The Holocaust never happened.
Lizard people are on this planet with human skins.
And the significant number of our congressional representatives are in fact lizard people.
So in 2018, Hany's group started looking at these videos.
And first, they wanted to train a computer to learn what kinds of videos were conspiratorial.
Hany talked about it with our producer, Rose Rimler.
And he said that one of the telltale signs
for whether a video was a conspiracy one
was whether this word popped up in the comments.
The word, they.
That says so much.
Isn't that awesome, by the way?
Yeah.
So people talk about they, this entity, the government, the media, the scientists,
the podcasters that are keeping information from you.
By the way, another giveaway word was cucumber.
Nah, it was actually the word conspiracy.
People talk about conspiracy.
And they don't do that on kitten videos
and that's fascinating. So next step, Hany's team fired up YouTube and started watching videos,
not the conspiracy theory ones, but just new stuff like the BBC or CNN. They then collected
the recommended videos, you know, the ones that popped up after the news clips, and fed them through their custom
made conspiracy theory predictor program. What we found is at its peak in late 2018,
close to 10%, one in 10, recommended videos on a news video that we initially viewed was
conspiratorial in nature. That's insane. One in ten. This means that at the end of 2018,
if you were innocently watching a BBC segment about lizards, and ten recommended videos popped
up after the clip ended, one of them was probably about lizard people. Or something like that.
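To make the idea a bit more concrete, here is a minimal sketch of the kind of comment-based scoring Hany is describing. This is not the team's actual model; the keyword list, threshold, and function names below are made up purely for illustration.

```python
# Toy illustration only: a crude comment-based "conspiracy score".
# The real study trained a proper classifier on many signals; this sketch
# just captures the intuition that words like "they" and "conspiracy"
# show up far more often in comments under conspiratorial videos.

from collections import Counter
import re

GIVEAWAY_WORDS = {"they", "conspiracy"}  # hypothetical, simplified feature set


def conspiracy_score(comments: list[str]) -> float:
    """Return the fraction of comment words that are 'giveaway' words."""
    words = []
    for comment in comments:
        words.extend(re.findall(r"[a-z']+", comment.lower()))
    if not words:
        return 0.0
    counts = Counter(words)
    return sum(counts[w] for w in GIVEAWAY_WORDS) / len(words)


def looks_conspiratorial(comments: list[str], threshold: float = 0.02) -> bool:
    # The threshold is invented for illustration; a real model would learn it.
    return conspiracy_score(comments) >= threshold


# Example: comments under a news clip vs. under a fringe video
news_comments = ["Great reporting", "Thanks for the update on the election"]
fringe_comments = ["THEY don't want you to know", "classic conspiracy, they are hiding it"]
print(looks_conspiratorial(news_comments))    # False
print(looks_conspiratorial(fringe_comments))  # True
```

A real classifier would learn from many more signals than a hand-picked word list, but the intuition is the same one Hany describes: comments under conspiratorial videos look measurably different from comments under kitten videos.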
Now, that sounds pretty bad, and it was. But the thing is, these companies, they can do something about their algorithms. And YouTube did. In early 2019,
they announced that they were changing their algorithm. They said they were going to cut down
on recommending this kind of stuff. And Hany and his buddies were watching all this unfold. And they found that whatever
YouTube did, it helped. These kooky recommendations went from a peak of about 10% and then it settled
in at around 5 to 6%, 1 in 20. From 1 in 10 to 1 in 20. We asked YouTube about this, and a spokesperson said
that they're working on getting that number down even further.
And something that really struck Hany about all this
was that YouTube could change things by just squelching a few weirdo accounts.
That is, YouTube didn't remove them.
It just didn't recommend them as much.
When we looked at what they actually did,
what we found is that they essentially demoted about a dozen channels. That's it. That's it.
It's about a dozen channels that was the vast majority of the impact. And these misinformation
super spreaders seem to be a thing on other platforms too. A big report from a non-profit
that combed through anti-vax content on Facebook and Twitter last year found that two-thirds of
this stuff could be traced back to just 12 accounts. So you'd think that this would make
it easier to make a dent in this problem of misinformation being spread, because you don't
need to look at absolutely everything on your platform. You can just squelch
the big guns. Which brings us back to Joe Rogan. It's been reported by a bunch of news outlets
that millions of people listen to his show. It's number one on the Spotify charts. And Evelyn,
who we talked to earlier, suspects that Spotify's algorithm is pushing people to listen to Joe Rogan.
I don't know about you, but when I log into Spotify,
I get a bunch of podcast recommendations and sometimes the Joe Rogan experience is there.
And so, you know, they are promoting it.
It's not like they're just a completely neutral platform.
And Evelyn, she doesn't even listen to podcasts on Spotify, just music.
So then I'm trying to wonder why when Spotify
recommended Joe Rogan to you, what in your past might've you been doing on Spotify that made them
go, this is someone who's going to want to listen to Joe Rogan. Yeah. We're going to, we're going to
reveal some deep, dark, embarrassing secrets about the kind of music that Evelyn listens to.
It's just like exclusively UFC soundtracks.
Because they have also, Spotify has also recommended Joe Rogan to me
and I do listen to podcasts on Spotify, but they're all,
like they're science ones.
I listen to Science Versus and, like...
You listen to your own podcast.
I listen to, exactly.
I mean, self-own there, Wendy.
You and your mum.
No, I couldn't get her to switch to Spotify.
What are you doing?
For me, at one point, Joe Rogan was literally on three places on the front page of my Spotify app.
And over this week, they tried to push one Rogan episode on me.
When I didn't click that, they offered another one and then another one.
Now, I've been listening to Rogan, not that much, but a little bit lately to research these episodes.
So you might think, well, this makes sense.
But we wanted to find out whether or not Spotify is recommending Joe Rogan to people
who have never listened to him on Spotify.
People like Evelyn.
We wanted to know how common that was.
So we asked Spotify this.
Again, they didn't tell us.
Now, luckily, we sent out a survey on social media
this past weekend
to get some of our own answers.
And over 1,400 of you who have the Spotify app responded.
We asked you to go open the app
and scroll through the podcast recommendations on the homepage.
Is the Joe Rogan experience there?
It turned out that 10% of people who would never listen to Joe Rogan
saw his podcast there on the homepage. And so it sure seems like Spotify is pushing Rogan
right now in the middle of a misinformation blow up over Joe Rogan. We asked Spotify how many
people listened to Joe Rogan's podcast on Spotify
because they recommended it to them,
as opposed to fans who specifically went looking for him.
And this time, they did respond.
Joking. Of course they didn't tell us.
Instead, they gave us a general statement which said, in part,
quote,
we're investing heavily in developing the world's best recommendation algorithms, end quote.
So we think that Spotify should twiddle the knobs on the world's best algorithm
so that instead of promoting Tosh, they're promoting good stuff.
Now, of course, we know that the algorithm isn't all powerful.
People will find this stuff other ways. And that's because it turns out that right now,
some people just want to hear someone like Robert Malone talk about how bad the vaccines are,
even though that's not true. And that is a problem that we can't totally pin on tech companies.
Here's Evelyn again.
Here's the thing.
I think in some sense we ask content moderation to bear the weight of a lot of problems that
are a lot bigger than content moderation, right?
These are underlying social economic problems that are far bigger than, like, it's all the
things that led to the tweet or the podcast in the
first place.
A Pew poll taken at the end of last year found that Americans' trust in science has gone
down since the pandemic started.
We're also less trusting of politicians and even public school principals.
It's all part of a larger trend that shows we've been losing faith in our institutions
and each other since at least the 1970s.
But tech companies, including Spotify, still play a role here.
By choosing to promote misinformation,
they're profiting from this deep-seated mistrust.
The researchers we spoke to said that while there may not be
some perfect, easy solution to misinformation online,
these are massive companies that we're dealing with, with massive bags of money.
And so companies like Spotify, they can do a whole lot better.
I mean, look, from my perspective, Spotify is doing the bare minimum.
And if this is... because we don't know either, I should say,
even though we're at Spotify, we don't know what else they're cooking
up in this space.
But if what they've announced now is the only thing they're doing,
they're like we have these platform rules and now we have some labels
directing you to content about COVID that we trust,
how good is that when you think about what other platforms
are doing? I mean, you called it the bare minimum, I think. And I think that's being
generous. I think there really is this thing of like, we put a label on it. What else do you want?
Here's our rules. I think I'd really like this to be the beginning of the conversation, not the end of the conversation.
And this is not going away. Like Joe Rogan's podcast is not going to be the last time that
this happens. Like this can't be it. Okay. So here's what we think Spotify should do.
One, they need to be more transparent about why they're taking some things down and not others
and how these decisions get made. Two, they need stronger labels about false or misleading
information, just like what other companies are doing. And we're not asking for fact checks on
every podcast on the platform, but they could just really focus on the popular ones
and the exclusive content. And three, they should
change their recommendation system so that it's demoting misinformation or borderline content.
This is all pretty reasonable stuff, right? And you know what? If they did this,
I might even be able to get my mum to listen on Spotify. That's Science Versus.
Hello, Wendy.
Hey, Rose.
How many citations in this week's episode?
This week, there are...
This week, there are 165 citations.
165!
We did it.
We did it.
We did it.
And if people want to see these citations,
learn more about all the things they've heard in today's episode,
where should they go?
They can go to our transcript,
and the link to the transcript is in our show notes.
And we have to tell our listeners two things.
We reached out to Joe Rogan to get a comment for this episode. He didn't get back to us. The second thing is,
is that it is a perfectly reasonable thing to do to listen to your own podcast once it is out in
the world. I mean, whatever you say. Thanks, Rose. Thanks.
This episode was produced by Michelle Dang, Rose Rimler,
and me, Wendy Zukerman, with help from Meryl Horn,
Ekedi Fausther-Keeys, and Rasha Aridi.
We're edited by Blythe Terrell with help from Caitlin Kenney.
Fact-checking by the amazing Nick DelRose.
Thanks to all the researchers we got in touch with for this episode,
including Dr. David Broniatowski,
Dr. Alice Marwick,
Dr. Anna Zaitsev,
Dr. Homa Hosseinmardi,
Dr. Kevin Munger,
Manoel Ribeiro,
Dr. Rachel Kuo,
Jessica Ann Mitchell Aiwuyor,
and Nick Nguyen.
A very special thanks to Max Green, Casey Newton,
Courtney Gilbert, Dr Karl, the Zukerman family,
and Joseph Lavelle Wilson.
I'm Wendy Zukerman.
Back to you next time.
If I still have a job.