Your Undivided Attention - Down the Rabbit Hole by Design — with Guillaume Chaslot
Episode Date: July 10, 2019
When we press play on a YouTube video, we set in motion an algorithm that taps all available data to find the next video that keeps us glued to the screen. Because of its advertising-based business model, YouTube’s top priority is not to help us learn to play the accordion, tie a bow tie, heal an injury, or see a new city — it’s to keep us staring at the screen for as long as possible, regardless of the content. This episode’s guest, AI expert Guillaume Chaslot, helped write YouTube’s recommendation engine and explains how those priorities spin up outrage, conspiracy theories and extremism. After leaving YouTube, Guillaume’s mission became shedding light on those hidden patterns on his website, AlgoTransparency.org, which tracks and publicizes YouTube recommendations for controversial content channels. Through his work, he encourages YouTube to take responsibility for the videos it promotes and aims to give viewers more control.
Transcript
It's like you have this huge current that pushes you towards being more aggressive, more divisive, more polarized.
That's Guillaume Chaslot, a former software engineer at YouTube.
If you've ever wondered how YouTube got so good at predicting exactly what'll keep you around, ask Guillaume.
He worked on the site's recommendation AI, and he marveled at its power to sweep a viewer along from one video to the next, setting them adrift on a stream of idle viewing time.
He celebrated as the streams multiplied and gathered strength,
but he also detected an alarming undercurrent.
It was always giving you the same kind of content that you've already watched.
He couldn't get away from that.
So you couldn't discover new things.
You couldn't expand your brain.
You couldn't see other points of view.
You were only going to go down a rabbit hole by design.
To understand where these algorithms might take a viewer,
consider for a moment how they're designed.
Think of that moment when you're about to hit play on a YouTube video.
And you think, I'm just going to watch this one, and then I'm out, and that'll be it.
When you hit play, inside of YouTube's servers, it wakes up this avatar, voodoo doll version of you,
based on all your click patterns and everyone else's click patterns, which are kind of like your nail filings and hair clippings and everyone else's nail filings and hair clippings.
So this voodoo doll starts to look and act just like you.
And then they test, like they're throwing all these little video darts out at
you and see, if I test these 100 million darts, which video is most likely to keep you here?
So now in this case, YouTube isn't trying to harm you when it out-competes your self-control.
In fact, it's trying to meet you at the perfect thing that would keep you here next.
That doesn't have to be bad, by the way, right?
Like, they could just, you know, show you entertaining things.
And so suddenly they've taken control of your free will, but it's not bad because they're just showing you cat videos or whatever.
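To make the "voodoo doll" idea concrete, here is a minimal sketch, not YouTube's actual code, of what an engagement-driven recommender does: build a profile from watch history, predict how long each candidate video would hold this viewer, and serve the top scorer. Every name and number below is an illustrative assumption.

```python
# Minimal sketch of an engagement-driven recommender; illustrative only, not YouTube's code.
# The profile (the "voodoo doll") is just a bag of watch-history features; the toy model
# predicts expected watch minutes for each candidate and the top scorer gets served.
from dataclasses import dataclass

@dataclass
class Video:
    video_id: str
    topic: str
    avg_watch_minutes: float  # how long similar viewers typically stay on this video

def build_profile(watch_history: list[Video]) -> dict[str, float]:
    """Turn a click/watch history into topic weights: the 'voodoo doll'."""
    profile: dict[str, float] = {}
    for v in watch_history:
        profile[v.topic] = profile.get(v.topic, 0.0) + v.avg_watch_minutes
    return profile

def predicted_watch_time(profile: dict[str, float], candidate: Video) -> float:
    """Toy prediction: affinity for the candidate's topic times its typical watch time."""
    affinity = profile.get(candidate.topic, 0.1)  # small default for topics never watched
    return affinity * candidate.avg_watch_minutes

def next_video(profile: dict[str, float], candidates: list[Video]) -> Video:
    # The objective is purely "keep you here": nothing asks whether a video is true or helpful.
    return max(candidates, key=lambda c: predicted_watch_time(profile, c))

history = [Video("a", "diet", 8.0), Video("b", "diet", 9.0)]
candidates = [Video("c", "diet", 12.0), Video("d", "news", 5.0)]
print(next_video(build_profile(history), candidates).video_id)  # "c": the diet video holds this viewer longest
```

The design choice worth noticing is the objective function: `next_video` only maximizes predicted watch time, which is the point the conversation turns to next.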
Guillaume observed a subtle but unmistakable tilt in the recommendations.
It seemed to favor extreme content.
No matter where you start, YouTube always seemed to want to send you somewhere a little bit more crazy.
What Guillaume was seeing was algorithmic extremism.
When I saw that, I thought, okay, this is clearly wrong.
This is going to bring humanity to a bad place.
Now, this is exactly what you would hope to hear from a conscientious programmer in Silicon Valley,
particularly when that programmer is building an algorithm that can determine what we
watch, to the tune of 700 million hours a day.
Guillaume could see how these cross-currents would pull viewers in countless delusional directions.
He knew the algorithm had to change, and he was confident he could change it.
So I proposed different types of algorithms, and a lot of Google engineers were motivated by
that. Like, seven different engineers helped me for at least a week on these various projects.
You'd hope this would mark the beginning
of a humane design movement at YouTube's headquarters.
So what happened?
But each time it was the same response from the management.
Like, it's not the focus, we just care about watch time,
so we don't really care about trying new things.
Anticlimactic, right?
And that's actually how this conversation plays out
or fizzles out again and again in Silicon Valley.
Managers rarely ever reject solutions outright.
They just do what managers do, set the team's priorities.
A slow, creeping tilt towards user extremism, that'll never be on the top of this quarter's priority list.
Today on the show, we'll ask Guillaume to game out the consequences.
I use the word game deliberately because, in a sense, YouTube's recommendation system is engaged in a chess match for your attention.
It's trying to predict your next move to catch the moment you've had enough and are ready to leave the site,
and to overwhelm your self-control with the next irresistible video.
And you may think, you know, big deal, that's on me.
Well, take it from the designer of these algorithms: you're up
against a machine that can outsmart a chess master.
So today on the show, Guillaume Chaslot,
AI expert and founder of the nonprofit Algo Transparency,
will explain why.
For the sake of humanity, we must shed light on these algorithms,
understand their inner workings,
and more importantly, make visible their outcomes.
So we can tilt the rules of play back in our favor.
I'm Tristan Harris, and I'm Aza Raskin.
This is your undivided attention.
Why are they even doing these recommendations?
I mean, you could imagine landing on a video site.
You watch a video, but there's no recommendations.
So, like, why is recommendations so important to YouTube?
So more than 70% of their views come from the recommendation.
That's huge, knowing that they do one billion hours of watch time every single day.
70% of that is, like, a tremendous amount of watch time.
Yeah, sort of like, that's 700 million hours a day
of dosing humanity with something that humanity hasn't chosen.
Exactly.
So you have very little choice on this content
because the YouTube algorithm has 10 billion videos
or I don't know how many billion videos
and it chooses the 10 to show to you in front of your screen.
And then you have just a tiny little choice
between those 10 to choose which one you want to see.
So 99.99% of the choice is made by an algorithm
that you don't understand and you don't control.
So I'm like, okay, I'm just getting shown stuff that I, like, clearly have a revealed preference for that I'm clicking on or watching, or that works on other people.
Besides filter bubbles, what harm does that create?
So it creates a bunch of harms.
So one harm I'll show is conspiracy theories, because conspiracy theories are really easy to make.
You can just make your own conspiracy theory in, like, one hour, show
it, and then it can get millions of views.
They're addictive, because people who live in this filter bubble of conspiracy theories
don't watch the classical media, so they spend more time on YouTube.
So every single one of their views carries more watch time, more total watch time.
So it will have much more weight in the algorithm.
So the more people watch them, the more they get recommended.
It's like a vicious circle.
So you're saying that conspiracy theories are very effective at grabbing our attention
and keeping us around, and they become kind of like black holes.
And if the system is just recommending the stuff that people click on,
one of the techniques it's going to find is recommending conspiracy videos,
because conspiracy videos are very effective.
Is that what you're saying?
Exactly.
That's the same way a black hole forms and only grows bigger.
Like, by design, this conspiracy theory black hole can only grow bigger,
because the people who are in there spend more time than others.
Imagine you say you're someone who doesn't trust the media.
You're going to spend more time on YouTube.
So since you spend more time on YouTube,
the algorithm thinks you're better than anybody else for the algorithm.
That's the definition of better for it.
It's who spends more time.
So it will recommend you more.
So there's, like, this vicious circle.
So it's not only, like, don't trust the media,
but it's with any moral:
the algorithm by design will be anti-moral.
So if you have, like, a moral in the society
that says racism is bad,
humans are equal, and some people think, no, racism is good,
they will spend more time on YouTube,
so they will get recommended more by the algorithm.
So, like, the anti-moral will be favored by the algorithm.
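A toy simulation of that vicious circle, with entirely made-up numbers: if the share of recommendations is proportional to watch time rather than to audience size, the niche that watches longest keeps gaining recommendation share, which keeps growing that niche.

```python
# Toy illustration of the watch-time feedback loop; all numbers are invented.
# Heavy-watching content earns a recommendation share proportional to its watch time,
# which converts more viewers, which earns it an even bigger share the next round.
def simulate(rounds: int = 5) -> None:
    audience = {"mainstream": 1000.0, "conspiracy": 10.0}        # viewers per niche
    minutes_per_viewer = {"mainstream": 30.0, "conspiracy": 120.0}

    for r in range(rounds):
        watch_time = {k: audience[k] * minutes_per_viewer[k] for k in audience}
        total = sum(watch_time.values())
        # Recommendation share tracks watch time, not audience size.
        rec_share = {k: watch_time[k] / total for k in watch_time}
        # A small fraction of mainstream viewers follows the conspiracy recommendations shown to them.
        converted = 0.05 * audience["mainstream"] * rec_share["conspiracy"]
        audience["mainstream"] -= converted
        audience["conspiracy"] += converted
        print(f"round {r}: conspiracy rec share {rec_share['conspiracy']:.1%}, "
              f"conspiracy audience {audience['conspiracy']:.0f}")

simulate()
```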
So Google is saying, yeah, we give a place for these people
who are not accepted by society,
we give them a place to express themselves, and I have no problem with that.
But what I have a problem with is that it's structurally, systematically anti-moral.
So even if we reach a new moral, let's say we go towards a moral in which like, okay, racism is great, then the anti-moral will win again.
It's just ridiculous.
What I think I'm hearing you saying is that the AI, the recommendation system, doesn't have a sense of what's right,
what's wrong. All it has a sense for is what works.
Yeah. That we're sort of A-B-testing our way with the smartest supercomputers pointed at our
minds to find sort of the soft underbellies to just be like, what's effective? And so we're
A-B-testing our way towards anti-morality or immorality or amorality.
Yep.
Are there any specific examples that, like, can light up my mind?
Like the flat Earth conspiracy theory, for instance, got hundreds of millions of recommendations for
something. Hundreds of millions of recommendations.
Yeah, it's like for something that's completely absurd.
So one of the arguments was like we're just showing what people make.
But that's not true, because if you searched on YouTube, is the Earth flat or not,
you had 35% of search results that were flat Earth conspiracy theories.
But then if you followed recommendations, like I followed thousands of recommendations
and then took, like, the 20 most recommended videos,
out of those 20 most recommended videos, 90% were flat Earth conspiracy theories.
90%. That's insane.
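A rough sketch of the measurement methodology Guillaume just described: start from a few seed videos, follow the top recommendations for a few hops, and tally which videos the recommender surfaces most often. The `get_recommendations` function is a stand-in for whatever scraping or data access a researcher has; it is not a real API client.

```python
# Sketch of the "follow the recommendations and tally them" methodology.
# get_recommendations() is a placeholder, not a real client; plug in your own data source.
from collections import Counter

def get_recommendations(video_id: str) -> list[str]:
    """Placeholder: return the IDs of the videos recommended alongside video_id."""
    raise NotImplementedError("plug in a scraper or data export here")

def crawl_recommendations(seed_ids: list[str], hops: int = 3, top_n: int = 20) -> list[tuple[str, int]]:
    """Follow recommendations for a few hops and count how often each video appears."""
    counts: Counter[str] = Counter()
    frontier = list(seed_ids)
    for _ in range(hops):
        next_frontier: list[str] = []
        for vid in frontier:
            recs = get_recommendations(vid)
            counts.update(recs)
            next_frontier.extend(recs[:5])  # follow only the top few, the way a viewer would
        frontier = next_frontier
    return counts.most_common(top_n)
```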
So I think one thing that people tend to think about with this is, I mean, if you just go back
to just the simple human experience of YouTube, like, why are we spending all this time?
And the average watch time per day is 60 minutes now.
The YouTube product officer, chief product officer, Neil Mohan said it's because our recommendations
are getting that good.
So the reason that watch time is going up is because the recommendation system is getting
stronger and stronger and stronger and stronger every year.
And we're not talking about the fact that this is a huge asymmetry of power.
They have supercomputers.
I mean, who has the biggest supercomputers in the world?
It's Google and it's Facebook.
Also, we blame ourselves.
We blame teenagers.
We're like, hey, this teenager.
You should have more self-control.
Yeah, you have bad parenting, you're a bad person,
but you have a supercomputer playing against your brain, as you said,
and it will find your weaknesses.
Whatever you, it already studied the weaknesses of billions of people,
it will find them.
So my weakness, for instance, is plane landing videos.
I don't know why.
I'm fascinated by plane landing videos.
There's a lot of that on YouTube.
And I'm like, if you would ask me, do you want to watch accidents on plane landing videos?
I would say, no, never show me that.
I don't want to waste my time watching that.
But you can't say that to YouTube.
So it will show it again and again.
And I lost so much time watching this plane landing video.
This is ridiculous.
So YouTube is discovering these weaknesses for so many different demographics, right?
So you have this example of, you know, teen girls who started watching dieting videos,
like, you know, what kind of food should I eat?
They get recommended anorexia videos because they're better at holding on to that demographic.
And they recommended this millions and millions of times.
You have this other example, you know, of you watch a 9-11 news video,
and it recommended 9-11 conspiracy theories.
And the number with Alex Jones, for example, always stunned me.
You said that it recommended Alex Jones videos 15 billion times.
Yeah, and that's a lower estimate. I think it's much more, but we have no idea how big it is.
Why do we not have any idea? Because YouTube doesn't want to say, like, how many times they
recommend each video. So they're like, yeah, if we start saying it, then we give way too much
information. They have no incentive to actually do it. What are they afraid of? For the small
recommendations, they might be afraid of people gaming their algorithm, but people are already gaming their
algorithms. If you search on Google, should I buy YouTube views? You have an information panel
that says, yes, you should buy YouTube views. It's like Google's algorithm says that you should
game YouTube's algorithm. One of the things I also find fascinating about your research is when you
look at YouTube's incentives, it's not just that it wants your watch time as a viewer. It's also
trying to give YouTube creators that hit of dopamine because when you host, when you publish a
video, it needs to give you that rush of feedback of, look at how many people are starting to watch
your videos. So it gets everyone addicted to getting attention from other people.
Guillaume, can you explain the cold start problem that YouTube's trying to solve?
So when you put a video online, it has no views or very little views. Maybe you send it to your
friends. It's like five views. But from five views, it's mathematically impossible to detect
how good the video is from just these five views. So the algorithm has two ways
to behave. It could either really try to be fair and give an equal chance to every single video
and say, okay, it only had five views and it doesn't seem so good for now, but we are going
to show it to a lot of people to see how well it performs. So that would be like the fair way
of behaving for YouTube. But this fair way of behaving is way too costly because by doing that
means that you promote really bad videos millions of times. Like, if you take all the bad videos
that you have on YouTube, you have to promote them each, let's say, a thousand times before
you can have good statistics. So if you do that on thousands of videos, it's like millions
of times. It's probably millions of videos, so you do that billions of times. So you lose a lot of
money and a lot of people get away from YouTube. So that's not what YouTube is doing. YouTube's
algorithm is pretty greedy. So if you don't have... It's a greedy algorithm.
Yeah. So that's a scientific term. It's actually greedy. So it means that if it doesn't have
extremely good stats from the start, the algorithm is going to stop recommending your video
right away. So you get this: the very first few views are crucial. That's why Google's
algorithm says you should buy fake YouTube views, because that's the best way of juicing the start of your video.
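In bandit terms, the trade-off Guillaume describes looks roughly like the sketch below, which is illustrative only: a greedy policy keeps serving whatever already has the best watch-time statistics, so a new video with a few weak first views never recovers, while an epsilon-greedy policy pays a small exploration cost to give unproven videos a chance.

```python
# Illustrative sketch of greedy vs. epsilon-greedy selection; not any platform's real policy.
import random

def pick_greedy(stats: dict[str, tuple[float, int]]) -> str:
    """stats maps video_id -> (total_watch_minutes, impressions); pick the best average."""
    return max(stats, key=lambda v: stats[v][0] / max(stats[v][1], 1))

def pick_epsilon_greedy(stats: dict[str, tuple[float, int]], epsilon: float = 0.1) -> str:
    # With probability epsilon, give a random (possibly brand-new) video a chance,
    # accepting some lost watch time in exchange for better long-run statistics.
    if random.random() < epsilon:
        return random.choice(list(stats))
    return pick_greedy(stats)
```

Under the purely greedy policy, early statistics decide everything, which is exactly the pressure toward buying fake early views that Guillaume describes.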
I mean, what I always find sad about this is when you just look at what it's done to
culture. Because, you know, we have this point that in the race to get attention, it's not just enough
to get your attention; we have to get you addicted to getting attention from other people. And so it
turns society inside out, where we all want to get as many views, followers, subscribers as possible.
And because of this cold start problem, YouTube wants to make you feel like, hey, don't you feel
famous? Like, we get you a thousand views, and you want that number to go up. And we're also vulnerable to
that. I mean, we're just like chimpanzees getting this number to go up and they're like,
oh, that feels so good. I'm going to, like, refresh it again and see if I got some more
views now. And listen, you know, I, by the way, I've studied this for like 10 years. I do this
still, even last night, we had this big, you know, Wall Street Journal piece on our work. And I
went back and I checked again to see how many people had been looking at it and what kind of
feedback it had gotten because we really care what other people think about things. And that was a design
choice YouTube made because they wanted to get people addicted to how many views they had. I don't
mean nefariously, like bad, evil people. And so you want the whole system to be kind of
operating in a way where you automate all the chimpanzees to just like want to put up
their videos of themselves and the makeup videos and all these things. And it's really turned
culture inside out.
Hey, this is Aza. So we're going to pause Guillaume's interview for a second.
Tristan testified before the U.S. Senate on persuasive technology last week. And I wanted to ask him
to tell us about it. I've never gone up and given a Senate testimony. What is that like?
I honestly was really impressed with how, especially a few of the senators, had really understood it.
I mean, I felt like Senator Schatz and Senator Thune, who were the two chairs of the Commerce
Committee that I testified at, knew the topics very, very well. Senator Blumenthal knew the topics
very well. Senator Markey knew the topics very well. And I think that this helps displace this notion
that government doesn't get it. Now, yes, are there some people on that committee who knew
far less or may have, you know, made gaffes in their comments? Absolutely. But especially,
I think, Senator Schatz, who's from Hawaii, he gets the entire thing.
Social media and other internet platforms make their money by keeping users engaged. And so
they've hired the greatest engineering and tech minds to get users to stay longer inside of
their apps and on their websites. They've discovered that one way to keep us all hooked is to use
algorithms that feed us a constant stream of increasingly more extreme and inflammatory content.
And this content is pushed out with very little transparency or oversight by humans.
I was talking recently to the former FCC chairman, Michael Powell, and we were talking about
standards and practices for children's television. It is, I believe, still the case that Nickelodeon
or a children's TV network is not allowed to put a URL
inside of a TV program, for the danger that you might be pushing a child towards a website
where you don't know where it's going. Can you imagine that, compared to YouTube, where YouTube is
push, push, push, push, push. That's all it does is it pushes you and shoves you. And even worse than
that, it has all those buttons and links that come up saying, subscribe here, click on this. Do you want
more child pedophilia? Do you want to know even more how to commit suicide? Like, this is a war zone.
We would have never, you know, in my Senate testimony, I made the analogy to Mr. Rogers.
You know, I mean, I just love this example because he came to the exact same committee, the Commerce Committee, 50 years ago.
And he said, you know, he was so concerned about what he called the animated bombardment that we were throwing in front of children.
We deal with such things as the inner drama of childhood.
We don't have to bop somebody over the head to make him, to make drama on the screen.
We deal with such things as getting a haircut
or the feelings about brothers and sisters
and the kind of anger that arises in simple family situations
and we speak to it constructively.
How long a program is it?
It's a half hour every day.
And he convinced the hearing, you know, the committee.
In six minutes, he takes the most cynical senator
who's like ready to defund all of PBS
and he just says at the end, well, I guess you got your $20 million.
The comparison is mind-blowing.
By contrast, we are sending children into a kind of a war zone of unpredictable, mindless, and extreme stuff.
I just want to add this other note from my conversation with a former FCC commissioner
that if you looked at the time spacing between commercial breaks and television shows 30 years ago,
the screen would go black before the commercial break.
And there'd be like a pause, a real pause.
where there was just nothing there.
And that originally, as I understand it,
was in part related to the way that in theaters
the curtains would drop.
And there's a break.
And you have to sort of, you know,
you go out of the theater
and you come back, the intermission, et cetera.
You mean you have to decide?
You have to decide, yeah.
It forces you to make a conscious choice.
And especially when it comes to children
in front of television, you know,
that break is critical.
I mean...
So you're saying there used to be breaks
in between shows.
Yeah.
Okay.
Yeah, exactly.
And so these are the kind of stopping cues
that we've deliberately lost, per our episode with Natasha,
and the design towards, you know,
removing right angles and all purposeful breaks.
Yeah.
And it just really struck me that, wow,
this is something that we really need as humans.
Now, back to the interview with Guillaume.
One of the other things I find fascinating is,
it's a point Aza actually has made,
that the attention economy grooms humans
to be better for the attention economy.
Like, there's almost two ways to predict a human's behavior.
One is, given this human, let's build a more and more accurate model so that we can predict better and better with bigger computing power and more data, meaning more nail filings, more hair clippings, to make them look more and more like you.
We're going to build a bigger supercomputer so that whatever we couldn't predict yesterday, we can predict more of today.
But the other way to better predict people is to make them simpler and more predictable.
So if you make them act out of their fear, out of their amygdala, out of their dopamine, sort of wriggling
around inside their nervous system, their more reactionary impulses, they're also more predictable.
And so we're being groomed on a certain level to be more reactionary, more outraged, more concerned
with our status and how we are perceived by other people, more addicted to getting attention
and status. We might want to talk about this thing. How is YouTube actually responding to all
this? Because if we're tilting the landscape, they've hired 10,000 content moderators, if not more
now, probably mostly in English. And the joke about this is that this is like hiring
10,000 boulder catchers.
So while the landscape's been tilted and all the outrage and polarization and crazy
conspiracies are flowing downstream, they hired 10,000 people to catch the boulders, and it's
like they're not going to catch, you know, nearly enough of them.
No, and they catch only the very extremely visible things.
And there's a danger that they remove all the things that are visible, and then you don't see,
like, all the little boulders that go down.
So for instance, we saw, like, one thing they didn't catch for, like, the last 10 years,
and they just caught, like, this February: this pedophilia problem.
So basically the algorithm was recommending little girl videos to people who were watching fitness videos,
because there was one chance out of, I don't know, maybe one chance out of 100 that you would be a pedophile.
And pedophiles watch these videos for a long time.
They watched little girls for a long time.
So the algorithm was actually recommending little girls to many people.
Until this one YouTuber discovered that, like, nobody was talking about it.
And, like, YouTube reacted, like advertisers reacted.
It was a huge deal.
But it's like one more thing.
I mean, this speaks to something critical structurally about this problem, which is, you know,
per your point, when does YouTube respond to these problems?
And it's usually because, you know, someone like you,
you know, stays up until three in the morning, an unpaid nonprofit civil society researcher
who's just staying up saying, hey, look, I think I'm seeing some problems, I'm building my own
tools, I'm not paid by a company. I'm just doing this because I care and I understand some of
this. There's, you know, a handful of people like you, like Renee DiResta, people scanning for
what is Russia doing, what is China doing, what is Iran doing, or what are the recommendation
systems doing? They're unpaid, and they find these things. They have to work hard to get,
you know, the Washington Post or New York Times to report on this stuff, or they work hard
to get a hearing held or to get Senator Mark Warner or Adam Schiff to write a letter, to get, you know,
the companies to respond. They work so hard to do this, but they're only catching a handful of
these issues and mostly in English and in Western markets. But now you consider that YouTube is
actually the most popular. I think in Mexico and in the Philippines, the watch time is like off
the charts. And it's the most efficient medium. And how many people like Guillaume or like Renee or, you know,
these people who are doing the hard work exist in these other markets.
So, yes, we have maybe the best boulder catchers in the world here in the United States,
but we only have a few of them, and we have none of them in some of these most vulnerable countries
where we know the polarization is most extreme, whether that's Myanmar and the genocide happening there
with the Rohingya minority group or, you know, the way that in Christchurch with the New Zealand shootings,
I mean, they've created a digital Frankenstein that they cannot control.
Yes, so to give an example of that, this asymmetry: like, measles outbreaks, like, rose...
Measles outbreaks.
Measles outbreak, yeah.
Rose 300% in the first quarter of 2019, globally, in the world.
300% more measles cases.
Wow.
Yes.
And in some parts of Africa, it's 700%.
So it's huge.
And so you see the asymmetry between countries
that have, like, structures and good media
that can fight these social media problems,
and Africa, which doesn't have good
political structures and a good press,
so they just can't fight it,
and then people really sincerely believe
that vaccines are designed to kill you
So yeah, so quickly, just draw me the line
from, like, YouTube recommendations
to these incredible stats.
I noticed like from two years ago
that YouTube recommendations were showing a lot of anti-vaccine conspiracy theories.
And not even, so there were different types of anti-vaccines.
So they were like, for instance, Bill Maher, who was saying,
hey, don't take the flu shot.
It's like, it's a bit of a kill.
I wouldn't take the flu shot.
And I'm like, okay, fine.
But there are these really, really dumb videos saying just, vaccines are designed to kill you,
or, look at my little child before and after autism,
and you have, like, this very emotional video
with this very emotional music
of a little girl who was
beautiful before having autism
and then started to have autism
and degenerated.
The parents said it was because of the vaccine
but there's absolutely no scientific evidence.
And now if we take this more full-stack
sort of socio-emotional understanding
of why this is happening,
think about being a parent.
Why is it so compelling to watch a
video like that? Because the idea that you would inject your child with something that would give
them autism is so fear-inducing that if there's even the tiniest, tiniest chance that that could
happen, if that surrounds you, and if when you, as you said, like you start on a YouTube page,
right, for Bill Maher saying, hey, don't do the flu shot. That's a reasonable video to watch.
I'm sure he said it in a funny way. But then for someone who watches that video, YouTube calculates
what would be the really good videos to show someone who saw that Bill Maher flu shot video?
And it discovers that these are these incredible, emotional, powerful videos of parents.
And that's preying on fear.
It's preying on emotion.
What if I screw up?
It's the loss of it.
I mean, it's horrific.
And so it's totally understandable why, A, these videos would be recommended.
And B, why parents are going to react so strongly to this.
It's a very powerful stimulus to throw in front of the human nervous system.
This is also what people need to understand is how, you know, what a global problem this is.
You know, you have this example of the Syrian refugees and I think some Russian conspiracies about Syrian refugees.
The white helmets in Syria are a peacekeeping force.
Yes, exactly.
But if you actually Google for or if you're searching YouTube or looking at those videos, YouTube recommends what?
Yeah, so the Russian propaganda outlets made a very good case for the White Helmets being, like, this terrorist force that are, like, secretly helping terrorists and doing terrorist things.
And so they made so many of these viral videos, without foundation behind them, that if you start looking for the White Helmets, YouTube will tell you that they're a terrorist group.
I met someone from the white helmet and she told me like she had a member of her family telling her like, hey, what are you doing?
Are you terrorists?
So the people start to believe more what they see on YouTube than their own family.
Imagine that.
Yeah.
There's one stat that you mentioned to me yesterday about the Mueller report coming out and which channels were most recommended.
Yeah. So basically I saw in the data that a Russia Today video that didn't get that many views, it got around 50,000 views,
got recommended from more than 236 different channels. That was more than any of the 84,000 videos that I was monitoring.
So this Russia Today video was more recommended than any other video about the topic of the Mueller report.
Exactly.
So Mueller report is about the Russian interference.
So YouTube is recommending the take of the Russians on their own interference in the 2016 election.
That would be pretty ironic if it weren't so important.
So I think there's this sort of automated machine.
You know, we use this rhetoric that these systems are
putting thoughts in people's minds, that, you know, it's hijacking our minds.
People say, oh, you're just over-exaggerating.
Like, what are you talking about?
Like, I'm choosing my own thoughts.
I'm living my own life.
You know, I also, maybe I don't even use YouTube, but when you really realize the scale
of this and the actual reality, the evidence that people are subtly, psychologically
influenced on the emotional level and the physical level and the behavioral level to believe bits
and pieces of this, even as you said, if one out of a thousand people, you know,
people believe the Alex Jones things. Just imagine a thousand people and only one of them
believes the Alex Jones stuff. That's 15 billion divided by a thousand, 15 million people. That is such an insane number.
I mean, your brain, literally, talk about humane technology, your brain is, our brains are not
tuned to deal with large numbers. We cannot even reason about them. We have such poor reasoning
about large systems. So one of the hardest things for me about this problem is how do we get
people to see the vast scale and the influence, like at a global stage level, because
people want to focus on their own experience.
Like, I'm not addicted to YouTube, and we're trying to get them to say, like, it's not about
you.
It's about this much bigger system.
And what do you say to that?
Yeah, so the first thing is that if you listen to this podcast, you're not one of the most
vulnerable people, and you can say, okay, it's fine, it's just the vulnerable people who are getting
tricked by the algorithm.
But the algorithm is improving so fast that soon it will be you.
So that's why you need to pay attention to the vulnerable people right now.
That's why we need to pay attention to anti-Vax.
That's why we need to pay attention to conspiracy theories.
People don't realize how different it could be.
We realize it because we were at Google.
We were there when the choices were made,
and we realized how different the choices could have been.
But people think it has to be that way because...
That's all they've ever seen.
Well, let's talk about that for just one second,
because Guillaume, you and I both shared this experience of seeing some of these problems
while being inside of these companies, right?
A lot of people, Guillaume, probably, when they hear your or my story, they think,
oh, it's these greedy companies, and they just want their money, and they, that's why they're
not changing, you know?
But in my experience, I would talk to, say, someone who ran Android, and I would say, okay, you
are in control of whether you hand the puppet strings of two billion people to these apps.
You're the government regulator of the puppet strings, and you get to decide which strings
they're allowed to pull and which ones they're not allowed to pull. And what are you going to do?
And, you know, I would explain this problem and people would look at me and they would nod and they'd say,
yeah, that's a problem, that's an issue. And then they'd say, I'm really glad you're thinking about that.
And then we would make these proposals of here's how Android could be different. Here's some
notifications rules. But nothing ever got implemented. And it wasn't because in my case, someone said,
that's going to drop revenue. It was mostly like, well, is that really a big problem? Like we've got these
OEM suppliers, we've got these new phones to ship, we've got the next version of Android to ship
next year, it just never became a priority. And I'm curious in your case, you know, there you were
in YouTube and you're starting to raise these issues. How did people respond? And what did you
try to do on the inside? Exactly. So from the inside, I tried to be very positive,
mostly because we have this image that the French are always complaining.
And so I didn't want to be this typical Frenchman who complained about things. I read some conspiracy
theories that the French are the most complaining, you know.
No, no, they are, that's true.
So I didn't want to be like that.
I wanted to be, like, positive into solutions.
So there was this thing at Google, there was this thing.
I love this.
Like, if you see a problem, fix it.
If you see a problem, don't complain about it to the management.
Just fix it by yourself.
Right.
And that was supposed to be rewarded.
So that's what I did.
I saw this problem and I proposed and implemented solutions with some people.
And I thought that was going to work.
But then, as you said, people said, yeah, is it really a priority?
We're trying to make the product grow, like, 30% a year. That's huge. That's the priority.
Like, watch time growing 30% a year, that's fantastic. This is, like, a distraction.
For them it was a distraction, like, trying to help people get out of their filter bubbles.
Right.
So how do we go about creating protections or regulations around recommendation systems?
Yeah, so that's very tricky because we don't want to block free speech, of course,
and that should be the absolute priority.
But recommendations are not free speech.
It's just the freedom of Google to make money with anti-vaccine content.
So it's not a problem of free speech to regulate recommendations.
It's a problem of free speech to regulate what can be put on the platforms,
and that's why CDA 230 that regulates platforms right now is actually a really good positive legislation.
But at the time when it was passed, recommendations didn't exist, AI didn't exist.
So it was a very different game.
Well, let's explain what CDA 230 is.
So this is the Communications Decency Act of 1996, I think, and Section 230.
So if someone says CDA 230, they just mean that,
and this is specifically carving out no responsibility:
Platforms are not responsible for the content that appears on them.
This is what allows the Internet to grow.
It seems like a great thing.
But as you said, it was before the age of AI.
It was before anyone had built recommendation systems.
It was before there was YouTube, because 1996 is actually 10 years before YouTube.
And it's a completely different thing to be like, yeah, platform, you're not responsible for what a user uploads, than saying, platform,
you're not responsible for taking something a user uploads and promoting it or amplifying it.
And this is where we come to that phrase of the freedom of speech is not the same thing as the freedom of reach.
Imagine if we said, cool, like what's true for the New York Times and other media is just true for YouTube.
Anytime that you as a platform make the curatorial decision, whether it's with humans or with an algorithm, to amplify content, it's at that point that you become liable.
Yeah, exactly.
So after how much amplification you become liable is a valid question.
But everybody would agree that if an algorithm amplifies something a billion times, it should be liable.
At some point, the number of times that the algorithm said that Obama was born in Kenya was in the hundreds of millions.
Hundreds of millions.
So it's completely crazy.
It's probably more than the population of the U.S.
It's insane.
At some point, if the algorithm is not liable, these things are going to happen.
So the idea is to have accountability.
Like, at least, okay, we can have an AI like in charge of where we're going,
but at least we should know where it's heading.
So we should know, like, when a piece of content is recommended on YouTube,
we should know which proportion of the views comes from the recommendation,
and which proportion of the views comes just from one human recommending it to another.
So there was this law passed in France that says exactly that.
YouTube should say the proportion of views coming from each algorithm promoting the content.
So if you have a video that's got 100 million views,
then you should be able to say
what percentage of those views came from recommendations.
So this would open up a bunch of transparency and accountability for YouTube.
Exactly, and which percentage of the views comes from the search results, etc.
So you would have more visibility into what's going on.
So if something goes wrong, like the pedophile case or bad child videos and stuff,
you would see it much faster because you would see that the algorithm is starting to amplify like crazy some specific type of videos.
So you wouldn't need to wait until the problem is too big.
You could see it faster.
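A minimal sketch of that disclosure, assuming per-video view counts broken down by traffic source; the source names here are assumptions for illustration, not YouTube's real reporting schema. It computes what share of each video's views the recommender itself generated and flags videos that are being amplified mostly by the algorithm.

```python
# Sketch of the "what proportion of views came from recommendations" disclosure.
# The traffic-source keys are illustrative assumptions, not a real schema.
def recommendation_share(views_by_source: dict[str, int]) -> float:
    """views_by_source, e.g. {"recommended": 90_000_000, "search": 8_000_000, "external": 2_000_000}."""
    total = sum(views_by_source.values())
    return views_by_source.get("recommended", 0) / total if total else 0.0

def flag_algorithmically_amplified(videos: dict[str, dict[str, int]], threshold: float = 0.7) -> list[str]:
    """Return the IDs of videos where most of the audience was supplied by the recommender."""
    return [vid for vid, sources in videos.items()
            if recommendation_share(sources) >= threshold]
```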
Another problem, it seems, is just speed because if you think about the most profitable business model for YouTube,
it's to have all of this running on automation.
So you publish a video and it becomes available and recommended everywhere instantly, as fast as possible, with no human reviewers.
That's the most efficient business model.
Then you have no human beings in between, guarding between what is being broadcasted and the sensitive people on the other end.
And that includes children on YouTube for kids.
That includes in Syria, what people are believing about these sort of war zones where there's not much information coming out.
So having a more sensitive, more protected way in which information gets controlled
or shared, or, you know, there's more thoughtfulness and not just an automated channel
that's just trying to maximize profits. So this is why I think what you're doing in France is so
critical and why we could replicate that in the U.S. or the EU. And why I think Algo Transparency is
so interesting, because this is essentially a citizen having to create the satellite network
to point it back to understand what's going on on Earth. Like, that's ridiculous. Yeah. So
that's a perfect analogy. I mean, so Guillaume has this project called Algo Transparency.
which basically shows as much as it can, it scrapes YouTube
and it shows these are the things that are getting most recommended
and it tracks it over time so we can start to see trends.
But this is one human being, one very talented human being,
a civilian who's trying to create essentially a system of satellites
to monitor what's happening at the scale of 2 billion people.
This is just not the right way that accountability should work.
And I know that YouTube will often fight back at you
and be like, oh, but you don't have the latest data.
And that's sort of your point.
You're like, correct.
You do.
And they're trying to hide it.
Hey, this is Tristan.
What if we lived in a world where Exxon was the only company that knew how much pollution was actually out there?
Because they owned all the satellites.
That's actually kind of like YouTube and the pollution that is dumped into our public sphere.
Let's talk to Aza about amplification transparency.
So Guillaume has done the most research on YouTube, but of course the same engagement bias,
the enrage-to-engage, is happening, of course, on Twitter and Facebook and all the techno-social
platforms. Just like there's an algorithmic bias against race and gender, there's an algorithmic
bias against our values, and every time our values are pitted against engagement, our values
lose. And here's, I think, the most important point to remember, it's whether platforms are
choosing the content to amplify or choosing an algorithm which chooses the content to amplify,
they are still choosing. I think that choice has greater impact than the impact of any
major news organization and probably all of them put together, or to put another way, the platforms
are choosing what goes into the information soil from which all of our collective sense and
decision-making abilities grow. So that's a lot of power, a lot of responsibility for which
the platforms right now have no responsibility. Right. So amplification transparency is an idea
that we're interested in to do just that, to put back responsibility where it needs to be.
And the idea, at least for me, originally came from Guillaume.
Algorithm transparency is about being able to start teasing apart the question of like, why is the
algorithm doing this versus that?
And amplification transparency is saying, what is the algorithm doing?
Just give me the hard numbers of which content is being promoted so that we as civil
society can come together and decide, hey, is that decision in our values or is it attacking
our values in favor of engagement?
That's the difference.
And the idea I think is very simple.
Force platforms to expose an API where anyone in civil society can ask, hey, how much have you
amplified, recommended a piece of content now and historically?
And the platforms are required to answer.
It's just an API.
That quantifies and makes visible the platform's bias so that we can decide together where
their choices fit our values and where those choices attack our values.
It lets there be tens of thousands of Guillaumes.
So that's oversight that scales to the scope of the problem.
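As a sketch of what such an API could look like: civil society supplies a content ID and a date range and gets back how often the platform's own systems surfaced that content. The record fields and the client call below are hypothetical; no platform exposes this today.

```python
# Hypothetical amplification-transparency API; the fields and call are illustrative, not real.
from dataclasses import dataclass
from datetime import date

@dataclass
class AmplificationRecord:
    content_id: str
    day: date
    times_recommended: int          # how often the platform's recommender surfaced the content
    times_shown_from_search: int    # for comparison: views the user actively sought out

def query_amplification(content_id: str, start: date, end: date) -> list[AmplificationRecord]:
    """Proposed contract: ask the platform how much it amplified content_id between two dates.
    A real mandate would also specify the transport, e.g. authenticated HTTPS returning JSON."""
    raise NotImplementedError("no such public API exists today; this is the proposed shape")
```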
This is your analogy, Tristan, but if social media and tech platforms are a patient that has cancer,
we want to save the patient by just removing the cancer, but in order to do that, we have to be able to see where the cancer is,
or else you just cut out the stuff that's helpful to living and you kill the patient.
And the good news is for me that if we fix it, the results can happen fast.
Here's an example of how fast it can actually make change when you shut off the algorithmic hate,
and it's from that countrywide study across Germany by Mueller and Schwartz at Princeton.
And so I'm quoting from a summary from the New York Times, but whenever Internet access went down in an area with high Facebook use, attacks on refugees dropped significantly.
And that gives me a lot of hope because that says, oh yeah, we are confusing mirrors for screens.
And if we just make that difference apparent, we revert to being our real actual selves.
Right. I love that. Yeah. Replacing mirrors with seeing them as amplifiers, not mirrors.
Yeah. They're like funhouse mirrors. They're like distorting.
Exactly. They're funhouse mirrors. That's a good metaphor.
Look, by the way, YouTube is a fantastic and amazing product that provides life-changing experiences.
So just to say and make sure that we're all with you here in the tech industry, this is not an anti-tech conversation.
It's about what is this automated, attention-hungry, AI-powered system doing to history, to world culture?
The problem occurs when they have a self-dealing, extractive business model that says, instead of wanting just to help you with your goal,
we really just want to suck you down the rabbit hole.
And there's no reason why recommendations should be on by default.
Like, this is not saying you shouldn't be able to post ukulele videos or post health how-to videos.
This is about why are we recommending things to people that systematically tilt in the more extremizing directions that we know are ruining society?
So, you know, how do we actually regulate?
I mean, why not just not have the recommendations at all, except when you click a button specifically?
So the default setting is no recommendations.
Just like the default is to autoplay and you can flip that off.
Of course, almost no one does and YouTube knows that.
They say, oh, but we gave you a choice and it's your choice, but it's just all manipulation.
You know, they need to be much better about this.
Where is YouTube offering this lasting value, these lasting use cases?
And how do we strengthen all of those cases where it's helping more people with their health injuries,
helping more people learn musical instruments, helping more people laugh with friends?
There are so many good use cases, but it's not optimized for that at all.
So there are several classes of solution.
One class of solution is to optimize for better things.
So instead of optimizing for how much time you will watch,
which will lead to this like false medical information,
you optimize for good feedback.
Like, yeah, this YouTube video really helps me.
And this should, like, count, like, a lot.
And right now it doesn't because YouTube doesn't care that it helps you with your foot.
It just cares if a video keeps you online longer.
So there was this video with 8 million recommendations that says, like, don't eat table salt because there is glass in it, and it's hurting your intestines.
And this was very scary, so it worked really well, and it got 8 million views or something like that.
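A toy version of the "optimize for good feedback" idea, with invented weights: instead of ranking purely by predicted watch time, blend in explicit "this helped me" and "this hurt me" signals, so a scary-but-false video can no longer win just by holding attention.

```python
# Toy ranking objective blending watch time with explicit helpfulness feedback.
# The weights are invented for illustration, not tuned values from any platform.
def engagement_only_score(watch_minutes: float) -> float:
    return watch_minutes  # today's objective: attention is all that counts

def blended_score(watch_minutes: float, helped_votes: int, hurt_votes: int,
                  feedback_weight: float = 5.0) -> float:
    # Helpful feedback adds value and "this hurt me" feedback subtracts it,
    # so capturing attention is no longer sufficient to rank first.
    return watch_minutes + feedback_weight * (helped_votes - hurt_votes)
```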
And the problem is whether or not human authority is a good rating system.
I mean, Yuval Harari, author of Sapiens, and I talk about this, that this is a question about the breakdown of human choice and feelings.
because if enough people can be fooled into thinking that that table salt glass video thing
is true, then they'll self-rate that this actually did help me, because now I'm not eating
that glass.
There's this challenge, which comes down to back to a crisis of trust.
Like, who do you trust to provide these rating signals that distinguish between truth,
meaning, and fitness?
It's not just about truth.
It's about what provides meaning, what's also helping us survive.
It's complicated.
I mean, time well spent, that phrase and that idea, originally came
from a new class of ratings that asked not just what is maximizing the time we spend,
but what is of lasting benefit and value to our lives. Like a choice where, on your deathbed,
you would say, gosh, I wouldn't take those hours back from YouTube for a minute; that, you know,
totally changed my life, that was amazing, I laughed with those friends, I learned, you know, I played
the accordion, I learned how to play a few songs on the accordion from YouTube, I would embrace those
hours with a big hug. There's so much good that can come here, but we need to have a
totally different, almost like meta app that sits between us and all of this extractive
garbage, that helps us navigate just to these time-well-spent, lasting, humane, you know,
things that really recognize the things that make life worth living, and also recognize we're
human, you know, we're naturally brilliant at certain things, and make that happen more.
You know, we often hear the line about platforms just being neutral parties, which of course
is so intellectually dishonest, but I think most people don't realize we've baked in values
from the very beginning. So Google search: PageRank was sort of the algorithm that let them
rise, which is like, it determines how good a page is based not on the content
of the page, but on how other pages view it. It's a sort of social consensus of the internet.
And when they just let it run, originally it just found porn.
Finding porn, exactly.
Porn is the number one thing that is most authoritative on the internet.
According to that algorithm.
According to that algorithm.
Unless.
Unless they're like, all right, we're going to have to seed it and start it at MIT and Stanford.
And so they were seeding the entire way that human beings experienced our collective knowledge.
And they said, yeah, there's values.
There's some information which is better than other information, and they baked it in.
And now they just, we just pretend like we're neutral.
No, you can't be neutral.
Yeah.
It doesn't exist.
Yeah.
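For reference, a minimal power-iteration sketch of the PageRank idea Aza describes, where a page's score comes from the scores of the pages that link to it; the point being made is that even this "neutral" consensus measure embeds a value judgment about what counts as authority.

```python
# Minimal PageRank by power iteration: a page is "good" in proportion to how "good"
# the pages linking to it are. Standard damping-factor formulation.
def pagerank(links: dict[str, list[str]], damping: float = 0.85, iterations: int = 50) -> dict[str, float]:
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            if not outlinks:  # a dangling page spreads its rank evenly everywhere
                share = damping * rank[page] / len(pages)
                for p in pages:
                    new_rank[p] += share
            else:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank
    return rank

# Example: B and C both link to A, so A ends up with the highest score.
print(pagerank({"A": ["B"], "B": ["A"], "C": ["A"]}))
```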
So where is your work blocked
right now? What do you need help with? And how can people help you? Yeah, so I think, like, the main
blocking factor is this lack of public understanding of the problem, and the thinking that
Google has your best interest in mind. No, you should hold Google accountable, like if people
are ready to say, no, anti-vaccine promotion is not okay, or Russian propaganda is not okay,
and not having transparency is not okay.
So this awareness that we need this transparency,
that we need this accountability of algorithms, is, like, the main roadblock.
Once people understand that, then we can do all kinds of things.
We can give back more control to the user,
either with Google doing it or startups doing it.
I mean, startups can easily do it,
but if there is no awareness, nobody is going to use their products.
So just to go on this point about awareness,
because I think a lot of people think,
oh, raising awareness, that sounds important, but like it's not going to do anything. That's
not going to cause any change to happen. Let's talk about why awareness actually does make a
change. So we just introduced this phrase of human downgrading, which is the connected system
of problems, how it downgrades our attention spans, downgrades our civility, downgrades
our common ground, downgrades the quality of our beliefs and our thoughts, children's mental
health. When we have a phrase that describes the problem, instead of talking about, oh, like,
there's some bad videos on YouTube, we're describing
the problem not in a systemic way. It'd be like, instead of talking about all
of climate change, just talking about coral reefs all the time. People are like, eh, coral reefs are
kind of important, but like, is that such a big deal? Versus if you can talk about climate change
and how the whole system is moving together in this catastrophic direction, the first thing I think
people can do is if you just have this conversation three times a day, human beings respond to
public pressure when there's, you know, three times in one day, you hear it from a school teacher,
you hear it in your design meetings, you hear it in your product meetings.
If people say, are we downgrading society?
Are we downgrading the quality of people's beliefs?
And not saying that in an accusing way, even though it sounds that way.
What we're encouraging us to ask is, just like we saw with time well spent,
you know, time well spent and the attention economy and technology hijacking our minds
are three phrases that started to colonize the public discourse.
And now so many people are talking in terms like that that has led, you know,
along with political pressure, along with here,
to huge changes. And in the past, YouTube actually has responded mostly when their advertisers
get upset. So actually, we might want to put out a call to the heads of Unilever, P&G on this
podcast to be really aware of the systemic problem here. Right now, these guys, these CEOs of
Unilever and P&G, respond when there's a specific issue, like child pedophilia, right? Or there
was an issue recently, two weeks ago, of YouTube recommending videos of how to commit suicide to
teenagers. And when those issues come up, again, because of researchers like you, Guillaume,
who raise it to the press, then the advertisers respond. But what we need is the advertisers
to respond to the entire problem of human downgrading in a lasting and sustained campaign
and effort. Because if they do that, then these companies can't continue to do what they're doing.
And I think the whole purpose is we're in the middle of this kind of transition from the kind of
fossil fuel, fossil attention age of the attention economy where it's all extracted. We've got to
drill on this race to the bottom of the brain stem, frack your mind, split your attention into seven
different multitasking streams because it's more profitable. That's the extractive era. And that
era is over. And we're moving now to a regenerative era of the attention economy where we need
every single one of these companies, Apple if you're listening, Google if you're listening, Android
if you're listening. There's different players of different things they can do to move away from
human downgrading and move towards a more humane recognition of the vulnerabilities of human
nature.
And if we do that, we really do believe that that change can happen.
Definitely.
Yeah, there's so many things that can be done.
So we talked about optimizing different things, optimizing regenerative content,
optimizing, giving more control to the user.
You could, like, yeah, build in more metrics for the user to say, hey, this
content was very helpful, or this content hurt me in the long term.
You could report many more kinds of things and take that into account.
So there are a lot of solutions when people notice.
So it's a bit like fighting the cigarette and tobacco industry.
Tobacco, it took so long to raise awareness,
but at some point, when the media in the US focused on raising awareness about tobacco,
people understood, and then smoking became uncool.
So common awareness saves lives and will save America, I think. And just one last thing on
that is just the urgency. So, you know, those issues, tobacco are huge issues and took 60 years
to flip that around culturally. But in this case, when you realize the speed at which technology
is evolving, that that supercomputer is playing chess millions and billions of moves ahead on the chessboard.
Every year it's getting better. It's not moving at a slow timeline. It's moving in an exponential
timeline, which is why now is the moment, not later, now, to create that shared surround sound.
Even if I don't watch YouTube, I'm still surrounded by a media environment and people
that do.
And if everyone else thinks something that's going to affect me.
Or they vote for someone else. Even if I'm still going to vote for who I was going
to vote for, you know, we're all still affected by this.
And that's, I think, the main point to end on.
It's just that this is an issue where we're all in this boat together.
But if we put our hands on the steering wheel, we can turn it, which is what we have to do.
Guillaume, thank you so much for coming.
This has been a fascinating, fascinating conversation.
Thank you, Tristan.
Yeah, it's been such a pleasure, as always, Guillaume.
On our next episode, Aza and I talk to Renee DiResta, a researcher who investigates the spread of disinformation and propaganda, about how Russian campaigns have evolved with the Internet.
I hear a lot like, oh, it's just some stupid memes.
And it's interesting to me to hear that, because I'm like, well, you know, they were running these
same messages in the 1960s in the form of long-form articles. So the propaganda just evolves to
fit the distribution mechanism and the most logical information, you know, kind of reach of the
day. And so in a way, they should be using memes, in fact. That is absolutely where they should
be. And it's interesting to hear that spoken of so dismissively. A lot of us look at cartoons with
silly messages and block letters on the internet and can't imagine that content like that would ever
really affect anyone's opinion, much less their vote.
But Renee helps us understand that some of what looks childish and primitive on the internet
is actually the result of sophisticated campaigns by foreign state actors.
Did this interview give you ideas?
Do you want to chime in?
After each episode of the podcast, we are holding real-time virtual conversations with members of our community
to react and share solutions.
You can find a link and information about the next one on our website, humanetech.com slash podcast.
Your undivided attention is produced by the Center for Humane Technology.
Our executive producer is Dan Kedney.
Our associate producer is Natalie.
Original music by Ryan and Hayes Halliday.
Henry Lerner helped with the fact-checking.
A special thanks to Abby Hall, Brooke Clinton, Randy Fernando,
Colleen Hakes, David Jay,
and the whole Center for Humane Technology team for making this podcast.
And a very special thanks to our generous lead supporters
at the Center for Humane Technology who make all of our work possible,
including the Gerald Schwartz and Heather Reisman Foundation, the Omidyar Network, the Patrick J. McGovern Foundation, Craig Newmark Philanthropies, the Knight Foundation, Evol Foundation, and the Ford Foundation, among many others.