Big Technology Podcast - Amazon Rigs Search, Social Media 'Ampliganda,' Netflix Protests — With Adrianne Jeffries, Renee DiResta, and Zoë Schiffer
Episode Date: October 20, 2021. Join us for a 'mega' episode with three guests! The Markup investigative reporter Adrianne Jeffries leads off with a discussion of Amazon's self-preferencing in search. Stanford Internet Observatory's Renee DiResta joins for our second segment to discuss her story on bottom-up propaganda on social media, something she calls 'ampliganda.' Verge reporter Zoë Schiffer rounds out the week with a look into the state of worker activism at Netflix and Apple. Check out Adrianne's story on The Markup, Renee's in The Atlantic, and Zoë's on The Verge.
Transcript
Hello and welcome to the Big Technology Podcast, a show for cool-headed, nuanced conversation of
the tech world and beyond.
And, boy, we have a packed show for you today.
We're going to actually try out a new format.
We'll do some news at the top and then move to analysis in the second half with a new guest.
Our first guest today is going to be Adrianne Jeffries.
She's an investigative reporter at The Markup.
We're going to talk about this bombshell Amazon story that she and a fellow reporter at The Markup dropped.
And then in the second half, we'll have Renee DiResta come up and speak with us about what she calls ampliganda, amplified propaganda coming from the bottom up and not from the top down.
But first, let's speak with Adrianne.
Adrianne, welcome to the show.
It's great having you here.
Hi, Alex.
Thanks for having me on.
Yeah, thanks for being here.
You are the lead byline on one of two bombshell stories about Amazon this week that both focus on the company's private label brands.
These are the brands that come from Amazon that are fairly indistinguishable from the rest of the brands on the marketplace.
On the website, there have long been discussions of whether Amazon favors its own products.
Jeff Bezos famously kind of waffled on the question in front of Congress a few years ago.
But now we have proof that Amazon is not only preferencing
its products in search. It's also copying products for its private label brands, not in a haphazard
way, but in a systematic way, which is something the Reuters story got into. But let's talk about yours.
So you and your co-author decided to go and examine whether Amazon was preferencing these private
label brands, the brands that it has created on its own in search results. And it seems like
that you caught the company red-handed with some serious self-dealing.
So do you want to share a little bit about what you found?
Sure.
So we've been told by lawyers to avoid phrases like serious self-dealing.
So I'm not sure I can get that far.
No, no.
This is the fun part of the show, right?
I'll make the big proclamations you can walk me back and tell us what the truth is.
So go ahead.
Perfect.
Yeah.
So we did this story.
So my reporting partner is Leon Yin.
He's a data journalist.
And the way the markup works is we typically take a big story and assign a team of a reporter that's me and a data journalist to work together.
So I mostly did the reporting, talking to sellers, interviewing experts, and Leon did the data collection.
And then together we worked on the analysis and how to tell the findings in a human-understandable way.
Okay, but what did you find?
Yeah. So we started looking at this because we did a similar story last year about Google's search results, where we looked to see how much of the page was actually directing people back to other Google stuff. So this is like the OneBox answer or Google Maps or Google Flights. And we found pretty significant findings there, that they were taking up a lot of the page, especially at the top, to send people back to their own stuff. So once we finished that story, we were riding high. We were like, let's just do this again for Amazon. Amazon has its
own stuff. Amazon has search results. Everybody uses Amazon search. We'll do it. It'll take a
couple months. It ended up being a really complicated data collection project and took much longer.
However, we ended up with a similar finding that Amazon's private labels and its exclusives,
which together, that's a category that Amazon refers to as our brands, those were very frequently
in the number one spot. And they were often outranking products from competitors that had,
had more reviews and better star ratings.
Yeah, so that was the basic finding.
So if I'm getting it right, the finding is that Amazon has some products that are listed
exclusively on Amazon, so you can't find them elsewhere, and products that it creates
itself.
And when you search for the product categories, let's say a coffee grinder on Amazon, even if
the product has been on the market for a shorter time, even if it has fewer reviews, even if it has
less favorable reviews, when you do that search, you're going to find the Amazon product first.
Right. So this is, we don't know the exact mechanism by which this happens. We did talk to some
former Amazon employees who described a practice, also reported in the Reuters story, called
search seeding, where you can introduce a new Amazon label. They don't do this anymore, but for a couple
of years, this is how they were doing it. They would introduce a new product, say, Amazon Elements
baby wipes, and they would just set that product's score in the search results, whatever the
relevancy score is. They would match it to another popular product. So for their own baby wipes,
they said, these baby wipes should probably have about the same score as baby wipes from
Huggies. So they would set it up so that as soon as they dropped the product, the Amazon one would
appear at the top, right after the Huggies version. And that's something that third-party
sellers obviously can't do. If you launch a product, you start at zero and you have to
claw your way up to the top. So that was a really clear way that they were preferencing their
own products.
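To make that search seeding idea concrete, here's a minimal sketch in Python. It's purely illustrative: the product names, scores, and the idea of copying a reference product's relevancy score are assumptions drawn from the practice described above, not Amazon's actual ranking system.

```python
# Illustrative sketch of "search seeding" as described by former employees:
# a brand-new private-label product launches with its search relevancy score
# pinned to an established competitor's, so it ranks immediately instead of
# starting from zero. All names and values here are hypothetical.

relevancy_scores = {
    "huggies-baby-wipes": 0.92,   # established product, score earned over time
    "generic-baby-wipes": 0.41,
}

def launch_third_party(product_id):
    # A normal third-party launch: no history, so it starts at the bottom.
    relevancy_scores[product_id] = 0.0

def launch_with_seeding(product_id, reference_id):
    # The seeded launch: copy the score of a popular reference product,
    # so the new item appears right next to it in results on day one.
    relevancy_scores[product_id] = relevancy_scores[reference_id]

launch_with_seeding("amazon-elements-baby-wipes", "huggies-baby-wipes")
launch_third_party("new-seller-baby-wipes")

ranking = sorted(relevancy_scores, key=relevancy_scores.get, reverse=True)
print(ranking)  # the seeded Amazon product lands beside Huggies at the top
```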
We didn't find the exact mechanism by which that's still happening. They are
using what they call a merchandising placement, where they stick a product, often up to three
times on the page, but the number one spot is usually where it appears first. And it's an
Amazon product or an exclusive, and it says featured from our brands. And so we asked Amazon,
is this an ad? What is this? Is this a search result? And they said, no, it's a merchandising
placement, which is a phrase that they made up to describe this situation. So we don't know exactly
how that happens. But in the data, it came out very clear: knowing only whether a product was an
Amazon brand was enough to predict whether it was going to be number one, far more than any
of the other things we were able to measure.
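Here's a hedged sketch of the kind of check that finding implies: for each top search result, compare how well each feature on its own, Amazon-brand status, star rating, or review count, predicts landing in the number one spot. The toy data, feature names, and model choice are assumptions for illustration; the transcript doesn't detail The Markup's actual methodology.

```python
# Illustrative only: generate toy search-result rows that mirror the reported
# pattern (Amazon brands usually take the top slot regardless of stars or
# reviews), then test how predictive each feature is by itself.
import random
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

random.seed(0)
rows = []
for _ in range(2000):
    is_amazon_brand = random.random() < 0.2
    stars = round(random.uniform(3.0, 5.0), 1)
    reviews = random.randint(0, 20000)
    is_number_one = is_amazon_brand or random.random() < 0.05  # toy assumption
    rows.append((is_amazon_brand, stars, reviews, is_number_one))

y = [r[3] for r in rows]
for name, idx in [("amazon_brand", 0), ("stars", 1), ("reviews", 2)]:
    X = [[r[idx]] for r in rows]
    score = cross_val_score(DecisionTreeClassifier(max_depth=3), X, y, cv=5).mean()
    print(f"{name}: {score:.2f} accuracy on its own")
```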
Right. And so the argument against this, and the argument that will continue to be made, is that this is Amazon's store. And just like brick-and-mortar stores are
able to create their own products, think about Costco and the Kirkland brand, why can't Amazon
create its own and put them at the top? I mean, it is its own site. So where do you see the issue here?
Yeah, I think what some antitrust advocates will say is, like, okay, just because Walmart does it too doesn't necessarily mean that it doesn't hurt competition, that it's not having a deleterious effect on the economy and customers.
So that's the first response to that defense.
But also, there are some things that make it just different when you're on a platform versus when you are doing like a traditional brick and mortar sort of placement.
Like, well, on the Amazon platform, there's just a million different levers that Amazon can pull to try to push people towards certain products.
Whereas if you're in a store, they really can only do a couple things.
They can, like, put a big sign on it.
They can put it at eye level.
They can put it on an end cap.
But they can't do as much to direct the user through the user interface to buy a certain thing.
The other thing is that on a store shelf, you don't necessarily assume anything about the product based on where it is on the shelf.
Like, I'm not thinking, oh, because like this soap is on the top shelf, it's the best soap.
Whereas we surveyed people and found that they pretty much believe that the number one search result on Amazon means it's either best rated, bestseller, or lowest price.
And not that it's just an Amazon product.
So consumer expectations are really important for these kinds of questions, also in the eyes of regulators.
That's a good point.
The other thing that someone who's defending Amazon might say,
just I'm going to advance it for the sake of the argument,
and to be able to really discuss this in a full 360 way,
is that Amazon is a website that people go to.
And if they're not getting value out of Amazon.com,
they can always type in another URL.
So by placing its own brands at the top here,
Amazon's actually taking a pretty significant bet
that if they don't deliver for people,
they're not going to keep coming back to Amazon.
So what do you think about that?
Yeah, and I think all the antitrust arguments kind of hinge on whether or not people have a choice between Amazon and something else.
So the first step in any antitrust case is to establish what the market is and who the dominant players are and how much choice there is.
From talking to sellers, they pretty much feel like they have to sell on Amazon.
Conservative estimates, Amazon is 40% of retail, of e-commerce retail.
So it's a lot. And there are estimates that are higher than that. So that's kind of the first step is to say, you know, how much choice do you have if you're not getting good quality from Amazon.com? Can you really go somewhere else? If you're a seller, can you really make a living somewhere else? If you can't, then it's potentially a problem for competition under U.S. antitrust law.
Do you think people can go other places? What is your perspective on this?
Um, my perspective as somebody who lives in North Brooklyn is that Amazon is pretty dominant.
Yeah. But, uh, yeah, that's the kind of thing that antitrust regulators would have to
investigate carefully. And that's the sort of thing where Amazon would push back and say, you know,
you're measuring the market wrong. Like, you can't just have one big market for all e-commerce.
That doesn't make sense. Uh, we have lots of competition. Um, one funny thing that I enjoyed:
there was this big antitrust subcommittee report that came out of Congress last year.
And at one point, they asked Amazon for a list of their competitors, and Amazon submitted a list with lots of names.
And one of them was Eero, which Amazon actually owns.
So it's gotten to the point where they own so many brands that they may be losing track of them internally.
Yeah.
I personally think that the argument that Amazon is a monopoly, or that you can't go anywhere else, is going to be extremely difficult to prove.
You look at the increasing competency that a company like Walmart has, you look at the success
of Shopify and how it's empowering some of the same independent vendors to sell on their own
sites and use things like Facebook ads or Google to promote their goods as opposed to relying on
Amazon, you know, that's going to be tough in any antitrust case to prove.
I mean, but, you know, I think we are starting
to see some movement to rewrite the rules and actually explicitly ban this type of stuff
when it comes to the tech giants.
So also this week, we had a story, I mean, it's all over the news, but I'm just looking
at a headline in the Washington Post.
Senators aim to block tech giants from prioritizing their own products over rivals.
Right.
It goes exactly to this point here, which is that if you own a big platform, if a bill like
this passes, you will be prohibited from doing something like rigging the search
results in your favor. So I'm curious what your view is on that bill. I mean, it's a matching bill
in the House and the Senate. It's kind of gotten stuck in the House. Maybe I'm wrong about that,
but I haven't heard any noise about it ever since it came out. But there could be some momentum now
that it's in the Senate as well. So what do you think the chances are that something like this
passes? And do you think it will have an overall positive effect on the market and competition
if it does? Yeah, I think it's tough to place a bet on Congress passing
any laws. They just don't seem to pass a lot of laws. That's true. It's not their favorite thing to
do. Yeah. But there is a lot of attention and a lot of energy around the power that the big
tech platforms have. I think people have just started to feel like the amount of power they have
is getting really large and it's kind of scary, just sort of in a broad, ambient way. And
members of Congress are picking up on that. And then the possible impact on the elections also has
made elected officials, I think, feel threatened.
So, it is really encouraging.
This is part of a package of bills that were introduced in the House.
There are several that are aimed at curbing big tech power,
and this one seems to be the one with the most traction.
It's called the American Innovation and Choice Online Act, I think.
And, yeah, it says you can't.
Sorry.
Very catchy name.
Yeah, very catchy name. So, I mean, yeah, a law would be a much faster way to get any kind of enforcement on this stuff. Like, lots of people will say what Amazon is doing, what Google is doing with preferencing their own products is already illegal under antitrust law. But antitrust law has been very loosely enforced for the past couple decades. And the way to get something done,
without having to undo a lot of precedent with antitrust enforcement would be to
pass a new law that's explicitly aimed at this.
And yeah, the law seems to be, it does seem like it would make this stuff illegal
if they're playing dirty. Like, it doesn't say Amazon can't compete on its
website, but it says they can't...
Can't use data.
Yeah, can't use data it wouldn't otherwise have access to, to improve its products.
And I think that sort of goes to, and I want to get to the
Reuters story before we hop off, but it goes to the crux of the Reuters story, which
essentially showed Amazon trying to sell, on its own or through a partner, its own exclusive
or private label brands. And there being a lot of returns. And so Amazon basically looked at
competing, it was a T-shirt brand, competing T-shirts, and pulled the measurements off
the ones that were doing extremely well, transposed that onto its brand, and then all of a sudden
got a leg up. And, you know, it's a complicated thing because when you have access to a platform
the size of Amazon, you can turn basically nothing into a large business if you hit it right.
And so the company does provide opportunities for companies to do it. But there is this sense
among third-party vendors that as soon as you get too big, all of a sudden, Amazon is going to
figure out what makes you special, copy it, and then either put you out of business or knock you down
a few pegs. And until this past week, that was largely something that people were kind of feeling
in their bones, but we didn't have the documentation. And now we do. So I do wonder what you think
about that situation. Right. It does seem like you look at that and then you look at the copying,
you look at the fact that they're rigging the search results. And it does paint a different picture
of Amazon. Maybe that's what we come out of this week looking at. Yeah, I think so. That's
another thing that this American Innovation and Choice Online Act makes explicitly illegal:
using that secret data to inform their own decisions about what to launch.
And yeah, what Amazon did in India was pretty blatant based on this reporting by Reuters, which is they didn't publish the documents, but they quoted from a lot of these internal Amazon documents.
And it's pretty clear that there was a deliberate, strategic push to make Amazon's own private labels get more market share in India. And yeah, I think the data question is, you know, there's an analogy in U.S. history with the railroads. So this bill is looking at nondiscrimination. And the railroad analogy is like, you have the railroad. They own the tracks. They also own companies that ship things on the tracks. So they're like, oh, our coal, it's going to go first down the railroad. And competitors are going to have to wait until our coal has been shipped, and then you can ship your own coal. And that is kind of the crux of what's going on with Amazon: they own the infrastructure. Everybody else is building on that infrastructure. You know, there's lots of advice not to build on other people's
infrastructure because this is one of the things that can happen.
But it does come down to, like, what impact does that have on customers?
What impact does that have on competition and innovation and the American economy?
And that's something that there's definitely a lot of attention around.
So maybe we'll get an answer at some point.
Yeah, and we will continue to focus on that stuff on this show.
It's obviously a moment where all the big tech companies are facing questions about the way that they influence competition.
And those questions don't seem to be going away, no matter how slow Congress is moving.
So it's a key issue, something we'll keep focusing on.
But, Adrianne, thank you for coming here and sharing a little bit more about your findings and your reporting and what the implications are.
We appreciate it.
Of course. Thank you.
Thanks.
Coming up after the break, we're going to speak with Renee DiResta, technical research manager at the Stanford Internet Observatory.
She wrote a great piece in The Atlantic called 'It's Not
Misinformation, It's Amplified Propaganda.' And the story looks into why misinformation is actually
kind of a bad word, or a useless term in some ways, for all the bad stuff that
we see online. It's become this catch-all that's now meaningless. And she's going to talk a little
bit with us about how there are these bottom-up influence campaigns that are coming, not from
Russians, not from bots, but from within the house. And we're going to get into that and talk about
what it means for social platforms. So thanks so much for sticking with us through the first
half. We'll be back right after this on the big technology podcast. Hey everyone, let me tell you about
the Hustle Daily Show, a podcast filled with business, tech news, and original stories to keep
you in the loop on what's trending. More than two million professionals read the Hustle's daily
email for its irreverent and informative takes on business and tech news. Now they have a daily
podcast called The Hustle Daily Show, where their team of writers break down the biggest business
headlines in 15 minutes or less, and explain why you should care about them.
So, search for The Hustle Daily Show in your favorite podcast app, like the one you're
using right now.
And we're back here on the Big Technology Podcast for our second segment.
Joining us is Renee DiResta.
She's the technical research manager at the Stanford Internet Observatory, and she's out
with a great piece in The Atlantic that you have to check out.
It's called 'It's Not Misinformation,
It's Amplified Propaganda.'
This is the part of the show where we're going to get a little bit more into analysis
and deeper thoughts about the tech world.
And Renee, I love this because, first of all, you start off talking about how misinformation
is really a clunky word for all this suspicious and manipulated stuff we see online
and you coined this new word, ampliganda.
Why don't we start off by telling the story of Shahid Buttar, who
used a network of very real people to game some algorithms, get the hashtag Pelosi
Must Go trending, and spark a discussion that really spanned many communities. Yeah, so it was a
really interesting morning. We were doing some observation of the conversation around the
primaries. And in California in 2020, yes. So we'd been following along with a number of these
conversations, and what we saw time and time again was completely authentic accounts that would
manage to get particular things trending. This is just a form of networked activism. But the public
conversation about it, the kind of meta-conversation, often insinuated that there was something
inauthentic that was happening, that these were bot accounts, that they were in some way,
you know, they would kind of harken back to this idea of the Russian bots of 2015, 2016. And what we
were seeing at the Stanford Internet Observatory is that these were completely authentic,
very real, legitimate activists. And so this led to some interesting questions about whether this was...
Before we move on, so let me just make sure I have this right. So there are big trends in the 2020
primaries as we start to see congressional candidates make moves and maybe presidential stuff.
And so people who have this experience seeing the way that the Russian Internet Observatory...
er, Internet, what do they call them?
That's not you guys.
Not us.
Yes, didn't mean to imply that.
Anyway, so the Internet Research Agency has hijacked some of these trends and gotten stuff
moving on social media.
People see stuff moving on social media and immediately go to the bots.
But what you found was it was actually very real people who had found a way to gamify
these algorithms and put their talking points front and center.
Yes.
And this is just activism.
And so it's not misinformation.
There's nothing falsifiable about anything that is happening in this movement.
This is just real people expressing themselves politically across a variety of, you know,
kind of places along the political spectrum.
And so the interesting question becomes, you know, where is that line with coordination?
Where is that line with authenticity?
Where is that line with thinking about how we as a country, in our political process,
are incorporating these networked movements into our political conversation? And then the sometimes unexpected ways in which
the networked activism community intersects with the media as the media kind of picks up and
reports on the hashtags, generally speaking using things like 'people on the internet
are saying,' 'people on the internet are talking about XYZ,' 'XYZ was trending.' I worked at BuzzFeed. I know
those stories well.
The fact that something is trending is a story, which is in and of itself kind of remarkable.
It's a meta story.
It's not focusing on the substance.
And interestingly, when I was talking to Shahid, this is what he starts to say to me as I asked him about that.
So Shahid, who is a Democratic socialist, was running against Nancy Pelosi in the race for Congress in San Francisco.
And so the way that the California primary works, she had had a Republican challenger as well,
but it's the two top vote getters who go on to the general election.
And so in this case, it was Shaheed, who's a Democratic socialist, so to her left, and then her.
And so he remarks upon the fact that she won't debate him.
So there's not a lot of media coverage that he's able to get for his campaign.
He's talking about the issues that he cares about in Medicare for All and the kind of Democratic Socialist platform, you know, fossil fuel industry stuff.
Yeah.
And he pulls the classic move of, like, debating
an empty chair. He debates an empty chair, you know, and so he wants to get attention for his
campaign. He wants to get attention actually for his ideas. But the incentives for how we communicate
online are not to talk about the ideas. You're not necessarily going to get that hashtag about
this is my political plank on fossil fuels. That's not going to trend. Not a whole lot of people
are going to pay attention to that. But Pelosi must go? That's the hashtag that he and his team
kind of land upon. Well, that's kind of like a Rorschach test, right? Anybody can read anything into
that. And so it becomes something that a whole variety of different activist communities can
aggregate around. They can see what they want to in it. And they feel compelled to participate
from whatever political point of view it is that they hold in opposition, usually to Pelosi,
but we do see people begin to come in trying to reclaim the hashtag with content like Pelosi
must go straight to the White House to take over the presidency. You know, so there's,
her supporters are in there too at some point. And so all of a sudden the entire internet
was talking about this Pelosi must go, yeah, on this Sunday morning. And, you know, it was just
fascinating to watch it kind of hop around the ecosystem. And how did he get
it trending in the first place? Did he and his supporters coordinate on some Discord channel? And you,
you were watching this go down as... Yeah, I was watching it in real time. You know, we were
interested in looking for, actually, that foreign interference, those Russian bots that, you know,
that everybody thought would be much more prominent as they had been in 2015, 2016. And really,
the platform integrity teams, to their credit, they kill off inauthentic accounts very quickly
lately. They deprecate bots, so automated accounts don't surface in trends. There's the gray box
where kind of low-quality garbage accounts, people's sock puppets and throwaways, don't actually
surface. And so what you have instead is just these very highly activist online communities
that sometimes coordinate and sometimes compete. And that's kind of an interesting dynamic too.
So even though the Democratic socialist left that precipitated this hashtag has very little in
common with the kind of Trump right that picks it up, through that act of shared participation
in the hashtag, they elevate it to the attention of the public
as it hits number one on Twitter trends.
And for the ordinary public who has not, you know,
is not aware that this was kind of coordinated in the discord and some Twitter chats
and then picked up by other activists on the other side of the political spectrum,
they don't see any of that.
All they see is the hashtag.
So it looks like hundreds of thousands of people are very angry at Nancy Pelosi
on this random Saturday morning or Sunday morning in July.
So I think it's important to establish how did they get it trending in the first
place. So was it like everybody on this Discord channel that supported Shahid Buttar, who's the
candidate, said, on this day, at this time, we're all going to tweet Pelosi must go, don't use
more than two hashtags or the system will think you're a bot. And if we push it with enough
momentum, the system will start to see that something's happening here. And then others will glom on to it
and it becomes a story. That's exactly right. And so this is, you know, this is a marketing tactic.
This is an activism tactic.
There is nothing that is inauthentic necessarily about this tactic.
When people want attention on the Internet, this is a thing that they do.
And more often than not, you know, many of these attempts at engineered virality fail because the hashtag isn't appealing enough.
And it doesn't have that, you know, the je ne sais quoi that, like, makes people want to get up and, like, fight about it, you know.
But this particular one, Pelosi must go, it did.
Right. It had that, I can read into it my political beliefs. And I, a passionate person on Twitter on a Sunday morning, I'm going to add my contribution or retweet the people who were in there. And so it started off as this moment where if they could get critical mass, they would begin to climb the leaderboard. And then as it climbs the leaderboard, other people notice it, observe and begin to participate. And then it starts to kind of bounce around
the Twittersphere, and sometimes you see hyper-partisan media on one side or another
will write that 'people on the internet are saying' article.
Yeah.
And so the problem here is that the stuff that we see on social media that we think might
have shown up organically is actually something that's been put on a trending topic or
into our feed through a coordinated manipulation of the algorithm.
And we don't know because it's never labeled that that's what happened.
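As a rough illustration of the dynamic being described, here's a small Python sketch of a toy trend detector. It is not Twitter's actual algorithm; the velocity threshold, the window size, and the idea of counting unique accounts are all assumptions, just to show how a coordinated burst from fully authentic accounts can trip a trending signal.

```python
# Toy trend detector: a hashtag "trends" when the number of unique accounts
# using it in the current hour far exceeds its recent baseline. Purely
# illustrative; not Twitter's real logic, and all numbers are made up.
from collections import defaultdict

def detect_trends(tweets, baseline, spike_factor=10, min_accounts=100):
    """tweets: list of (account_id, hashtag) posted in the current window."""
    counts = defaultdict(set)
    for account_id, hashtag in tweets:
        counts[hashtag].add(account_id)
    trending = []
    for hashtag, accounts in counts.items():
        usual = baseline.get(hashtag, 1)  # typical unique accounts per hour
        if len(accounts) >= min_accounts and len(accounts) >= spike_factor * usual:
            trending.append(hashtag)
    return trending

# A coordinated (but authentic) campaign: a few hundred real supporters all
# tweet the same hashtag in the same hour, against a tiny historical baseline.
campaign = [(f"supporter_{i}", "#PelosiMustGo") for i in range(300)]
background = [(f"user_{i}", "#MondayMotivation") for i in range(150)]
baseline = {"#PelosiMustGo": 2, "#MondayMotivation": 120}

print(detect_trends(campaign + background, baseline))  # ['#PelosiMustGo']
```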
What I was thinking about as I wrote the article was not even that it was manipulative, actually.
It was the question of what is manipulative?
What is reasonable, what is normal today?
How do we think about this?
What kind of visibility do we want to have into Twitter trends?
What might make us have some better understanding of,
why a particular thing is put forth in a format that is designed to capture our attention and
make us want to participate. Is it engineered virality, which maybe engineered virality is
perfectly fine if it's done by legitimate accounts? I think that's actually one of the
really interesting questions that I think we need to be thinking about. But when you have those
moments, when you have the kind of the things that rise to the top of Twitter, I think that
there's an innate belief among much of the public that it's happening because groups of people
just happen to have a strong opinion on a particular day. Imagine you've had this experience.
You see a trend and you click in and you, you're curious, right? You're like, why is this hashtag here?
You go, you click in. You start reading the conversation. You go down a couple of rabbit holes.
You're trying to figure out, what did this person actually do? I've seen a couple of these because, you know,
one of the other groups that was really, really good at getting things trending, before Twitter took action
after January 6th, was QAnon.
So do you remember like Wayfair, right?
The, there was the story that Wayfair was trafficking children.
Right.
And this again, we have, there's some really interesting questions around this,
which is that, that trend begins to happen because a group of people come to the conclusion
that Wayfair is selling children, not filing cabinets.
And these get hundreds of thousands of engagements on Instagram.
They have evidence collages.
They've made all of these, these
snippets of the filing cabinet next to the headline with the name of a missing child.
Oh my God. I just don't know how you get to that. I just don't understand. I mean,
maybe it's because I'm out of touch with the internet. I just don't get how you pick Wayfair,
which is, like, you know, the most inconspicuous furniture company, right? To target them.
Maybe someone had a bad experience or a table came chipped, but to associate them
with child trafficking?
Yeah, let's hear your perspective on this.
Let me posit that it's actually a sincerely held belief, right?
How?
I think that those of us who think that it's crazy
or those of us who think that it's completely outlandish
are operating from a very, very different position,
whereas those people who have been immersed in QAnon lore
and QAnon mythology have been conditioned to see
this kind of content everywhere, right?
They firmly believe that every company,
every democratic politician, all of these people are in cahoots in some way.
And so then this inexplicable thing, a $13,000 filing cabinet named Samia,
and there had been a missing child named Samia.
And so somebody on the internet kind of connects the dots, so to speak,
in that Pepe Silvia, you know, kind of conspiracy-board way, the conspiracy theorist version of that.
And then they tweet it out, and then there's a receptive audience there.
There was a crowd.
Twitter took down 70,000 accounts in this QAnon network after January 6th.
But what these accounts were doing was they were constantly surfacing this content.
And so they were making things trend.
Wayfair traffics children and these other hashtags around Wayfair were trending.
Interestingly, this was happening at the same time as Shahid Buttar's Pelosi must go.
This was all right around the same time frame.
And the QAnon people saw the Pelosi must go trend climbing the, you know,
climbing the ranks in Twitter trending.
As they're getting up on Wayfair.
Yeah.
Yeah, and they start marrying the hashtags.
They start using both of the hashtags,
Pelosi must go and Wayfair traffics children.
They begin to create tweets that co-occur so that anyone who clicks into the Pelosi
must go hashtag trying to figure out what's happening there then is introduced to and
sees the Wayfair traffics children hashtags.
And so it's a way to kind of draw another audience to pay attention to their hashtag that
they are actively trying to make trend. And they get it trending once or twice, and then Twitter
starts to throttle it. And then they're convinced that they're being censored, because Twitter is now
taking action and saying, no, no, no, the Wayfair hashtags are no longer going to trend. This is a
conspiracy theory. And they curate the platform to remove that trend leading to the allegation that
Twitter is kind of putting its thumb on the scale by deciding what shows up in that in that trending
feed. And that actually, you know, that's the question, right? What belongs in that trending
section? If the platform is putting it forth and saying these are trends, these are things that
you should be paying attention to, what kind of information should the public have, understanding
the provenance, should Twitter curate out the wildly false or defamatory stuff? Or is that, again,
networked activism, just from a, you know, from a different community. And where is that, you know,
where are these lines? So this is, I think, the
interesting question. I picked Pelosi must go as my example for the story because I liked that
it hit all of these different groups. And if you read the article, it starts with Shahid and his
team of Democratic Socialists makes its way to Pelosi's Republican challenger, who is now out
of the race officially, but still had 300,000 followers, you know, enough following to put her own
spin on Pelosi must go, makes it to more of the, again, the QAnon Wayfair set come into the
chat, so to speak, and then we have the more traditional kind of pro-Trump right, again,
who stridently dislike Pelosi and they have their own spin on it. And so you see this entire
constellation of different accounts, but this is how messages spread today. And I thought I wanted
to try to get at that dynamic and start to actually interrogate the question of what should we
be seeing in trending? Trending captures public attention. Trending facilitates media stories.
it's not misinformation at all.
The Wayfair stuff, like, absolutely.
Let's clarify that.
But a lot of other hashtags, right.
This podcast does not believe that.
I know.
But that, I think that dynamic is what's so interesting as we think about how do platforms
curate, how do regulators, who are now very concerned about the algorithm...
Sorry, I got to ask you.
You know, we've been talking about this question.
What do you think they should do?
Twitter?
I mean, yeah.
Like, okay, you brought up earlier.
So I was saying this is manipulation.
You brought up earlier, maybe we don't classify this as misinformation.
You said there's a big question of how a platform should, sorry, you said maybe you don't classify this as manipulation.
And then you said there's a big question about how should Twitter handle this.
So, okay, we've definitely identified the problem.
What's your perspective on the solution here?
What does a platform do?
How do we think about this stuff? So I think there's a few things. So first, there's...
My preference, generally speaking, is to leave content up, right? I don't like the
takedown of accounts, I don't like the takedown of tweets. I think that unless there's, you know,
kind of direct harassment or incitement, it's actually usually just counterproductive. It just spins
up a whole second-order, you know, thread about censorship, and more people see the content than
would have if it had just been left up. But I think this question of
what constitutes a trend?
Like, what is Twitter's intended function with that feature?
Is it to inform the public about things that they potentially would want to pay attention to?
They are very personalized.
You know, even as we were watching this trend climb the,
climb the trending hashtags,
it was like a slightly different number for me versus some of my colleagues,
again, depending on how other things competed for our interests.
I think the contextualization that Twitter is doing now is a lot better, but in a funny way it really reminds me...
Where they give you some information about the trend underneath it?
Right, exactly. And a human editor is doing that, right? So there's a human in the loop at that point. And it makes me think a lot about how Twitter is actually moving almost in the opposite direction that Facebook went. Do you remember Facebook used to have trending topics?
Yeah.
And they killed it.
Yeah.
And they killed it because they had human curators in the loop who were trying to downrank the wild
conspiratorial trends that were happening.
And they got accused of being biased against conservatives.
Exactly, of anti-conservative censorship, because a lot of what was happening at the time
was the sort of, this was the age of, like, the Macedonian teenager fake news blogs.
And so a lot of what was surfacing, this was when fake news meant demonstrably false news,
you know, versus, you know, things they don't like on the internet.
And the dynamic that was happening there was the curators were actually throttling the stuff that was really way out there, or was coming from these garbage unknown blogs that just managed to, you know, kind of game the ecosystem and achieve these viral moments.
And that was processed as anti-conservative bias.
And what Facebook actually did was they removed the humans in the loop.
And then all of a sudden, all of this stuff, you know, I remember in the science section, there was like a witch blog.
It was like some Wiccan blog.
It had like the old GeoCities style, you know, like blinking GIF kind of stuff.
There was money to be made there.
You know, I'll say that there was a moment where I was like, Facebook has, what was it at that point, one point something billion people using it.
And I was like, what if someone just made a publication that all they did was published stories that were linked to Facebook's trending topics and could they make a go of it?
I thought about starting it.
Obviously, I ended up landing on Big Technology
a few years later, and I'm happy about that because Facebook killed the trending column.
But a lot of people saw the economic opportunity and then just, like you mentioned,
they spin up these, you know, bare-bones sites using maybe GeoCities or whatever it is,
even though I don't think GeoCities exists anymore.
It doesn't exist anymore.
Yeah.
Yeah, definitely.
GeoCities was the way to go.
Shout out GeoCities.
But, you know, and then they are in front of massive audiences with, yeah, anyway.
It's interesting.
Well, and so Facebook ultimately killed the feature, which you hit on.
And they killed the feature because it became, when there was no longer that human in the loop
curation, it just turned into whoever managed to game the virality, kind of, you know,
either with an insane headline or with clickbait headlines,
click farms generating fake engagement, throwing things into groups that were highly, highly
activist, again, it's this idea of networked activists who believe that their mission is to
share the truth. And those people would often be very, very active. I remember when we were
looking at the anti-vaccine movement, we called it the asymmetry of passion, right? People who
believe that there was a nefarious plot underway, you know, this conspiracy
theorist dynamic, they really believed that they were telling the truth to
the public. And so they were passionate about sharing. And I try to get into this in the Atlantic
essay also. This idea of the exhortation: you must participate. This is your moment. You know,
you are uncovering the truth or sharing the truth or spreading the truth. And so that it just,
you know, for Facebook, it got to the point where without the human curators who were accused
of being biased, they instead had this just wild nonsense trending all the time. And so they killed
the feature. Twitter's going in the other direction, right?
It used to have the bots, and then, you know, they did a bunch of rejiggering to make it harder to
get something to trend and came up with this idea of low-quality accounts, but now they're ultimately
moving towards that more curated model, right? Which then does open the door to a lot of human...
Yeah, probably human, eventually. AI will do it, right, at a certain point, probably, for everything,
I guess. I don't know. But sorry, so, yeah, how should... so are you suggesting that Twitter should
have their human curators, you know, find out when there's one of these coordinated manipulation
campaigns of the algorithm and then label that? Or what's the solution? I think it's really
difficult. I think this question of what surfaces and when, you know, should there even be
trends is one. You know, what's your take on it? Are you pro killing the trending column? Are you
for killing the trending column, or do you think it should stay? I wrestle with it, because I think
that the answer should probably be yes. I think that, yeah, I think that nobody really
misses it on Facebook anymore, right? It's not a thing that we feel that we've lost. But it is so much fun.
It is so much fun, and that's the thing. And it represents some of our worst impulses, right?
Like it does the main character thing where everybody teams up against one person and
tries to destroy their life, or, like, the stuff you're talking about.
Yeah.
And I think that's that's really the question, which is, is there a way either, either we think
about curation differently, right?
Which is, you know, the onus is on Twitter at that point to, you know, with some public
input.
Twitter's actually pretty good about kind of soliciting public input and testing things in
public.
Or, you know, there's been some questions about, do we
change the affordances in such a way where, you know, we stop seeing the metrics?
We stop seeing the number of likes.
We stop seeing the number of retweets.
And the question becomes if we can't see that something has a high retweet value, like,
meaning the number is already high, there's a momentum dynamic, right?
When you see something that has a lot of likes or a lot of retweets, you're sort of conditioned
because it has that high number to believe that a lot of people have engaged with
and to pay attention. And one thing that we see in our work on actual inauthentic networks,
like we do some work on, there are still Chinese information operations happening. And they use
clusters of accounts to like and retweet and reshare both their garbage troll accounts, but also
their kind of prominent public figure state media accounts. So they have inauthentic amplification
of real accounts, you know, oftentimes blue check accounts even. And so there's an interesting
dynamic there, which is the recognition that if 500 fake accounts like a real account,
like a minister of foreign affairs account, that tweet is going to look as if more people
are paying attention to it, which then when I see it in my feed, inspires me to think
that, hey, maybe this is something worth paying attention to. And there's sometimes a reflexive
action that happens there. You see something that's gotten a lot of retweets or, you know,
likes, and you add your own to it. So even though with any individual
act of liking or sharing we're not thinking of it in terms of the aggregate, in aggregate we're creating
momentum around this hashtag. In aggregate, we're creating momentum around this content. But this is
the agency piece, right, where we ourselves are active participants in this dynamic. And that was
the other thing I wanted to get at in the article that I wrote for the Atlantic, which was positing
that in some ways we talk about the algorithm and what should Twitter curate and how. To some extent, we should also
talk about what I call the affordances, the tools: our ability to share, our ability to
retweet, our ability to like, the capability that we've been given to participate in this
way, which is fundamentally different. I get asked a lot, or, like, I read this
Washington Examiner piece that was like, this is all the same, it's the same as it's
always been. It's not, in the sense that we now have phenomenal ability as individual people
to shape the way that narratives move. And when acting in concert, that ability is even more
pronounced. Yeah. And I like how you put it... And that's the part that I think is
understudied. Yeah. I like how you put it in your story. Sorry, I mean, I
didn't mean to cut you off. But I do want to read this part. You know, you talked about how
propaganda, you had propaganda that would come top down and control the masses from,
you know, large institutions, politicians, the church. And now you say social media has ended the
monopoly of mass media propaganda, but has also ushered in a new competitor,
ampliganda, right, which is the stuff coming from the bottom up.
So the idea that propaganda could only come top down is gone because the top
doesn't control the message anymore.
Now it's much more bottom up.
And you say it's the result of a system in which trust has been reallocated from
authority figures and legacy media to charismatic individuals adept at appealing to the
aspects of personal or ideological identity that their audiences hold most dear. That is a fascinating,
a fascinating paragraph and such an interesting perspective about this shift that we're seeing
where the propaganda is now coming bottom up. Can you expand on that a little bit? I mean,
I think that might have been what you were getting at. So the term propaganda, and I only got to
keep one paragraph on this, but dates back to the Reformation when the Catholic Church is trying to
evangelize and feeling that they're losing out to the Protestants. This is the age of the printing
press. And the Pope, Pope Gregory, has this exhortation, uses the word propaganda, which is a
particular form of a Latin verb, that carries with it a command. And it means to propagate,
meaning to spread. And the command is we must propagate the faith. We must propagate the true faith.
And he has this paragraph exhorting the bishops to do this. And he's really insisting that this is how
you bring people back to the one true faith, the one true reality. And I find, you know,
when you read things like that, and you can see the kind of parallels to today, everybody has
their one true reality. There are these influencers who are out there and their job is to really
reach the masses. And many of them have very large platforms commensurate with mass media,
in fact. And so there's this interesting dynamic of this kind of middle layer of influencers
who are not mass media; that control has fragmented. Propaganda, over
the years, came to be seen as a thing that governments did, right? The U.S. in World Wars I and
II, Germany, of course, you know, Nazi propaganda in World War II. The UK had a, you know,
had a propaganda office, this idea that states would manipulate the enemy public, but also that
states, governments and institutions and media would, what Noam Chomsky came to call
manufacturing consent. It's actually a reference to a phrase from Walter Lippmann in the 1920s,
I think, the idea of manufacturing the consent of the governed, which in the early days of
thinking around propaganda was that you could create one unifying narrative that people would
believe. And this was how you could nudge people in the direction of supporting a particular
policy. In the 1920s, that was seen as actually a function of good government. The government
was supposed to do that because how could the people be trusted to inform themselves? Now,
today we recognize that as paternalistic and terrible, but it took a while to get to that
realization. And so in Chomsky's book, he uses the same phrase manufacturing consent, but in a very
different connotation, right? And what he is trying to do in that work is expose the extent to which
there is this top-down control, he calls it the five filters, that shape what we see. And that model
of propaganda comes to be the prevalent one in the mind of the public, right? The idea that
propaganda is a thing that governments do to the people to create a unifying narrative and
control them. And yet at the same time, we have these older versions of the term to propagate
information, to spread information with an agenda. And the only reason I thought, okay,
maybe we need a new term is not because I wanted to make a new term. I actually usually
think that's really cheesy. Yeah. You don't strike me as the Tom Friedman type. So when you
came up with a new term, I'm like, okay, let's listen to what this is all about.
I felt that there was a need to point to this dynamic by which now the public has the capability
to do that. The public has the ability to create content. The public disseminates content. The public
amplifies content. And that is so fundamentally different. That is such an inversion of the old power
dynamic, because this is fundamentally all about power, right? It's about who shapes the narrative,
who controls the public perception. And that, I think, is really what is so remarkable about what's
happening today, and it's this fundamental inversion, and we see it deployed, particularly in
topics like elections, right? And this is where the manipulative, the truly manipulative side
comes in, not with something like Pelosi must go, but when you get into things like hashtag
stolen ballots, hashtag stop the steal, where the material that is being put forth is
false and manipulative and also highly resonant because the people who are going to
amplify it trust the influencer and they're unified in their community. And that is also what
we see with the QAnon dynamic as well, that idea of the tightly integrated community
where the sort of body of facts is actually sort of fundamentally outside of the other body
of facts. So this is where, again, that reference to the idea of the one true faith or the, you know,
the one unifying narrative, that's no longer a possibility. Now we have these competing,
these many, many competing narratives, many, many competing bodies of fact. And that I think is
what's most interesting about what's happening today and what, you know, what networked activism
is evolving into. Yeah. And so let's, there's one last thought I want to end on, which is that
one of the things that's been interesting. So Eugene Wei has this great post called status as a
service. And he talks about how the power users of social media will
use it in order to achieve status they might not have elsewhere. And it talks a little bit about
how social networks at the beginning will start and people will try to build status there.
And if they find that they're not getting it, they'll go elsewhere. And thinking
about that framework, by the way, I recommend
folks go read that post. Also read Renee's story. But what I find really interesting about
the activism that you see on social media is that it almost always comes from the fringe, because
if you have status elsewhere, you don't need to engage in these campaigns. So it's interesting
when you talk about this, you know, long shot challenger to Pelosi. It's like, oh, yes, of course.
You know, that's the type of person that's going to end up using these techniques because, like you
mentioned, you know, you talk about these folks that are passionate about this stuff. It's like,
yeah, they're going to be passionate because they can't get any, any other way
to break through otherwise.
So understanding that this is mostly going to come from the fringe and these are very
powerful tools, what do you make of that combination?
Some people might say that's good, where we're going to bring voices to people that didn't
have them otherwise, but some others might say it's bad because, as you mentioned, you're going
to end up just getting this, you know, massive amplification of stuff that's, you know,
false and potentially damaging.
Well, when I think further out, I do think that that rising voices is good, actually.
I do think that that proliferation of opinions is good and is where we need to be.
I'm curious to see if actually the sort of moderate center begins to play in the same game, actually.
I am curious if they begin to realize the extent to which the poles are doing it and start to try to participate in that regard.
It's hard to imagine, like, you know, a trend, you know, people coming together
to get the hashtag Pelosi is just okay, you know, trending across social media.
I know, I know.
It's really the, again, it's that asymmetry of passion.
You know, to your point, some of the earliest things that I looked at were, again,
anti-vaccine activists. The media rightly stopped covering conspiracy theories
alleging that vaccines and autism were in any way related.
And so they turned to social media where they could, you know, kind of
commandeer, you know, other people's hashtags with this theory.
Again, that same way that we saw Wayfair come into Pelosi Must Go.
They used to put forth these kind of anti-vaccine hashtags into Black Lives Matter, actually.
It was one of the places that they kept trying to stick them in.
And I, so I think that potentially the center does begin to participate in some way.
Maybe we move into hashtags that are more policy-oriented or express some form of fact.
This might be like a total pipe dream.
It probably is.
It's my own dream. We all need a dream. Yeah. I think, well, when I was watching the vaccine
stuff play out in the early days, in 2015, I felt like it seemed impossible that tech platforms
were going to change anything. Because you may remember, in 2015, this was also when, like, the
ISIS accounts were all over Twitter, right? And we were letting terrorists recruit on a platform,
you know, and calling that free expression, because, you know, what a slippery slope if you took them down,
you know, what even was next?
And so it seemed at the time like platforms weren't going to change,
but then we did see actually really material changes.
And it's an interesting adversarial environment
where anytime the platform changes the rules of the game
by changing the structure, the policy, you know, the policy...
So we have a saying in some of our work at the Stanford Internet Observatory:
policy shapes propagation, right?
The rules that you set, your determination, your policy, determine what people see, determine
what people pay attention to. And so there are arguments for, is there going to be a creation
of, for example, like a public interest internet, Ethan Zuckerman's doing some thinking on that.
There's some people who are trying to think through this now, how do we regulate the algorithm,
which is an interesting, you know, interesting complex challenge. Should it be regulatory?
Should it be self-regulatory? There's a variety of arguments
kind of for and against those positions.
There are people, you know, interested in things like CDA 230 amendments that take into account
whether a platform proactively pushed something to someone, thinking about amplification of
speech as opposed to carrying of speech. You know, is there a line: should platforms
carry the content but not boost the content?
I actually do, I'm very sympathetic to the idea of carry the content, but don't boost the content.
I'm slightly less sympathetic when it comes to how we incorporate CDA 230 protection into that dynamic.
I think there are certain things that are particularly hard to regulate since this is speech and expression.
And so I think ultimately there is going to be a lot of education and norms building that's going to happen.
I think that making people perhaps a little bit more aware of how things work,
I would occasionally ping people on Twitter who were alleging that they were being censored.
And I've asked a couple, you know, asked a few of these accounts,
why do you think you're being censored?
What is the reason for that?
Because these are very small, ordinary people accounts, not big blue-check influencers
who Twitter would potentially throttle in some way.
And the response is, well, my friends don't see all my content.
And so the idea that there is an algorithm curating what goes into your feed is something that I don't know, actually, that the entirety of the public really understands. There's such a glut of content that when you take out your phone, that kind of pride of place, those top six tweets that show up right when you open your app, that is selected for you because of a belief that this is what you are going to be most likely to respond to, for a variety of different reasons. I think it had created an impression among a lot of these folks that Twitter was censoring them deliberately because their friends weren't seeing their stuff.
And I thought that's a really fascinating interpretation of, you know, the idea that a curated
feed is somehow silencing you.
And so I think that that education is also really key, helping people understand
how, you know, how and why platforms decide to surface what they do.
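To make the curation-versus-censorship point concrete, here is a minimal, purely illustrative sketch of an engagement-ranked feed. It is not Twitter's or any platform's actual algorithm; the `Post` class, the `predicted_engagement` field, and the `curate_feed` function are hypothetical stand-ins for whatever model a real platform uses. The idea it shows is the one described above: when only the highest-scoring posts get that "pride of place," lower-scoring posts are simply ranked out of view, which can feel like silencing even though nothing was removed.

```python
# Illustrative toy example only: a "curated feed" that ranks posts by a
# hypothetical predicted-engagement score and shows just the top N.
from dataclasses import dataclass


@dataclass
class Post:
    author: str
    text: str
    predicted_engagement: float  # hypothetical model output in [0, 1]


def curate_feed(posts: list[Post], top_n: int = 6) -> list[Post]:
    """Return the top_n posts, ranked by predicted engagement."""
    return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)[:top_n]


if __name__ == "__main__":
    timeline = [
        Post("friend_a", "photo of my lunch", 0.12),
        Post("friend_b", "big life announcement", 0.91),
        Post("friend_c", "niche hobby update", 0.08),
        Post("friend_d", "viral joke", 0.87),
    ]
    # Only the two highest-scoring posts are surfaced; the others are never
    # shown to this user, even though they were never taken down.
    for post in curate_feed(timeline, top_n=2):
        print(post.author, "->", post.text)
```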
Yeah. Well, look, I always love to end on a dose of optimism. So no matter how far-fetched, I'm really glad you brought it up. And Renee, I appreciate you joining today.
Thanks so much. I appreciate you having me on. Definitely. All right, folks, make sure to read the
article. It's on Atlantic.com. The title is "It's Not Misinformation. It's Amplified Propaganda." I'll also link it
here. And Renee, I'm sure that the type of work that you're doing isn't going to go out
of fashion anytime soon. So please come back. Thanks so much. Well, we're going to have one more
guest, surprise guest here at the end of the show. Zoe Schiffer from The Verge is coming on.
She's been doing some great reporting on how Apple and Netflix have fired activists inside their companies. In the past, it seemed like companies like Google would be willing to listen to people. It seems like the tech companies have learned from that experience, and now they're showing activists the door.
So we will be right back with Zoe for another few minutes here on the Big Technology Podcast, right after this.
Welcome back to one final segment here on the Big Technology podcast.
Well, look, it's been a crazy run of news, and there's a story that I think we have to cover before we sign off today.
And that is that both Apple and Netflix, over the past
week have fired organizers of worker activism movements. Now, you might remember that Google
had a very large and intense worker activism movement. And the company listened to its activists
in the beginning. It spoke with them. And then eventually, it pushed many of them out.
And it seems like Apple and Netflix, which have nascent worker movements, aren't taking any chances.
At least that's the perspective that I have reading the stories. Why don't we go to
someone who's been reporting and writing the stories. Zoe Schiffer is a senior reporter at
The Verge, first-time guest here on the Big Technology Podcast, but certainly, I hope, not the last.
And she's published a couple of these stories, just a few hours between one and the other.
And it seems like it's all happening at once. So I appreciate her coming on. Zoe, welcome to the show.
Thanks so much for having me. I'm excited to be here.
It's great having you. So let's just talk about it. Apple, Netflix. They both have worker activists. Now they've fired some of the leaders of those movements.
And usually there was a process where this took a little while before, you know, a company found
some creative way to push people out. But both these companies and Amazon have just said,
we're going to sort of nip these movements in the bud. What's going on, and what's your
perspective on what's happening here? Yeah. You know, I think that a lot of these companies
looked at what happened with Google in 2018 with the walkout and said, we don't want to be Google. We don't want to let it get that far. We're going to nip this in the bud a little bit. Particularly at Apple, you know, it's historically been so top-down, so hierarchical, that it's highly unusual to see employees speaking out like this and creating a platform, the AppleToo platform, where other people can submit stories of harassment and discrimination. And we had two very public faces of that movement, a woman, Cher Scarlett, and another one, Janneke Parrish. And Janneke was fired just this morning.
So I think we're seeing Apple's...
Yeah, and this is Friday morning. We're going to delay it a couple of days, so that was last Friday.
Yeah. Sorry, go ahead.
Yeah, exactly. On Friday. Yeah, I think we're just seeing the limit of what Apple will tolerate. They're using very similar excuses to Google in terms of,
like the reasons that they're giving, things like violating data security, confidential information,
etc. But I think internally the sentiment is this is retaliation for organizing. Yeah. So before we go on to
Netflix, what's AppleToo?
So AppleToo is this movement that came really out of the remote work advocacy channel.
So internally, employees at Apple were very unhappy with the idea that they were going to have to return to the office.
There was a ton of advocacy happening around that, sharing stories with Tim Cook about how it would impact people's lives if they were forced to return to Cupertino.
That kind of spawned a whole bunch of worker organizing, people sharing stories about how they'd been treated internally.
And it finally kind of coalesced into this platform called AppleToo.
It's essentially just a website with a forum where employees can submit stories about what they've experienced internally.
And then two women, two employees, have been sharing those stories on Medium.
Okay. Wow. And so a lot of this organizing happened on Slack originally?
Yes, on Slack. And then it's being shared out on Twitter primarily. And then on Medium, there are a few different channels.
Yeah. And now one of those two organizers is out.
Yeah, and the other one is on medical leave.
So it's looking like, you know, both.
They always put people on medical leave.
I've heard this before where like someone has a problem with the company and the HR will be like, why don't you take medical leave?
Just to sort of, it's pretty unbelievable.
I mean, it's wrong.
Yeah.
Yeah.
The Google organizers have been speaking out today saying, oh, this is the Google Playbook.
And I think that that's really clear that there's a playbook these companies use.
Yeah.
Okay.
Netflix.
What's going on there?
Okay. Stop me if I get too verbose here. So Netflix, this actually started a few years ago.
Trans employees have been meeting with executives for years telling them which content might be transphobic, harmful to the trans community.
In the past, they felt like these conversations went pretty well. If something was going to be super contentious, it would have kind of a disclaimer at the front saying like, hey, trigger warning, this might be offensive to certain groups.
With the Dave Chappelle special, they actually did meet with executives prior to it coming out.
They said that the jokes in the special were transphobic.
Dave Chappelle had really doubled down on some of the previous jokes he'd made about the trans community.
And executives essentially said, we hear you, thanks for your input, it's going up anyway.
When the special came out, a whole bunch of trans activists and LGBTQ groups started speaking out about how it was transphobic and dangerous.
And this kind of prompted employees to start speaking out externally as well.
Right. There was a very viral tweet thread.
Exactly. This woman, Terra Field, a trans employee, did a thread talking about her experience at the company and her kind of views on the special, and it went viral. She was then suspended. Netflix said it was because she attended this director-level meeting that she wasn't supposed to. There was big public outcry. She was reinstated. And then just today, the company fired an organizer of the walkout. Employees had been saying, on October 20th, we're going to walk out of work. Now the leader of that walkout, who was also the leader of the trans ERG, has been fired from the company.
Right.
ERG is employee resource group.
Yes, thank you.
But Netflix, well, you know, we've just got acronyms galore.
So anyway, Netflix says that they fired this activist because they suspect they leaked the actual cost, the amount of money that they paid for the Chappelle special. If that's the case, isn't that a fireable
offense?
I mean, so it was unusual that the numbers leaked. There was this report in Bloomberg that said how much Netflix...
Very rarely happens.
...how many people it had reached, how much they'd paid for it. I think the number, the price tag on it, was like $24 million. So it's pretty eye-popping.
The employee had shared those metrics internally as part of their job. And this is not unusual for
employees to do. Netflix has this culture of open transparency, open dialogue, et cetera. So there's a lot
of numbers being shared around.
Yeah.
To leak externally was highly unusual.
You know, if the employee did it, then I think Netflix has more grounds to fire them.
But I don't think that that's the view of many employees who I've talked to who say,
okay, this employee did share stuff internally, but we do not believe that they were the
person to leak it externally as well.
So I guess, like, you know, we're thinking about Apple pushing out one of the organizers. This is happening in a moment where tech giant employees have felt empowered to stand up and protest.
And Google obviously very publicly took, you know, a big hit.
I guess if you ask Google executives, the workers might be happier about what happened.
But definitely a lot of executive time and resources went into addressing some of the concerns of their workers.
What do you think it means for tech activism now? You know, we're far beyond, like, the Google walkout, which was seen as somewhat unprecedented, and now almost all the walkout organizers have either left Google or been pushed out of the company.
Amazon has fired some of its, you know, whistleblowers or, you know, whatever you want to call them, employee protest leaders.
Apple, of course, has brought down the hammer and now Netflix.
Is this the end of employee activism inside tech companies?
I mean, you know, at a certain point, you do wonder, you know, it seemed like Google employees felt empowered to speak up because they had been taught that in their culture, you speak up and that's fine.
But now is the signal being sent and there's a message being heard that employees just can't do this inside big tech companies?
What do you think?
I think there is going to be a chilling effect of the hammer coming down on these individuals, but I think we have to differentiate between the individual outcomes and the
systemic outcomes. You know, a lot of these individual employees have been pushed out. They have
been fired, but they haven't been silenced. They're continuing to bring charges through the
NLRB, and particularly when we're talking about the fired five at Google, we're seeing those
cases wind through the courts right now. And really, they could create precedent that would impact
the entire tech industry. So, if you think about Amazon, you know, they had this big union vote in Bessemer, and the NLRB said that some of Amazon's tactics there were likely illegal.
There's going to be another vote.
So I think that we still have yet to see what the systemic impact of all this activism will be.
I think it is definitely true that there will be individual consequences, unfortunately.
But I don't think there's any, like, stamping out the movement completely.
I think employees are organizing, they're speaking out, and they're seeing that when they tell the truth on Twitter, when they talk to colleagues on Slack, that they get a lot of
support and they can kind of create their own platform so that they cannot be silenced completely.
Yeah. What do you think the companies want? Because it seems like, you know, they'll push out people on, I don't know, on the left who are activists. It seems like they'll push out people who have, like, conservative views, or, you know, views that are unpalatable to the left, like Antonio Garcia Martinez at Apple. So, do the tech companies just want employee bases that will sort of, you know, have these kind of milquetoast political views and sort of leave them alone and allow them to run their businesses?
I mean, it really depends on which tech company we're talking about.
We've seen organizations like Coinbase that say very explicitly, we don't want you to have
political discussions internally.
And then on the flip side, you have organizations like Netflix that say, we really value
employee dissent.
We really want you to speak back to executives.
But I think what they're seeing now is kind of the limits of that ethos. Like, we really don't want you to ever leak externally. And it's kind of untenable to ask employees to voice their opinions, but then, when they start voicing them a little more broadly, to fire them.
Yeah. So there are two philosophies that are emerging in terms of, like, what's the
best way to run a company. One is allow for, you know, political discourse and dissent, you know,
and that's some of the stuff that you've seen inside Google and potentially a little bit of
Netflix, although they've pulled back a little bit. And the other one is, don't speak about politics
at work, which is the Brian Armstrong Coinbase theory, focus on the mission. And if you
talk about politics, we don't pay you for that. Go do that somewhere else. So I do
wonder, what do you think the best approach is? I mean, is there an optimal way to do this?
because the Brian Armstrong perspective, you know, as we see some more activism taking place inside
companies, has won a lot of fans. And so curious what your thought is on this part. Yeah. I mean,
I personally believe that it's a fallacy to think that you can keep politics out of the workplace.
Like the very nature of that company is political in some ways. And I think we have to remember
that what we're seeing from Apple, from Netflix, from Google is a breakdown in the communication
internally. Many of these employees tried every avenue internally to have their voices heard
to have an open discussion. And they only spoke out externally when they felt like executives
were not listening. And so I think there has to be a way to have those political discussions,
to have an open debate, but to have more responsiveness from executives and more accountability
to the employees so that there's an ability to kind of take care of some of these issues
internally first.
Yeah.
Sometimes that might not be feasible.
Like I imagine there were some real discussions about that Chappelle special before it went
out.
There were.
Absolutely.
And there's going to always be part of the employee base that's just not going to be satisfied
in the company's answers.
So it's tough stuff.
But again, like, you know, for example, a lot of what we're seeing with Apple right now is about the leaks. A lot of what executives were extremely upset about was an all-hands meeting that leaked in September to The Verge. The contents of this all-hands meeting were incredibly benign. Like, as a reporter, I was like, this is very boring. There's no news here.
But Apple made an enormous deal about it. And I honestly think it shot itself in the foot a little bit
because it lost trust with employees by clamping down so hard on these leaks that weren't actually
really leaks, almost just to make a point. And I think internally, employees are feeling like, I didn't leak the designs of the next iPhone. I would never do that. I'm just speaking out about, like, you know, these policies that I don't believe in. And the reaction feels so outsized.
Right. Life in Fortress Apple. And then Tim Cook wrote an email to employees, you know, about how they shouldn't be leaking, and that leaked as well. Did you
publish that one? Yeah, I did get that one. Yeah, that was a wild one. We'd love the leaks about the leaks.
I understand how people inside companies, you know, might feel negatively about leakers.
But I think you're really right.
Like, it does come down to can you have a productive discussion inside?
And generally when stuff, you know, goes as wrong as it does, it is a product of, well, I don't know.
I was going to say it's a product of communication breakdown inside.
I agree with you.
But also, then I think about the Chappelle special. And, you know, it's a tricky one. But there was a breakdown there as well.
You know, there were those initial discussions, but also, employees have this open
Q&A document where they can submit questions and historically executives have been really good
about being in that document and responding to people. And there, a few days went by and there
weren't responses. There were a lot of questions that I saw firsthand, very, very thoughtful
questions from like many, many employees that just were sitting there.
And I think people felt like they were being ignored.
Do you think that, and let's just end on this one, because this is a sort of interesting question.
I mean, the Chappelle special, I think we'll be talking about that and its impacts to Netflix for a very long time.
And there have been some strong headlines about it, like the Times wrote that Netflix has lost its glow.
Weird way to put it.
Do you think that there would have been a way for Netflix to reconcile with its employees and still run that special?
Because from my perspective, it does seem like there was this kind of line in the sand
where some employees were like basically the only way we'll be happy is if you take it down.
Like, was it a matter of communication or was it a matter of content?
I really think it was a matter of communication.
One thing that I know from talking to tons of Netflix employees over the past couple weeks
is that no one I spoke to wanted the special to be taken down.
And they weren't asking an executive for it to be taken down.
Not one of the questions that I saw in that open Q&A document was, can we take it down?
It was really, can you talk to me about our line between hate speech and commentary?
Can you talk to me about how the decision was made?
How are we going to help employees who feel unsafe?
So they really wanted an open dialogue.
And I don't know if really the result that they were intending was for the special to be taken down.
And as we've seen, executives have not taken it down.
Yeah.
Okay, but that is fascinating.
It is, I think we would all benefit from a conversation.
like the one that they were advocating for.
It is interesting how Netflix has allowed the culture war to play out on its platform through comedy.
And I don't know, to me, it just seems like comedy is a good venue to have these discussions.
But it's tricky stuff.
It's hard.
It's hard.
There's, you know, I mean, I read something this week about how you need to be able to laugh at yourself.
And I certainly see that viewpoint.
And I could also see the viewpoint of the people inside the company who might have felt hurt by some of the lines or wanted some clarification from executives.
Yeah.
Okay.
Well, that's it.
We're being told by the buzzer to wrap it up.
So big thank you to you, Zoe.
Keep up the amazing reporting.
I feel like I've learned so much from your stories about what's happening inside these companies.
And people can find you.
Why don't you just shout out, like, the URL or your handle on Twitter so people can find you?
Yeah, it's just my name, Zoe Schiffer on Twitter.
Okay, great.
Or on the verge.
Okay.
Well, we'll continue to watch these stories. For sure, this isn't the end.
You know, so it'll be fascinating to see how it plays out, especially Netflix.
That stuff's wild.
No doubt.
Okay.
Thanks so much for having me on.
Thank you for being on, Zoe.
And thank you, everybody for listening.
It's been a long one today, but hopefully a productive one, one you've enjoyed. If you have any thoughts about how today's show has gone,
please feel free to email us, Big Technology Podcast at gmail.com. I'd love to hear your feedback,
especially with the new format. And I just want to say thank you once again for listening.
Thanks to Nate Kowatni for editing and mastering the audio. Thanks to Red Circle for hosting and selling the ads. And thanks to all of you, the listeners, most important of all.
We appreciate you being here every week. We'll see you again next Wednesday with a new show with tech insiders and outside agitators.
Hope you have a good week.
Until then, take care.