Big Technology Podcast - Google Protest Leader Meredith Whittaker on the Future of Tech Activism and the Deep Flaws of ‘The Social Dilemma’
Episode Date: October 21, 2020
When I interviewed Tristan Harris about The Social Dilemma earlier this month, my mentions filled with people saying, "You should speak to the people who were critical of the social web long before the film." One name, Meredith Whittaker, stood out. An A.I. researcher and former big tech employee, Whittaker helped lead Google's walkout in 2018 amid a season of activism inside the company. On this edition of the Big Technology Podcast, we spoke not only about her views on the film, but about the future of workplace activism inside tech companies in a moment where some are questioning if it belongs at all.
Transcript
Hello and welcome to the Big Technology Podcast, a show for cool-headed, nuanced conversations of the tech world and beyond.
Today we're coming to you live from the World Summit AI, so it's a little bit different from our typical recording, but it's going to be a great conversation.
Joining us today is Meredith Whittaker. She's the co-founder of the AI Now Institute,
a former Google employee who founded Google's open research group and played an instrumental role in Google's walkout,
which protested payouts to executives accused of sexual harassment,
as well as, you know, a lot of other activism inside the company, which we're going to talk about at length in the second segment.
She's also the Minderoo Research Professor at New York University.
Meredith, welcome to the show.
Thank you, Alex. Really happy to be here.
So let's get started with The Social Dilemma.
Actually, the way this interview came about is that I conducted an interview with Tristan Harris
talking a little bit about the criticisms of the Social Dilemma movie.
And then my mentions filled up with folks saying, hey, you know, why are we asking Tristan?
Why don't we, you know, ask some folks who have been critical of these things for a long time?
And your name was mentioned.
And that's what brought us together.
And I'm thrilled to be able to, you know, to speak with you about this stuff.
And I think The Social Dilemma really is, you know, on many levels, a criticism of
the attention economy today, talking a little bit about how algorithms
at Facebook and Google, you know, will end up inflaming tensions and keeping us locked to our
screens, and how the business model ultimately is bad for humanity. So though we're going to be
talking a little bit about The Social Dilemma, this will be new material for folks listening
in on the podcast and hopefully for people out here
at the conference as well.
So let's begin talking a little bit about it.
It seemed like your perspective is a little bit different from Tristan's.
You're both ex-Google employees, but I'd love to hear like what your take is, you know,
on the film and its main message and where you think it may have gone right and gone wrong.
Great.
Yeah.
And I want to, I will start this by kind of a high-level framing, which may echo some of the
comments you saw in your mentions.
I think one of the significant weaknesses with the film was that
it sidelined and didn't give a platform to a lot of the people who have been researching and
calling out these issues in, frankly, often more nuanced ways for a very long time. So there are
folks like Safiya Noble, Sarah Roberts, Ruha Benjamin. I'd look at, you know, Black women like
I'Nasah Crockett and Sydette Harry, who were sort of in 2014 and before calling out racist trolls
that were sort of, you know, germinated from message boards like 4chan.
And there are a lot of people who have actually been looking at some of the issues
that are produced through and amplified through social platforms
and the consolidation of power that is now represented in a handful of tech firms.
So I think, you know, that was one of the primary issues.
And along with that erasure, it erased some of the fundamental harms:
the way in which a lot of these platforms and these algorithmic systems reproduce and amplify
histories of racism, histories of misogyny, you know, who bears the harm of this type of targeted
harassment or, you know, the way in which algorithms represent our world, as Safiya Noble has
shown so brilliantly, and who, frankly, reaps the benefits. And a lot of the people who were
being interviewed were, you know, people who skipped from the side of reaping the benefits, right,
working at a tech company, which I certainly did, to being kind of critics of this technology.
But a lot of the criticism was drawing very heavily on this earlier work, which I think, you know,
I would love to see a number of these people on your podcast, and I would love to see these
critiques kind of enriched with some of those perspectives.
Yeah, and Tristan made the argument.
I mean, he definitely addressed this in the show I did with him.
And he made the argument that since it was these people who had built the software in the
beginning, it was going to be a powerful device when you started to show visually the fact that
these were the people who had actually, you know, gone on and built it. And now they were saying
what we've built is a bit of a Frankenstein. And that's what makes the film powerful. So I'm
curious what you think about his defense from that standpoint. Sure. I mean, I think that is an
argument, but it doesn't obviate the fact that those are the people receiving a platform, that a lot
of people who are first kind of learning about some of these issues are learning it from that
perspective and that the voices you raise up, the people you represent, matter a lot in these
debates, right? And that there was, you know, there was a lot of prior art, frankly. This is not,
you know, these are not a brand new set of problems that just sort of occurred to folks, right? There's
been, you know, decades of work and inquiry around these problems that was largely ignored,
dismissed, or considered sort of, you know, that's the byproduct, the externality of ultimately
positive disruption, right? And it wasn't really taken seriously until
wealthy white men, frankly, in Silicon Valley began to feel some of the effects themselves.
Yeah, I definitely see that criticism. I think it's fair. Let's get to the argument itself.
What did they miss? I think you mentioned there wasn't enough of a focus on the victims or enough of a
focus on who's going to get rich from this stuff. You didn't think that they did a decent job.
And, you know, I mean, in the previous show I did with Tristan, it wasn't all, you know, one big,
one big hug. There was definitely, I think that we went through the criticism of the film a lot. And now,
I guess, like, in our discussion, maybe I'll, you know, play the other side. But, like, there was a
discussion of the rise of nationalism. There was a discussion of the rise of isolation and loneliness
and polarization in our society. So what did they miss in terms of a byproduct of these algorithms,
of these social media platforms in terms of who they're hurting? What did they look over?
Yeah, I mean, I guess I will, I'll highlight a couple of things that I think are really important in any analysis of tech and its social implications.
And the first thing that really troubled me was this persistent picture of these types of technologies, these social media feeds, the kind of algorithmic systems that, you know, helped to curate and surface some of this content, et cetera, as almost superhuman, right?
You heard phrases like "can hack into the brain stem," right?
Things that really paint this as a feat of superior technology.
This is "innovation has reached the point where now it's turned on us," right?
And we are but these lizard brain plebeians who are now bearing the consequences of this tech.
And I think that ignores the fact that a lot of this isn't actually the product of innovation, right?
It's the product of a significant concentration of power and resources.
It's not progress, right?
It's the fact that we all are now more or less conscripted to carry these as part of interacting in our daily work lives, our social lives, as being part of, you know, the world around us.
And that's only increased during COVID.
I think this ultimately perpetuates a myth that, you know, these companies themselves kind of tell, that this technology is superhuman, that, you know, it's capable of things like hacking into our lizard brains and, you know, completely taking over
our subjectivities.
And I think that, you know, that also paints a picture
that this technology is somehow impossible to resist,
that we can't push back against it,
that we can't organize against it, right?
I think, you know, and I think that's a real problem
because it does sort of put us in a position
of being these sort of, you know, these kind of doughy subjects
who are, you know, incapable of resisting.
And that, again, I think there's much more to the story.
Right.
I want to narrow in on your argument a little bit because on one hand, you know, you mentioned
that these are a product of power.
I mean, let me see if I can extrapolate, right?
Saying that, like, we have to hold our phone.
We have to be attached to our computers because that's what, you know, the corporate
structure today makes us do.
Like, we need to be, you know, available 24-7.
And then there's, there's an ability for us to organize and push back, you know, as people
to say, maybe this isn't exactly what we want.
Is that, is that, am I getting it right?
Is it the fact that, like, when you think about power, there are corporations that are having us use these devices and our ability to organize is something we should be thinking about more of, or am I off on that?
I mean, I think when I think about power here, I'm taking kind of a historical materialist analysis, right?
I'm looking at these firms, right, as these sort of centers of almost improbable power in this case.
And we have about five, you know, companies in the Western context
that are dominating kind of this, you know, kind of Big Tech, as we call them, right?
And these firms are a product of the kind of commercialization of computational networked
infrastructure, so, you know, the Internet, and the product of kind of the development of
advertising technology and other, you know, other platform services that allowed them to
gain, you know, a massive foothold with infrastructure, massive data collection
and pipelines, such that, you know, anyone who carries this is continually contributing data, you know,
often to all five of these companies.
And that's a phone for our listeners at home, yeah.
Yeah.
For the people not watching the video, I was holding up my cell phone, my Android.
And so, you know, we have to look, these companies represent extraordinary powers over our
lives, not magical, right?
That power is reflected in their ability to, you know, to give away platforms for education to all of our school districts, right, to replace other sorts of social fora, right?
Instead of, you know, having the debate on CNN, we have the debate on, you know, YouTube now, to begin to become the spaces for our commerce, for our sociality, et cetera, and to then financialize and commodify those, you know, those roles
in that ecology. So I think it's, you know, we need to, we need to also analyze kind of the
material power that these firms have and, you know, and look a little bit more closely at how
these technologies work. Okay, we're going to head right to break and come back with a little bit
of more of a discussion about the activism that you touched on and sort of what's happening
or what happened inside Google and where it goes today. All right, we'll be back in a moment
after this on the Big Technology Podcast here at the World Summit AI.
And we're back on the Big Technology Podcast with Meredith Whittaker, a former Google employee
who founded Google's open research group.
And we're talking at the World Summit AI.
And, you know, the first part of this discussion, we talked a little bit about the social
dilemma movie, which, again, tends to blame algorithms and the business model of big technology
companies for all the world's problems.
I think we did a good job discussing that with a bit more nuance,
talking about where that film could have been better.
Now I want to drill down
and talk a little bit more about your activism
inside Google, Meredith.
Because, you know, as a reporter watching this
from the outside, I had been speaking to folks.
And it was always clear that, you know,
you had played a pretty central role
in what happened inside the company.
Now you can take a look a little bit at,
you know, with the benefit of hindsight,
you did say you were retaliated against
and then ended up leaving the company.
We can get into that.
And, but now you've been out of Google for a bit.
Do you think this activism worked inside the company?
What does it mean that you're not there anymore in terms of whether it will persist?
Yeah, I, I certainly think it works, right?
But I don't, you know, again, I don't frame that type of organizing as, as, you know,
having a goal and then deciding, right?
The goal was both to push back against, you know, unethical,
immoral business decisions and the, you know, the... And those decisions, just for the benefit of
the audience, I think we were talking about the use of AI in warfare, the decision to, you know, pay out
$90 million to someone who was, you know, accused of sexual harassment. Uh, and then there was... Yes,
we talked about the military, we talked about that. Um, yeah, and there was, you know, it was, yes, all of
those things. I think what, and the, uh, the inequitable treatment of the contract...
Right. Laborers.
Workforce, which made up more than half of our colleagues, but were not, you know,
afforded the privileges of full-time work that you think about when you think about,
you know, the, the glorious tech company workforce. Right. And the scorecard on that was they
ended up not renewing the contract with Maven. I think that people who were, uh,
also accused of misbehavior inside Google were not awarded payouts.
They were sort of summarily fired.
And then the laborers, I think the laborers is still an open question.
But sorry, go ahead.
Yeah, they changed some policies in the right direction, you know, resistantly.
And this is, again, you know, this is an ongoing struggle.
This isn't something you win once by changing executive's minds because they finally see the light, right?
This is, you know, again, we're dealing with capitalist logics, and capitalist logics dictate that, you know, ultimately the objective function, to use an AI term, of any firm is, you know, continued revenue growth, continued growth, you know, exponential growth forever over time, right?
These kind of impossible goals that we're beginning to question more and more as we see sort of the, the fragility of the planet we live on and the harm that type of operating in the world has done
in so many ways. But beyond that, I think, you know, we need to continue to, you know,
that type of organizing is meant both to win these specific, you know, goals, right? We want to
make sure that all of our colleagues have, you know, a dignified job, are not sleeping in their
cars, right, are making a living wage, have health care, and that we erase these sort of, you know,
two-class worker systems, right? We really want to do that because that's justice now. While doing
that, we also want to build this sort of collective muscle to gain the power
to make these decisions ourselves, right, to have a, you know, more collective decision-making and not
leave these to, you know, a handful of people at the very top whose duty is to the board, whose
duty is to shareholders, who ultimately are sort of calibrating their decision-making
around those capitalist incentives. Right, but I also want to talk about, you know, who's going to
make the decisions inside the company. You know, I think laborers, that's something we could both
agree on. They should be paid more and treated better. There were some,
other, you know, like there were some more controversial protests, or there were some protests
over some more controversial issues inside Google, for instance, like the funding of the
Conservative Political Action Conference (CPAC) in Washington, D.C. And then there have
definitely been debates over Maven and whether, you know, whether it makes sense for tech
companies, you know, to work with the military. So I wonder, like, the employee activism definitely
takes a certain, you know, political bent and, you know, the leaders of the companies obviously
have theirs. But we talked about how these are very powerful companies. And shouldn't we have like
a more, you know, democratic type of way of deciding what these companies should do
when they actually implement these policies that have a big
impact on the world? What do you think about that? Like, should it be, you know, should it be
in the hands of this one group of employees? And especially when you're looking at the outsized
power that a company like Google has, right?
You know, it's staggering to think about how much extraordinarily intimate
information that company has about billions of people, literally, right?
This is, you know, this is, you know, rooms full of Stasi dossiers on each one of us, right?
And it's staggering to think about the way in which that company is able to use that data,
that information to, you know, create AI models, to create,
other services that are then sort of, you know,
making determinations in ways throughout our social institutions, right?
So there's two levels on which I think, you know,
we really need to take this power seriously
and recognize, you know, the risks there.
And certainly what I'm not suggesting is that, you know,
a handful of, you know, 100,000 or 200,000 people in the world
should be the arbiters of all of those decisions.
But, you know, part of the work we
were doing organizing was also building, you know, whistleblower networks, right, you know,
beginning to build the connective tissue with other social movements so that, you know, we could
kind of organize in ways that hinted at that type of democratic decision-making that is
ultimately going to be, you know, extraordinarily necessary to, you know, create or recreate
technology that, you know, could serve the public interest. But I, you know, I agree with you
that the goal is not just to build this sort of, you know, hermetically sealed workplace democracy
at Google, right, especially given the issues with representation, the issues with misogyny
and racism at that company, right? We don't want just like, you know, a larger collection
of the same people making those decisions. But that, you know, the politics of that was to
open up the space for at least more collective discussion and decision making. And ultimately,
the goal would be to sort of link with social movements and take their lead. And I think the No Tech
for ICE movement is one example where you saw tech workers across the industry taking
the lead from people who do immigration policy, immigration advocacy, you know, on the U.S.
southern border and really understand the context and what it means to be, you know, hunted
and tracked by this technology, communicating that to the people who don't have that experience
but may have an understanding of how these systems work to build a campaign that is then
pushing back against the companies who are provisioning those types of systems to ICE.
Yeah, and I want to, I'm going to get back to a little bit of the political activism,
but I want to just do a quick digression, because you mentioned the Stasi dossiers.
Can employees at Google just access, like, basically all your, you know,
go into someone's personal information and access it?
So elaborate a little bit on what you mean by the dossier.
Is it simply somebody being represented?
Yeah.
Yeah, so it's someone being represented by numbers.
Sorry, go ahead.
Yeah. I mean, there are people who can access different parts of that, right? And it's not, you know, I'm using that as a metaphor because it's more easily graspable than, like, you know, the different shards of a database where different parts of that information may not even be identifiable to me without sort of, you know, matching that to something, you know, whatever.
But, you know, were there kind of permission, were they to spend a lot of money and a lot of time reconfiguring their systems
so that everyone could have access, that would be possible, right?
They do, you know, collect that information.
And at this point, I will say, like, it is not easy to access that information.
They log that very strenuously, and that's in part because, you know,
when I started back in 2006, it was a lot easier to access that information, and there were a couple of incidents.
Yeah, it has been interesting speaking to employees who have worked across,
let's say Google, Facebook, and Apple,
and say Facebook and Google they felt were fairly open
and Apple was like Fort Knox when it came to user data
and it's not part of their marketing.
But we could spend an hour talking about that.
I want to go back to the politics stuff.
So Brian Armstrong, CEO of Coinbase,
you know, caused a bit of a stir in Silicon Valley
when he banned employee activism, political activism,
you know, within the company.
And, you know, I thought about that and I said,
well, we've definitely had like some interesting results
from employee activism inside tech companies over the past couple of years.
But on the other hand, like, maybe it's better for employees, you know, political energy
to be channeled through normal political channels versus, you know, trying to work within
their company to have them, you know, make statements.
I think about, like, the energy people could put in, you know, try to work through, like,
the way that the American political system operates, you know, versus, you know, working to get
their CEO, you know, to take a political stance or the other. And maybe the energy would be
better placed, you know, working through the traditional political system. So I'm curious
what you made of the move and what you sort of think about, you know, how people's energy is
best spent. Yeah. I mean, there's a lot of questions I have about that framing because when
you think about the traditional political system, I'm like, are you thinking about it
without the voter disenfranchisement
that has been part of the far right agenda
for 20 years, driven by Karl Rove and others?
Are you thinking about it sort of with that, right?
Are you thinking about it before Citizens United
when, you know, kind of corporate donations
became corporate speech
and you had millions and billions of dollars
of kind of corporate dark money
flooding into these campaigns or before that, right?
Like what conditions of normal politics
are we talking about,
such that, you know,
volunteering to get out the vote
would be as effective as, you know, organizing for worker power? And I guess that's a question
we all have to wrestle with because I think what we're dealing with right now is an extraordinarily
atrophied, if not broken, political system that has, you know, been at the receiving end of
legal activism and lobbying and a really organized campaign by the far right for many, many, many
years. And, you know, right now we're speaking, I think that the Amy Coney Barrett hearing is
happening right now, which could further gut, you know, the last threads of, you know,
voter protections that we have. So I don't, you know, again, I want to, I want to be really
careful about that frame. And I mean, yeah, go ahead. Go ahead. No, no. I was just going to say
that, like, these are, I mean, I think these are real issues, but like, it's also like, I don't know,
like, if you, if folks don't like the way that the political system is operating, you know,
it's, it's one thing to, you know, say it's wrong. It's another thing to throw up their hands and
say, well, this is, you know, it's kind of broken and we can't fix it. And I also wonder if the
energy would be well spent trying to push back against some of the things that you're talking
about. Yeah. And I think, you know, part of trying to push back against that would be things like
pushing back against the sort of, you know, slush money that the corporate PACs are pushing into
far-right causes that don't, you know, represent the views of the people who are there or represent
you know, arguably the best interests of the public, right?
So, you know, again, I think we can't ignore the outsized influence,
like vastly, vastly, vastly orders of magnitude,
outsized influence that large corporations have in shaping our political system
and the way in which sort of individual goodwill and sort of volunteerism doesn't even,
you know, it doesn't even kind of come close to ranking against, you know,
the way these companies are able to operate.
So, you know, again, this isn't saying give up on the political system, right?
Like I think, you know, try all tools, you know,
definitely vote, right?
Get your parents to vote.
Like, do the work.
But I think, you know, frankly, I think worker organizing is also politics, right?
I think when you, you know, the Coinbase story, one of the pieces that I often hear
missing from this story is that the CEO wrote that, you know, that sort of polemic blog post
after he had been challenged by a number of workers who ultimately staged a walkout because
he wouldn't say Black Lives Matter.
So there was context already for that.
right? And, you know, what do you make of a CEO who won't say Black Lives Matter
during a time of the unprecedented rise of sort of white supremacy and a kind of veering
toward authoritarianism? Like, that is political. Yeah, I mean, that should be an easy,
easy one. But there's more, by the way, I think that there's more to it than just, you know,
volunteerism. Like, people can run for office. They can advocate for laws to be changed. Okay. So
you got retaliated against, you left Google. I think Claire Stapleton also left. Inside Amazon, another
company I cover, I've spoken with Tim Bray, a former Amazon VP who left because of this; the whistleblowers
there were fired. Um, and so I wonder what's going to happen now that it seems like most of the activists,
I mean, not only you and Claire, but it seems like a good chunk of people who led the walkout,
are out of Google now. So what's going to happen to these movements, given that a
lot of people who led them have now left, and maybe the people, you know, who might take your
place probably feel fearful of continuing on given what happened?
Well, happily, there's still a lot of organizing going on at these companies that I'm aware of.
I think, you know, the good news is that there's a lot of ways to organize with your colleagues
that don't involve kind of a public onslaught that engages the media, which was very much
part of our strategy. But, you know, there are a lot of leaders that people don't know about
that were part of that organizing, right? A lot of people who didn't, you know, for one reason
or another, didn't want to take the risk of being sort of public with that, right? And we each
made our own choice. But, you know, again, I don't, I certainly don't think this sort of diminishes
the strength of the organizing. And I would caution against sort of equating visibility with, you know,
continuity. There's a lot of organizing that's continuing. And as you see with, you know,
the Coinbase, right? Like, you know, we now have tools in our toolbox across tech, like the
walkout, right? Like, you know, a number of Facebook workers who've sort of whistleblown and written
their stories as they leave that are becoming kind of common sense. And I think that's, you know,
that's one of the ways this type of organizing and this type of consciousness, you know, permeates
over time. And it's certainly, you know, there's certainly continuity between the organizing
that was happening when I was there and what we're seeing now. And I think what we're seeing
now is like learning from some of the mistakes that I and others made, right? There's, you know,
developing sort of stronger and more precise muscles to continue, you know, to continue this
work. Yeah, no doubt. And I mean, I covered this a little bit in my book. I'm talking about
the run-up to your protest action. But what was notable about
it was that, there was always dissent inside Google, but it engaged the outside world and the press in particular in a way that, you know, I don't think had happened inside Google before.
Okay, let's take one more quick break and come back for discussion about AI ethics and politics, and we will be back in just a moment.
And we're back here for one final segment with Meredith Whittaker, who is the co-founder of the AI Now Institute.
And Meredith, you're one of the leaders in the field of AI ethics.
And I feel like we should talk about this because, you know, I think for a lot of people,
the term AI ethics has become a bit of a lightning rod where people see it as a way,
as a vehicle through which, you know, political views, you know, are potentially injected into the tech companies.
And we talked a little bit about conservatives when they see AI ethics.
They say, well, is it your ethics or is it our ethics?
You know, forgive me, but I'm going to... I don't
really like his methods, but I found this one slide that James O'Keefe unearthed was interesting.
He has this slide talking about how, you know, algorithms are programmed and then media is
filtered, you know, through those algorithms and then people are programmed. I'm sure you've seen
the slide. And so I'm just kind of curious what you make of that slide and whether you think
that these algorithms are actually programming people. No, I do not. I find, you know, the entire
project he's a part of, extremely problematic and, like, very, very, very flimsy, right?
And I think the critique I offered around The Social Dilemma and this picture of tech as a kind
of, you know, almost godlike force that's able to subdue us mere mortals with the power
of its algorithms.
Like, again, no, that's not what's going on.
And, you know, again, that, you know, that sort of bolsters some of the rhetoric that is
ironically coming from these tech companies themselves that are claiming that, you know,
these systems can do a lot of things that they've never been proven to do.
So no, that was an internal Google slide, I think.
So I, well, I would disagree with the person who made it.
I don't, you know, again, like, I don't spend a lot of time, um,
digesting Project Veritas.
Yeah.
No, I look, I'm not a fan of the methods, as I mentioned.
But, like, I feel like, yeah, there are moments where it's worth, like, taking a look at some of the material they unearthed whether the methods are good or not and then talk about it.
Yeah, I don't, like, that's, I don't really, I'm not, I'm less concerned with who said it, although that's definitely, you know, something we need to take into consideration.
Like, that's not true, right?
That's not how these things work.
And it's feeding, you know, dual narratives, right,
in that I think the tech companies have one interest in presenting this technology as infallible
because it justifies the proliferation of this technology into domains where they are going to make money,
right? And the far right has another interest in, you know, presenting these technologies as like scary
bogeymen that we need to be very frightened of because it perpetuates a kind of campaign to subdue
these companies and ultimately bend them to the will of the far right. Yeah. And so I'm actually
curious about that. So can you talk a little bit about how, like, the company rhetoric is
playing into that far-right campaign? And what do you think the far right's goals are, you know, if we
talk about, like, them just kind of seizing onto AI ethics as one thing, as a battleground that they're
interested in fighting on? What do you think they're aiming to do? Well, I think, you know, I don't
see as much of the, you know, the far-right kind of fight around issues of AI in politics,
and I also have problems with the term AI ethics, because I think it's just, you know, almost so
broad as to be meaningless, right? But, you know, there was, there was organizing we did around
a sort of ethics review board that Google put together, you know, in effect to try to sort of
pacify some of the dissents around the choices they were making around sort of AI in the
military, et cetera, right? And on that review board, they had Kay Cole James, who was the head
of the Heritage Foundation, a deeply far-right organization that has taken, you know, she personally
and the organization as a whole have taken a number of fairly virulent anti-LGBTQ positions, right,
anti-trans positions.
And, you know, when you look at the way that these AI systems constructed through machine learning work,
it is very clear to anyone who works with these systems that, you know, again, they aren't,
they aren't intelligent, right?
What they do is they process huge amounts of data, whatever data they have available.
And from that data, they build a model of the world.
So if you show them a bunch of data about cats, they're going to, you know, here's a million, 20 million pictures of cats, right?
They're going to get some picture of what a cat looks like.
And then if you show them a picture of a truck, they're going to be like, that's not a cat.
You show them a picture of a cat.
They're going to be like, I predict that this is a cat, right?
Like, that's how they work, right?
So they're given data from the world we live in, right, that represents our past and our present,
and very, you know, kind of irreducibly, right,
like kind of encodes the values that are in that data, right?
So these systems very often,
and there's now, you know, there's research,
Joy Buolamwini, Timnit Gebru, Deb Raji,
and others have shown this over and over again
that these systems replicate patterns of racism,
patterns of misogyny.
They sort of encode these assumptions
in, you know, their understanding of the world
because, of course, their understanding
is trained on data that is pulled from that
very same world, the very same context.
So when you're looking at the politics of AI systems, when you're looking at their
implications, when you're looking at issues of bias and fairness and, yes, even ethics, you need
to be really, you know, weighting very heavily the views of people who kind of experience
those harms of marginalization.
So you're looking at, you know, people who, you know, understand the dynamics of race
and racialization, right?
people who understand, you know, issues with anti-LGBTQ bias, right?
And so, you know, a lot of our organizing around these issues was sort of pushing back
on, you know, the idea that, you know, a board like that was suitable to make these decisions
and pushing forward the notion that we really needed to center the voices of people who
are most likely to be harmed by these systems.
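To make Whittaker's point concrete, here is a minimal, hypothetical Python sketch (not from the interview; the data, numbers, and variable names are invented for illustration). It trains a simple classifier on synthetic "historical" records generated with a built-in skew against one group, and shows that the resulting model reproduces that skew, scoring otherwise-identical inputs differently.

```python
# Hypothetical sketch: a model "learns the world" only from the data it is
# shown, so skew in that data becomes part of its predictions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic "historical" records: group B got positive outcomes far less
# often than group A at the same underlying skill, purely by construction.
group = rng.integers(0, 2, n)          # 0 = group A, 1 = group B
skill = rng.normal(0.0, 1.0, n)        # the attribute we would like to measure
label = (skill + 1.5 * (group == 0) + rng.normal(0.0, 0.5, n)) > 1.0

model = LogisticRegression().fit(np.column_stack([skill, group]), label)

# Identical "skill," different group: the model reproduces the historical skew.
same_skill = np.array([[0.5, 0], [0.5, 1]])
print(model.predict_proba(same_skill)[:, 1])   # group A scores much higher
```

Nothing "superhuman" is happening in a sketch like this; the model is simply a compressed summary of whatever patterns, including the discriminatory ones, were present in its training data.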
Yeah, there was that case, that very famous case inside Amazon where they built this
recruiting algorithm,
and even when they didn't tell the recruiting algorithm
what the gender of the person was,
it would look for attributes
that would indicate that person was a woman
and then end up removing them from the search.
And it ended up becoming so broken,
I think Amazon gave up on even rolling it out.
Yeah, that's a classic example, right?
And I think, you know, what that also is
is a pretty interesting diagnostic tool
to kind of reflect back
the persistent misogyny that was sort of encoded in Amazon's hiring practices, right?
So we can also think about these systems as sort of, you know, showing us, you know,
some of these uncomfortable and potentially latent issues that are, you know, part of the
construction of the data that, you know, in this case, the resumes, the hiring, the weighting,
the performance reviews, et cetera, that trained this algorithm.
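The proxy effect described above can also be sketched in a few lines. This is a hypothetical illustration, not Amazon's actual system; the feature names and numbers are made up. Even with the protected attribute removed from the inputs, a correlated "keyword" feature lets a model trained on biased historical decisions reconstruct and penalize it.

```python
# Hypothetical sketch of proxy bias: the protected attribute is never given
# to the model, but a correlated feature carries the same signal.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 20_000

gender = rng.integers(0, 2, n)                 # 1 = woman; never shown to the model
skill = rng.normal(0.0, 1.0, n)
# A proxy feature correlated with gender, e.g. a "women's chess club"-style
# keyword appearing on the resume.
proxy = (gender == 1) & (rng.random(n) < 0.6)

# Biased historical labels: past decisions favored men at equal skill.
hired = (skill - 0.8 * gender + rng.normal(0.0, 0.5, n)) > 0.5

# Train WITHOUT the gender column -- only skill and the proxy keyword.
X = np.column_stack([skill, proxy.astype(float)])
model = LogisticRegression().fit(X, hired)

print("coefficient on the proxy keyword:", model.coef_[0][1])
# The coefficient comes out negative: the model downgrades resumes that merely
# contain a word associated with women, echoing the reported behavior.
```

Which is, as Whittaker notes, also why such a model can serve as a diagnostic: the learned weights surface patterns that were already sitting in the historical decisions.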
Yeah, now there is a field that you are in which looks at these issues.
And then you also have the U.S. government, which seems, you know, interested in sort of taking it into its own hands.
We're about to see, it seems like a push from the Department of Justice under Bill Barr to, you know, attack Google in some way.
I mean, obviously there's going to be real antitrust issues that they're looking at, but they're also probably going to be looking at the way that Google treats conservative content.
That's at least the, you know, the previews that we've seen.
That's what it seems like it might be.
So I wonder, do you worry that, even if the field that you're in
does good work, the government will come in with blunt force and end up rolling some
of it back or changing it, or actually, you know, even in bad faith manipulating the way that these
companies treat content and treat algorithms? Yeah, absolutely. And I think, again, these are,
you know, these are political battles. And what the, you know, the far right and the sort of,
you know, proto-fascist government is saying right now is that if you de-platform, you know,
hate speech, if you de-platform, you know, not the content, if you de-platform the sort of, you know,
propaganda arms that we have, you know, that we have kind of implemented through these systems,
then we are going to come after you, right? That's the, you know, there has, there is zero proof
that anti-conservative bias exists. In fact, you know, these companies bend over backwards to not
enforce their terms of service for people like President Trump, right? But you see that
You know, you see that, you know, why, why during the kind of big tech hearings did, you know, Jim Jordan and Representative Gaetz spend so much time just, like, kind of bloviating about this, right?
They're setting up a narrative that is effectively communicating to these companies don't touch this content.
And I think, you know, again, that gets back to the fact that this is organized, that there's actually, you know, there is money going in to the propagation of this type of content through these networks.
Right? It's not magic. It's organization and it's funding.
Yeah. Yeah, it's going to be a very interesting couple of months ahead as we see sort of first this Department of Justice action, maybe the outcome of what we've seen from the House, you know, subcommittee on antitrust. And then, of course, the election. Okay, why don't we end with one question? Let's see. So you tweeted today, which will be tomorrow or whatever a week later by the time this airs. But you said, you know, an interview question
you ask people is: if you could shut down the internet, would you? So if you could shut down the
internet, would you? Oh, well, one of the things I also tweeted when I tweeted that interview
question, which is something I used to ask at Google when I was interviewing people, was that
what I was looking for was someone who would actually take that question seriously, would think
about, you know, in total, do I think this is sort of a positive or a negative force?
And I think, you know, and I would work with people to sort of think through that, right?
You know, what would be the reason for shutting it down?
What would be the reason for, you know, keeping it on, right?
All things considered.
And I think my view on this is that, you know, it's going to be really hard to turn these sort of computational network technologies,
so the Internet plus everything that's been built on top of these, you know, these original protocols and all of the infrastructure that was built out and is now mainly privately owned to, you know, carry this content, et cetera.
It's going to be really hard to sort of repurpose that to good, given the consolidation of power that is right now sort of dominating those infrastructures and, you know, frankly, given these sort of neoliberal capitalist incentives that are, you know, driving those who dominate these infrastructures.
So I think it's, you know, I think it's, you know, I'm certainly not, you know, anti-computers or anti-technology, but I think we have to recognize these as questions of power and control.
and not questions of the hypothetical benevolent uses of these technologies when, you know,
we can look at, you know, what generally actually happens when, you know, people with a certain
set of incentives are the ones governing their use and utility.
Yeah. And I think that the good news is right now we're starting to have these conversations
in a way that we weren't a couple of years ago. And I think your work, Meredith, is a big reason
for it. So we do appreciate it. And it's always great to be able to, you know, talk
about this stuff and go deep into it
and hopefully it leads to more and more discussions.
Okay, I think that's gonna do it for our time.
So I just wanted to say,
thanks everybody out there at the World Summit for AI.
We appreciate you tuning in.
If you like the podcast, we'll have more of these discussions.
We do it every Wednesday.
You can go to Big Technology Podcast
in your app of choice and it will hopefully be there.
We just got on Stitcher, so we're there too.
And for everyone listening,
you know, through one of those podcast apps,
if this is your first time
and you liked it,
please hit subscribe. If you're a longtime listener,
if you could give us a rating, that would help with discoverability.
So we appreciate that.
And most of all, thank you, Meredith.
It's been great sitting down here and speaking with you about these very important issues.
And I hope to continue the conversation.
So thanks for being on this show.
Thank you, Alex.
Thank you all.
Have a great day.
Thank you.