Tech Won't Save Us - AI Hype Distracted Us From Real Problems w/ Timnit Gebru
Episode Date: January 18, 2024

Paris Marx is joined by Timnit Gebru to discuss the past year in AI hype, how AI companies have shaped regulation, and tech's relationship to Israel's military campaign in Gaza.

Timnit Gebru is the founder and executive director of the Distributed AI Research Institute.

Tech Won't Save Us offers a critical perspective on tech, its worldview, and wider society with the goal of inspiring people to demand better tech and a better world. Support the show on Patreon. The podcast is produced by Eric Wickham. Transcripts are by Brigitte Pawliw-Fry.

Also mentioned in this episode:
- Paris is speaking in Montreal on January 20. Details here.
- Billy Perrigo reported on OpenAI lobbying to water down the EU's AI Act.
- Nitasha Tiku wrote about the push to train students in a particular idea of AI.
- Politico has been doing a lot of reporting on the influences on AI policy in the US and UK.
- OpenAI made a submission in the UK to try to get permission to train on copyrighted material.
- Arab workers in the tech industry fear the consequences of speaking out for Palestinian rights.
- 972 Magazine reported on Israel's use of AI to increase its targets in Gaza.
- Jack Poulson chronicles the growing ties between military and tech.
- Timnit mentioned No Tech for Apartheid, Antony Loewenstein's The Palestine Laboratory, and Malcolm Harris' Palo Alto.
Transcript
The point we were making there was that labor is the key issue here.
Because, you know, whether we're talking about discriminatory AI systems or like, you know, face recognition, or we're talking about autonomous weaponry, or we're talking about like generative AI like this, or quote unquote AI art.
The reason being that if they were not able to exploit labor, their market calculations would say that this is not going to work.
And so they won't be so quick to go to market with this stuff.
Hello and welcome to Tech Won't Save Us in partnership with The Nation magazine.
I'm your host, Paris Marx, and this week my guest is Timnit Gebru.
Timnit is the founder and executive director of the Distributed AI Research Institute.
And you might remember that she appeared on the show last year around this time as well,
just as the AI hype was taking off.
And that ended up being our most popular episode of the year last year, because it gave people an introduction to what ChatGPT was all about, why people were getting so excited about AI in that moment, and whether it all really made any sense.
And so now that we are over a year into this cycle, I figured it was a good moment to have
Timnit back on the show to discuss what we have seen over that time, the ideologies of the people who are backing and pushing this industry, and the very real consequences that we're seeing as these AI technologies roll out into the world, not just in kind of our everyday lives, but in the lives of the people who do the labor to make these tools possible, and in the lives of people in places like Gaza,
where AI tools are being used to target them in an ongoing bombing campaign.
So as I said, this is a very wide-ranging conversation where we touch on a lot of
topics kind of recapping what has happened over the past year in AI policy, in ideological developments,
in the arguments that these companies have been making. And of course, as I said, you know, we
end with a discussion of the relationship between this hype cycle and the tech industry more
generally, and what is going on in Gaza right now, as Israel continues its genocidal campaign.
And we will hopefully see some action on that in the International Court of Justice soon, after the case that has been brought by South Africa.
So I don't really think there's much more that needs to be said to introduce this episode.
I will also mention that as you listen to this, as this episode comes out in a few days on January
20th, I will be doing an event in Montreal with Rob Russo and Nashua Khan talking about the Canadian
media. So if you're interested in coming
out and joining that, you can find more information in the show notes. And with that said, if you
enjoy this week's episode, make sure to leave a five-star review on your podcast platform of choice.
And you can also share the show on social media or with any friends or colleagues who you think
would learn from it. And of course, we might be in partnership with The Nation right now,
but they are not paying us. They are just helping to promote the show
and have it reach a wider audience.
So your support is still essential
to putting this together,
to ensuring that we can keep having
these critical conversations
and spreading these essential perspectives
about Silicon Valley.
So if you want to help to make sure
that we can keep doing that,
you can join supporters like Emily from Brooklyn,
Nikita in Cambridge, UK,
Jordan in London,
and Kevin in Raleigh, North Carolina,
by going to patreon.com slash techwontsaveus and becoming a supporter yourself. Thanks so much,
and enjoy this week's conversation. Timnit, welcome back to Tech Won't Save Us.
Thank you for having me. Was it last year that I was here? Because it feels like, I don't know,
so much has happened since then. Yeah, time is hard to keep track of sometimes. You know, you were on the show January of last year, as this AI hype was just kind of taking off, right? ChatGPT came out at the end of November 2022. You know, we were starting to see those kinds of stories in the media around
how it was going to change everything. And you came on the show and kind of gave the listeners
an introduction to what AI is, how this stuff works, what we should actually expect. And now we have had this kind of whole year of hype. And I
wonder kind of just to start with more of a general question, like what have you made of that past
year and the way that kind of AI has been treated and talked about over that period?
What I've made of it is that the people pushing this technology, if we want to call it that, as an end-all, be-all, either the thing that will save everybody or apparently render all of us extinct, and I'm not exactly sure how, have had a really good campaign and have succeeded in infiltrating governments of all kinds, multilateral organizations, you know,
EU, US, UN, whatever you want to call it, media, federal organizations, institutes,
schools, what have you. And yeah, that's really what I've made of it, honestly.
So you're saying that DAIR is not running a campaign to make sure that we nuke the AI facilities and stuff like that to protect us? I don't know if we'd be able to write a TIME op-ed. I don't know if we would be invited to write a TIME op-ed asking anybody to nuke anything. I would think that the FBI would be at my door nuking me before I can get a chance
to say anything like that. We are not composed of people who are
allowed to say things like that. So we're not planning such a campaign anytime soon, but no,
yes, no, we have not done that. Okay. Good to know, but you'll tell me first.
I'll tell you first if we need to nuke anything. Yeah. I'll let you know. So you can hide,
I suppose. I'll get away from the data centers to make sure that I'm protected.
Remember, according to them, we need a few people to ensure that civilization, quote unquote, still exists when everything is being nuked. So some of us maybe could be
some of those people. Sounds good. Sounds good. But, you know, talking about all that, like you
talk about kind of the campaign that these tech CEOs, you know, AI CEOs have really waged over the past year to ensure that their narrative is the one that we're believing. And what that really brings to mind is kind of the campaign that Sam Altman was on earlier this year, where, you know, he was basically on this world tour talking to politicians all over the world to sell this vision of what AI is, how it should work, how it should be regulated.
I feel like it was often presented in the media as this kind of coming out and this kind of like
almost altruistic thing to introduce this to the world. And then you got these reports, for example,
like in Time Magazine, where they reported that, you know, he was kind of lobbying on AI regulations
and the AI Act when he was over there. And we saw in the final version of that, that it kind of aligned with what he wanted it to look like, right?
Yeah. And I remember it's so interesting. Time is really weird because I can't believe all of
that happened last year. They had these articles saying he's the Oppenheimer of AI or he created
this thing and now he's really worried about it. And at the same time, he's saying, you know,
regulate us, but not really like that. Right. So he appeared before the Senate and that was also
last year. But yeah, so he appeared there. He was talking about how this thing is so dangerous and
that it needs to be regulated. There needs to be some new complicated structure to have it regulated.
And then, you know, the lawmakers were literally suggesting that he be the head of some organization that regulates organizations
like his, right? Which is sad to hear people say. And then as he was doing that, and he was
doing those tours, trying to supposedly convince all of the lawmakers how dangerous the thing he
can't help but build is, because, you
know, he still has to build it. They were behind closed doors lobbying heavily against the actual
regulations that were being proposed. And the media was talking about him as if, you know,
he's this altruistic person who's so worried about this thing that he once again can't help but build.
You know, in the UK, especially, they dissolved a whole advisory group of people that they had.
I mean, Neil Lawrence has been one of the most outstanding machine learning researchers for a long time. He knows a lot about data and everything else. He was not buying into the hype, and, you know, the group was dissolved, right? And so this campaign to capture research direction, regulatory direction,
media coverage, and funding, even federal funding direction, I would say in the last year has been
successful. Maybe because of that, I also would say that for the first time, we've seen some media
coverage discussing some of those motivations. I remember
there was a Politico article talking about how the effective altruists and, you know, what Émile and I are calling the TESCREAL bundle, and I know you had them on your show to talk about that, have actually essentially been successful in capturing this conversation. Nitasha Tiku had
a wonderful article talking about the amount of money that was being put into pumping
students into the field of AI to supposedly stop it from killing all of us, right? Because in order
to stop AI from killing all of us, you need more people, more people and more money being pumped
into the field, which is a very logical kind of conclusion to reach.
So on the one hand, they've been so successful and it's been very frustrating for me to watch.
On the other hand, I've also seen more conversations about those actual ideologies,
more stories coming out. And, you know, I hope to see more of those this year.
Absolutely. And there are so many things in that answer that I want to dig into through the course of this conversation. I think just on the
question of time, I find that really interesting, right? Because, you know, we've been talking about
how it's kind of wild that all of this just happened in the past year. It feels like,
you know, things that would have happened over a much larger stretch. And it feels to me like that
kind of shows almost the kind of cyclical
nature of this and how these cycles kind of work in the Valley, right? Where we can have this kind
of compressed interest in something like AI and it's going to change the world. And we all need
to be so kind of focused on it and worried about it and all this kind of stuff. And, you know,
by next year, there will probably be like something else that we'll be getting that
whole range of attention. And we'll be like, AI, like, that's the last thing. Like,
is that still important anymore? That's so last year, you know,
like, is anybody talking about crypto right now? I don't even know, you know, it's like it never
kind of, I mean, some people are talking about it, but it's not like the thing that's going to
save the world. I remember there was some VC or another person having some type of suggestion for unionizing and decentralize
this or something or other, you know, and we're like, that's not how it works, though. You know,
you need interpersonal relationships, like you can only get rid of humans to a certain extent,
you know. So I don't, I'm not seeing those kinds of conversations and all the crypto grifters,
or actually even the pharma grifters have congregated around so-called AI. So that
tells you where the new grift is, right? Absolutely. And, you know, since you were
talking about regulation there and, you know, how these AI companies have so successfully
kind of captured this discussion around it, whether it is in the United States or so many
other parts of the world, obviously the White House has been kind of making gestures towards
AI, has been speaking to AI CEOs and some other people who are in the AI field.
What do you make of the way that the US
government has approached AI regulation over the past year? And do you think it's the type of thing
that you would want to see them doing if they were really taking this seriously? Or does it look like
they're basically following the line from the Sam Altmans of the world? The one organization that
I'll say has not been captured is the FTC. I'll stand behind whatever the FTC is saying.
If you look at how they were talking about it, a number of things are within their jurisdiction. For example, deceptive practices, the way they've been advertising how ChatGPT works,
for instance.
I don't know if they've changed it now, but I remember a few months ago, I think I was looking at their readme files, and, you know, we talk about how whatever their product is doing, it's not really doing understanding or things like that. And maybe that could be up for debate, whatever, you know, even though we have a specific position on that. But even if you take that premise seriously, they don't even try to scope out whatever language, for example, they say that their product understands. You know, they're like, oh, you know, ChatGPT understands general language. These are the kinds of things they're doing, right? And so the FTC came out and said, listen, you know, AI is not an exception.
If your organization is engaged in deceptive practices and you're deceiving customers, that's our jurisdiction.
And that is very different from
the kinds of things that Sam Altman was asking for when he made his appearance, right? He was
acting like this is some new uncharted territory that requires some sort of new governance structure
that we need to create and figure out. Whereas organizations like the FTC are saying, no, that's actually not true.
You are a huge multinational organization, and we have jurisdiction over you. They've done a number of things like that that I really appreciate and that make me think that they're not buying the hype.
Now, contrast that with what Chuck Schumer was doing. I didn't even want to be a part of that, to be honest.
Like, I'm sure I will never, ever get such an invitation after I spoke up about it.
And it's totally fine.
I know they'll never invite me, but maybe they'll get the memo and invite someone else
or a number of other people, and not have the kind of thing that they did, where they literally had Elon Musk and all the different CEOs there for their AI Insight Forum. I think that's what they called it. And they had a couple of people just as window
dressing, right? So that we don't criticize them for taking that approach. I got an email like a day before or something, kind of asking, and I chatted and asked around, and it was because they wanted to do a backfill, you know, a last-minute thing.
I'm like, I don't want to be a part of it.
What would I accomplish by being a part of this?
It's already set, right?
Like my voice is not really going to do anything except to put a stamp of approval on what they're doing.
You mean you didn't want to go meet Elon Musk and shake his hand?
Oh, you know how I love him, right?
You know, you and I have that in common, right? That's the one and only thing we have in common. I
mean, I didn't do a whole series on him like you did. So maybe my love doesn't go that far,
you know, but I already met him in 2016. I can't even, you know, I already regretted that. Like
he came to Stanford and it's just so ridiculous. He just kept on talking about stuff that makes no sense.
And I should have put two and two together, right? Because I asked him afterwards,
you know, why is he worried about AI as an existential risk? What about climate change?
And he said, well, climate change is not going to kill like every single human. It's going to be,
you know, it's exactly the kind of argument that these existential risk people have. But at that time, you know, he was starting to talk a lot about that. And I hadn't put that analysis together, you know, put two and two together with TESCREAL and all that. But, you know, anyways, I don't need more of that now, right? You know,
so there's that. And then there's the White House executive order that just came out. And I have to
tell you, I have not really looked at all of it, because that was in the middle of a genocide that they are also announcing and funding. And so to me, it was just like, you know, how could I celebrate that while we are seeing what they're doing? And it just kind of reframed everything I'm thinking about, because how can I do that when I'm seeing the weapons and all of, you know, the book The Palestine Laboratory and all this? We just had an event to amplify the No Tech for Apartheid campaign, but it was in the middle of that. And so I'm like, I cannot celebrate this
right now. You know, I know that there are some transparency requirements and
other things that I appreciate, but still, you know, it was in the middle of that. And so
how could I go and say, you know, congratulations, thank you when this is what they're doing?
Absolutely. There are much bigger issues out there to deal with and to be looking at, right?
At a moment when this kind of AI executive order is coming out. And I don't want to make it seem
like we're just moving on from that. I do want to come back to the issue of what's happening in
Palestine and the campaign that Israel is in the process of carrying out with American assistance
a bit later in our conversation. You talked about how these people in the AI industry are seeing
what is going on in kind of very different ways or presenting it in particular ways to the public, right? In terms of the ideologies that are kind
of present in this industry. And as you're saying, you know, I talked to Emil about that last year,
the end of last year, but I wanted to discuss that with you as well, because it does seem like,
obviously, there are these ideologies that have been present in Silicon Valley for a long time,
that kind of position technology as the way that we're going to kind of solve all the problems in the world and all this kind of stuff. And even as you were talking about Sam Altman there and what
he was saying at the hearings in the government versus kind of what he's been saying out in the
world, it seems like he has been kind of using arguments from both sides of this, where on the
one hand, he's saying,
we need to be paying attention to AI, we need to be regulating it because it's this massive threat to humanity. But at the same time, he is kind of pushing for this acceleration of the rollout of AI
into like every facet of our lives. And arguing that, you know, it's going to be our doctor and
our teacher and going to be an assistant for everybody and all this kind of stuff. So kind of what do you make of how these people, the Altmans of the world,
the Andreessens, the other folks are positioning AI and what that says about their kind of views
on it, but also, I think, their kind of ideologies more generally. Oh yeah. I forgot about Marc Andreessen and his manifesto. It's just too much to cover, you know, and now there's this new e/acc thing that I don't even know if we can fit in our TESCREAL acronym. But, you know, again,
same shit, different day, I guess, same movie, different day. You know, they want to be saviors,
right? And they want to be the ones who save humanity in one way or another.
It's a secular religion, right?
You just kind of want to believe in it.
And so, of course, if you so strongly believe that you're doing something good and it's
really important, so it's fine that you're amassing all that wealth and money because
what you're doing is saving humanity.
And what's really interesting to me is there are different factions that are kind of fighting against each other. But for me, it's like, they're all the
same. There's the billionaires like Altman and Andreessen and all of these, you know,
tech leaders, Silicon Valley leaders. There are like the quote unquote philosophers. I don't even
know if you want to call them that, like the, you know, EA people and Nick Bostrom and all these
people. There are, like, people like Max Tegmark. I mean, that's a good example here. So if you can stomach it, look back at some of these Singularity Summit or effective altruism lectures, you know. And this is why I appreciate Émile, because when we collaborate, Émile can, like, go through
all of those things and read them and get the quotes and stuff
because I'm like, every single line I read, I'm just so angry that I have to do it.
And there was this slide from Max Tegmark. I don't remember if it was 2015 or 2016,
from the Effective Altruism Conference, where literally the title of this slide is,
if we don't develop technology, we are doomed, right? That's what he's saying.
If we do not develop technology, humanity is doomed. And he has this chart and, you know,
climate change, question mark, like that's not an exact doom scenario, right? And then after that,
there's like a cosmocalypse, I think is what he called it. At the same time, you know, he has his Future of Life, you know, Institute or whatever, or Future of Humanity, future of whatever, you know, that's what they're all called. I get them confused.
I know, it's like future of X. Now, you know, future is one of the words that they've just ruined for me. And funded by Elon Musk and all the best people. Around that time, they had a letter,
a petition, you know, about existential risks of AI and stuff like that. The same kind of letter
that they just recently had, I think it was in March, you know, that like, it was all over the
news, right? We have to worry about the existential risks of AI. It's the same dude who was saying that if we don't develop technology,
we are doomed. Now he gets to make the money saying that. He gets to make the money also
saying that it's an existential risk to humanity. They're just circling money around themselves.
And that's what they're doing, because there was even, like, Geoff Hinton, the godfather of whatever, deep learning, you know, who has started to say that, you know, he's so worried about existential risks and ChatGPT. If you look at his tweets a few months prior to that, he was starting to talk about how ChatGPT is the world's butterfly. It took the world's data, and then it turned into a butterfly. And a couple of months later, he's apparently super worried
about existential risks. He's making the press rounds. And the thing is that they get to fool us
both times. That's the thing that is so upsetting about it. They get to take all the money to tell
us how it's the best thing and then take all the money saying they are also the solution.
They're the problem and they're the solution. And this is a sign of
amassing power and privilege. And when we talk about the real risks of AI versus whatever they're thinking about, in my opinion, it's because they can't fathom any of the real, mundane things affecting human beings getting to them. It's kind of like what you were talking
about in your book about the different solutions all the billionaires come up with, like flying cars and, you know, Hyperloop or whatever,
it's not going to work. But this is the thing they're really fantasizing about, they're thinking
about, right? It's not like what the quote unquote masses experience. So, you know, it's really
nothing new, but it's just sort of a new technology to dress up these ideologies.
It's fascinating to hear you talk about that
and bring up Geoffrey Hinton again,
because I remember in the interviews with him,
he was explicitly asked like,
you know, what about these more kind of like real risks
that people like you were drawing attention to,
like how it's being used against people in the here and now.
And he was like, you know,
very explicitly like dismissing that and saying like,
no, it's this big existential risk that is the problem, not like the real things that actually affect real people that are happening right now.
Yeah, he's like, you know, specifically, I remember they asked him, so what about the stuff she said, right? And he was like, you know, while discrimination is an issue, hypothetically, I don't find it as serious as the idea that these things can be superintelligent and this and that and such and such. You know what I mean? Yeah. It's so ridiculous. Like it's so frustrating, you know?
Yeah. Especially because like, you know, these are the types of people that like,
that the media is like more likely to listen to. And then thus these are the perspectives that the
public hears more and more. And so that like informs the whole kind of conversation that we
end up having. And it's like, we're being totally misled as to what we should be discussing around
AI, what we should be concerned about, whether we should even be talking about it in this way,
but because the people in the industry have that much influence because, you know, they have all
these people like the Geoffrey Hintons who will kind of repeat that stuff. And then the media just seems to be like, I don't know, just seems to fall for it, or seems to not really be interested in the real story. And again, like there's plenty of journalists out there who have
written great stories about AI and who have challenged this perspective, but like the
general thing that we hear, the kind of general framing that gets presented is what the Sam Altmans and what the Jeffrey Hintons are saying,
not what the Timnit Gebru's and the Emily M. Bender's and people like you are saying.
Yeah. And it's really interesting, because people forget that I am a technologist. Like I studied engineering and math and science, and I wanted to be a scientist. I didn't want to go around telling people, no, don't do this.
But it's taken away the joy of actually thinking about technology, right?
Because it's like this nightmare that we're living in.
And so to present us as just the naysayers or the whatever, that's because we're not
getting to put forward our visions and because we have to
fight them, right? Like that's the only reason we are doing these things, not because that's,
I don't know, by our very nature, you know, that's what we were looking forward to or anything like
that. And when you're talking about the media, I mean, the influence is huge, right? It's not just
in the US, it is worldwide, right?
When I have interviews in my native languages,
I have interviews in Tigrinya or Amharic,
and they talk to me about AI
and they ask about how do you think
it will impact the African continent?
What are the good things that it can do and all of that?
And is it gonna kill us all?
Is it an existential risk?
And what about labor?
And I'm
thinking, think about the labor context in those countries versus what these people are talking
about. I grew up in a country where most people are farming using exactly the same methodology
as thousands of years ago, with cows, and they don't even have tractors, right? Let alone, you know, autonomous this and that. Labor is much cheaper than goods, for example, because a lot of the goods are stolen from, you know, countries on the continent. So people are not even
encouraged to do contextual analysis of what these people are saying because the media,
the megaphone is so loud, right? Or, when labor is impacted, it's in the way people are exploited, like the workers in Kenya. It's so
interesting how Sam Altman, I don't remember if you saw some of his tweets, talking about how
there is such a concentration of talent at OpenAI because there are only a few hundred employees, comparing it to all of these other
people and all of these other organizations. And of course, he was not counting the millions of
people that data is stolen from, but also the exploited and traumatized workers that they are
benefiting from, right, that they're not paying. So there's very little pushback, and the pushback does not get as loud coverage. And that's also partly because I think they don't talk about movements, right? They talk
about individuals. So they want gods. But then they also wonder. A lot of times they build up these people, and then when that bubble bursts, they do a postmortem, like, how did this happen? Remember Elizabeth Holmes, you know, how many covers she was on, and anybody who said
anything was a naysayer. And now they're just analyzing, how did this happen? Well, who built
up Elon Musk? You did, right? And so like, what are you expecting now that he has amassed all this
power? And a lot of people are wondering, like, you know, he has more power than
multiple governments combined and all of that. Well, who let that happen? You know, the media
had a lot to do with it. Absolutely. And, you know, when you talk there about kind of how the
media is kind of focused on the individual rather than, you know, looking at a movement or looking
at more of a collective, did you have much of an experience with that kind of when the media
spotlight was on you after, you know, the Google firing and things like that?
Like, what was your experience of that in that moment and how the media treated that
and how they kind of treat, you know, individuals and kind of tech in general, I guess?
Yeah, I mean, I think that I definitely noticed that, right?
They wanted to talk about me as an individual.
And I wanted to sort of bring out that, yeah, I mean, I don't
want to discount stuff that I went through as an individual or stuff that I am doing as an
individual. But also I wanted to make sure that they knew that, for example, the reason that all
of this was news in the first place was because I had a lot of people supporting me and there was a
strategy for how to do that. There were people who have never come out publicly
and still work there and who can't just quit or whatever.
And working in the background,
there are collections of people
who were writing petitions, statements.
There were just so many people doing grunt work
in the background, right?
Their job wasn't, like, to be a public face.
Not everybody can be a public face.
So it's more difficult for the media to see things that way. They want to create a godlike figure,
you know? And so then once they have somebody who they think is a godlike figure, they just
elevate them. And then they get surprised when these people have so much power, you know,
like Elon Musk. Absolutely. And I feel like that became so
clear recently, kind of when we saw this drama at OpenAI around Sam Altman, you know, Sam Altman
kind of being deposed as CEO, taken out by the board. And then you had this kind of few days of
back and forth and the media is trying to figure out what is happening. And you have all these
people in tech and on Twitter kind of pushing for Sam Altman to be returned to the post. Meanwhile, we don't even really know why he was removed in the first place. Certainly, there are some rumors around people at the company not agreeing with him, that there were issues with how he managed the workplace, the decisions that
he was making, all those sorts of things. I wonder what you made of that whole episode and how it was
treated, and how Sam Altman was ultimately kind of restored to this position, and it seems like any guardrails that might've been there before have been taken off his leadership now.
Yeah. I remember when that story came out, none of us were expecting anything like that. We were like, whoa, what? So our first reaction was, for a board to make a statement like that and then immediately remove him, what happened? What is coming? That was my first reaction, because I just didn't think that, you know, they would make a public announcement like that as a company and remove him immediately.
I was wondering, like, are they about to be sued?
You know, this was what was running through my head.
And then some people said, well, you know, Annie Altman, his sister's allegations.
I'm like, do you guys know the tech industry?
You really believe that they care about anything to do with sexual assault or harassment?
Like, I mean, I don't understand. Like, do you understand that they silence us? Like, they would punish us instead. No, that had nothing to do with it, I'm pretty sure. Right. That was my
first thought. But, like, again, it was in the middle of the whole Gaza thing, so I wasn't really paying that much attention. But then I remember OpenAI explicitly saying, like,
members of their board are not effective altruists.
And then like nobody was really checking that,
which was ridiculous because they were.
I mean, that was what they were.
The media was asking me about how one of the board members apparently wrote something that was critical of OpenAI.
And then, you know, Sam Altman wanted to suppress it.
And so people were starting to connect that with my story at Google. And I was like, I don't know about that,
you know, because, again, I didn't know the person that well, but from just a quick scan, I'm like, yeah, this is a very effective altruist kind of person, the whole US-China thing, you know. I'm like, I don't know, I don't think it's that similar. So at some point I was like, maybe it has something to do with effective altruism and
the board members being that, because, you know, I don't, that's what I thought. And then of course,
the end results ended up being all the women get removed. And then people were asking me if I would
consider being on that board, which I thought was the funniest thing. I was like, I have said literally since the
inception of this company, if my only choice was to just leave the field entirely and like have to
work with them somehow, it would unequivocally be the second option because, you know, I don't know
how to place it exactly as to the why, but I just really disliked that company from day one,
the whole savior mentality and the way that the media was selling them as a, you know, nonprofit.
So my takeaway from this whole thing, and then, you know, the people were asked to write in
support of him. And then there's the whole Ilya, you know, Sam Altman thing, which was also weird.
So my takeaway from the whole thing was, I did really think that the board didn't seem mature, just to start, because, like, as a company that's so public like that, if you're going to make public announcements and things like that, maybe you should, you know, discuss it. It just didn't seem very mature. But then it seems like there are no guardrails around OpenAI except for Sam Altman. He is the company, the company is him, is how it's being run right now.
And then the end result is for people like Larry Summers to be part of the board, which is excellent.
Like you not only got rid of the women, now you added the guy who's talking about how women can't do STEM and things like that.
So, you know, that's sort of my takeaway of what happened.
Yeah. You think about how there are often arguments about how like, you know, the AIs
are not discriminatory or they're not being influenced by like kind of the culture that's
around them. And it's like, well, you're setting up this whole corporate culture that is like
sidelining women, that is bringing in people like Larry Summers, that, you know, is very demonstrably like discriminatory if you feed it certain prompts and things like
that. And also, I mean, honestly, even when they had women, it was the kind of representation politics thing where, you know, they could have a board of all women, and the way OpenAI is run, I still would think that they don't care about women, but now it's just like, they don't even
care about the optics of it, let alone, you know, actually care. Absolutely. You know, and since we're on the question of OpenAI, there are a few things that I wanted to dig into with regard to it.
You know, you talked earlier about kind of the privacy and the data that's being collected by
these systems. And of course there has been a lot of debate and
discussion recently around all of the data that they use in order to train models like the one
used for ChatGPT. And there have been lawsuits around copyright. Obviously, there has been a
lot of discussion in the past year when you had the actors and the writers unions going on strike
and talking a lot about AI in their contract negotiations and what it might mean for their professions and their industry.
And then just recently, we had OpenAI make a submission to the House of Lords
Communications and Digital Select Committee, where they basically argued that they should be able to
train their models on copyrighted material and not have to pay for it. Because if not,
they say they would not be able to, quote,
train today's leading AI models without using copyrighted materials.
And that limiting training data to just public domain data would not result in kind of like
a high quality model or whatnot.
I wonder what you make of these discussions, kind of the growing debates that people are
having around copyright and what relationship AI models and AI training
should be able to have to them. Because it does seem like, especially going into this year,
it's going to be one of these big fights that's playing out, especially as the New York Times is
suing over this and other things like that. Yeah. I mean, so we wrote a paper with a number
of artists called AI Art and Its Impact on Artists recently. It was great. Well, I mean, if I may say so myself.
Oh, it was a great experience for me working on that paper
because it was a number of artists
whose jobs are on the line because of this.
It's not hypothetical in the future kind of thing, you know?
And some legal scholars and philosophers
and machine learning people.
And we talked a little bit.
The point of the matter is, and we wrote another article, I think it was 2023,
don't quote me on the time, called The Exploited Workers Behind AI.
And it was kind of synthesizing a bunch of research and, you know, a bunch of stuff us
and a bunch of other people have done.
The point we were making there was that
labor is the key issue here. Because, you know, whether we're talking about discriminatory AI
systems, or like, you know, face recognition, or we're talking about autonomous weaponry,
or we're talking about like generative AI like this, or quote unquote, AI art. The reason being
that if they were not able to exploit labor, their market calculations would say that this
is not going to work. And so they won't be so quick to go to market with this stuff.
And that's exactly what the OpenAI people are saying; it's like, what do you mean? If we can't
steal everybody else's work, then we literally cannot make our product. And we're just like,
yeah, exactly. That's kind of what we're telling you, right? You're profiting off of everybody else's work. That's my number one takeaway. But the second one
is, you know, you get to see the kinds of arguments people in AI make to defend this kind of practice.
And one of them is, you know, what we call anthropomorphizing. It's talking about these
systems as if they are their own being. And what we were talking about earlier, about existential risks and all of this, fits into this. Because if you push this agenda to people, to the media, to regulators, that these are systems that have, like, their own mind kind of thing, who knows what they're going to do, then we're not thinking about copyright. We're not thinking about things on earth, right? OpenAI, companies, regulation, theft, labor exploitation: that's not what we're thinking about. We're distracted by thinking about, you know, can this machine be ethical? Can this machine be inspired by the data, just like humans are inspired? So the way they talk about it is to say, no, this is not theft, because it's like, you know, when human artists are learning, they look at other artists.
You know, these are the kinds of arguments they make.
And no, human artists are not just copying.
They're not combining existing data and composition and then spitting something out.
They're putting in their own experiences and coming out with new stuff.
They're doing all sorts of stuff.
And so these people want us to believe that art is nothing more than just combining a
bunch of stuff that already exists and just spitting it out.
And what about when people come up with completely new stuff for the first time? It's like they talk about art as if they actually know what they're talking about. And so it also makes me sad for them, because I feel like the depth of humanity is so small for them if this is the narrative they have. They don't want us to think that they are actually stealing from people and profiting from it, which is what they're doing. They want us to think that they're creating some magical being that can actually solve the world's problems if we just let them do their thing, you know, and/or kill us all. I don't know which one; it could go either way. They want to keep us high up on
that discourse so that we're not really looking at the actual practices that they are engaging in,
which is not complicated. We all understand what theft is. We all understand what
big corporations do. And we all understand what kind of laws we want to guard against those things.
Yeah. It's not surprising at all to see these companies trying to get past copyright regulation when it helps them and wanting to defend it when it doesn't,
right? I think it's so interesting to hear what you say there around kind of the people arguing
that, you know, it's just like how a human reads an article or reads a book or looks at something
and that inspires them to make something else. Because I feel like I see that argument so many
times by people who want to defend this. And it's like, no, the problem is that these systems do not have brains like humans.
They don't think like humans. They don't approach these things like humans. It works completely
differently. And you can't compare these things because you know, these computers are just making
copies and then kind of, you know, developing these kinds of models. But I feel like on the
copyright question, I think that there has been a lot of legitimate criticism of the way that the copyright system is constructed for decades,
right? But I think at the same time, there's a legitimate use of copyright right now in order
to defend media publications, in order to defend artists and their work against the attempt by
these companies to use all of their
work for free to, you know, inform their business models and stuff like that. And I don't think that
those things are actually in conflict at all, or at least they shouldn't be right. Wanting reform
of copyright and also wanting to protect artists and media publications and whatnot.
Yeah. That angle was very interesting for me to hear from people and be like, Oh,
you're a landlord, you want copyright or, you know, stuff like that. And also, it's really interesting,
like you are saying, to see how companies like OpenAI are saying, you know, they have to use
copyrighted material. But then in their terms of service, I don't remember if this is in OpenAI,
but a whole bunch of these generative AI organizations have terms of service saying that you can't use their APIs to build competitors, or as input to other generative AI systems, or something like that.
I'm like, okay, so you want to have these restrictions, but you don't want to
honor anybody else's restrictions. And also, I think that the reason any of us are even talking
about copyright, for me, I'm not a copyright expert. I'm not a legal expert, but I'm just listening to what the artists whose jobs are on the line are saying.
They are trying to exist in the current system that we have and barely make a living. And what's
the point of living also for all of us if we can't even interact with the art that is created by
humans? It's an expression, it's communication, right?
And so there is this thing, this is the only thing that currently is helping them navigate
the situation. If there is something else, I'm pretty sure they'd be happy for that other thing
to exist. But we wrote in our paper that copyright is also not very well equipped right now to even protect these artists. Because, you know, imagine, like,
the courts take forever to do these determinations. You mentioned a number of lawsuits; Karla Ortiz, one of the artists, is, you know, a plaintiff in one of them, right? And so imagine the kind of time and resources it takes to go up against these large organizations. It's not
really a sustainable way, I think, forward. So nobody's saying like everybody loves copyright
and nobody's trying to protect Disney or whatever, or that you can't sing Happy Birthday or something like that, the ridiculous things that they're doing. We're talking about the artists. It's
already difficult to exist as an artist, right? And so why are we trying to take away the few
protections that they have? Exactly. People are using the tools
that are available to them, even if they are imperfect tools, to try to defend what little
kind of power that they have to push back on these things. You mentioned earlier the fact that it was
difficult to engage with these things in a moment when we're seeing a genocide being committed
against people in Gaza and Palestinians more generally, right? I wanted to turn to that
because when we're talking about OpenAI, I think that this discussion is very relevant to what is
going on when we talk about AI as well, right? We talked about Sam Altman, of course, who is the
leader of the company and probably the most influential voice in AI right now. But OpenAI's
head of research platform, Tal Broda, has actually been posting quite a lot about what is going on
in Gaza. He's posted tweets such as, quote, more, no mercy, IDF don't stop, while tweeting images of neighborhoods turned to rubble in Gaza. He's tweeted, quote, don't worry about killing civilians, worry about us, and, quote, there is no Palestine, there never was, and never will be. I wonder what these sorts of tweets and
this kind of approach to this kind of horrific situation going on in Gaza right now, you know,
being committed by the Israeli military and government, tells us about some of these people
in the AI industry and the ideologies that they hold to be able to say things like that or kind
of see this in this light. What it tells me, first of all, is how embold to be able to say things like that or kind of see this in this light.
What it tells me, first of all, is how emboldened you are to say something like that, right? Just
the fact that you feel it's okay to constantly say that, that means you've never had any pushback
to saying those kinds of words. And honestly, actually, I have to tell you, I'm not surprised
at all by those tweets. I am very surprised that
there has been some amount of pushback and that Sam Altman said something about Palestinians,
which is the bare minimum, but I've never seen that in the tech world.
Yeah. Just to say, Sam Altman tweeted on January 4th, quote, Muslim and Arab,
especially Palestinian colleagues in the tech community I've spoken with feel uncomfortable speaking about their recent experiences, often out of fear of
retaliation and damaged career prospects. Our industry should be united in our support of
these colleagues. It is an atrocious time. I continue to hope for a real and lasting peace
and that in the meantime, we can treat each other with empathy. Now, you know, this is a statement
that he put out. As far as I know, there hasn't been any action against Tal Broda for the types of things that he has been saying, which I'm sure make a lot of people who are Palestinian, and even who aren't Palestinian, in Silicon Valley and in the tech industry feel uncomfortable. And people are scared of speaking out because of
the degree of support that exists for what the Israeli military is doing. But sorry, please
continue. Silence is one thing. We need to talk about the tech industry's role in this whole thing,
which is pivotal. So while we're talking about this, I want to mention the No Tech for Apartheid movement that was created by Google and Amazon workers, right? And so any tech worker can go to notechforapartheid.com. I think, yeah, they had a mass call yesterday,
but they've been protesting and they're kind of modeling it after the anti-apartheid activism for
South African apartheid, right? And so to say that the tech industry is silent, it's like, you know, if it was just silence,
it's one thing, but they are actively involved.
There are IDF reservists working at these large tech companies, right?
There are actual members engaged in these horrific acts who are currently employed,
and they have the full support of these organizations.
These organizations are supplying the Israeli military with technological support. And we know
that a lot of the startup scene out of Israel comes out of, like, a military intelligence kind of arm and is transported across the world for surveillance and suppression. And the VC world is very intertwined with that. So it's like the tech
industry is absolutely pivotal to this. And because of that, it is career suicide. I mean, for the last, let's say, two decades maybe that I've been in this space, or even when I was in school, it has supposedly been the scariest thing to talk about. Let me tell you,
even when I started talking about the genocide in Tigray, and I want to talk about it because
it has been heart-wrenching. We have teammates who have been experiencing this genocide,
1 million people dead, currently starving, over 100,000 women raped.
Just think about that.
Out of a population of maybe 6 million people, right?
This is what we're dealing with.
With that, we see the social media companies and how they just don't care, right?
Because they don't have to do anything.
Nobody cares.
The UN does not care.
Anybody.
So it's more of profiting and ignoring.
And in this particular case, it's actively retaliating against you if you say anything.
And I remember Tigrayans even telling me, hey, I know you spoke up, but be careful here.
Be careful right now.
Because that's kind of what we've been told and what we've seen for anyone saying anything.
Because of that, I mean, going back to Tal Broda and his horrific, absolutely horrific post,
how can you have anyone at any company
publicly saying things like that,
thinking that it's okay?
Even with that, that's why I was actually surprised
to see a whole bunch of people pointing that out
and asking for him to be fired, which he should be.
I mean, really, the baseline is like, you should not have genocidal people like that working at
your company or any company. But because that's been the norm for so many years, and we know the
kind of repression that people face and retaliation people face, whether it's protesters or tech
workers. Because of that, I was actually surprised to see this pushback. And unfortunately, it's taking a genocide of these proportions for us to see that, right? But the tech world is absolutely central and pivotal
to the Israeli apartheid and occupation. Absolutely pivotal.
Yeah, I think it's an essential thing
to discuss, right? And I've had Antony Loewenstein, author of The Palestine Laboratory, on the show
in the past to talk about this. And of course, Marwa Fatafta was on last year to talk about
how tech works in this conflict as well. And of course, what was happening on the ground at the
moment that we were speaking. I feel like at a time when we discuss artificial
intelligence and when this is so kind of ever present in the discourses around tech at the
moment, it's impossible to ignore how that is being used in a campaign like what Israel is
carrying out in Gaza. Not just because we know that Israel has been using AI weapons and AI tools for a long time.
And of course, I discussed that with Antony in our conversation. But on top of that, we obviously,
as this kind of AI hype year has been happening, you know, Marc Andreessen, I know, has written a
number of times about how AI would make war less common or less deadly or anything like that. Meanwhile, we have the reports from,
for example, 972 Magazine about the Gospel AI system that we know Israel is using for targeting,
and other reports about how they're supposedly using this really kind of targeted system,
but it's actually ensuring that they can find more targets throughout Gaza in order to hit,
which is leading to much more civilian death. You know, I wonder what your reflections on the kind of AI component of this is.
So there is a 2022 report from Tech Inquiry talking about how more and more tech companies are becoming military contractors. I think they were looking at at least Microsoft, and they're looking at the US and UK governments and how their purchases from big tech companies are dominated by deals with military, intelligence, and law enforcement agencies. Jack Poulson is the person who started this organization; he also left Google over other concerns. So there's that. There is the fact that artificial intelligence was born out of the military. That's kind of, you know, they're the ones who wanted these things. Like, there is a book called The Birth of Computer Vision, which I haven't read yet but am hoping to, about, again, how computer vision specifically was birthed out of military interests, autonomous weapons. This is just the history of AI. And so for Marc Andreessen to talk
about how AI makes war less deadly, you know; even when you look at drone warfare, what these things do is inflict less harm on the entity that is doing the drone bomb attacks and more harm on the people experiencing them. Like these kids who talk about how they're traumatized by blue skies, because that's when the drones come. But you're not going to see the same drones coming in New York City, right? Then it's going to be, you know, all hell breaking loose on whoever does it. The entity that has all these systems is able to inflict as much pain as possible without facing the impacts. And so that's why I'm extremely
worried about the increasing movement towards that, even though I know the field was always kind of going in that direction. And Silicon Valley is going more and more in that
direction. A whole bunch of people have been writing about how Silicon Valley needs to collaborate with the American government and the military and things like that. So I'm definitely not looking forward to more of that happening.
Yeah, and it's not just the people writing those pieces who are pushing that, but it's far beyond that as well, right? As you say, Google, Microsoft, Amazon, they're all contracting with the US military, but also, I believe, the Israeli military, you know, on cloud and
things like this. Obviously, SpaceX is a big military contractor. They just launched a US
spy satellite or military satellite into orbit, you know, and then we have all these other kind
of military companies like Anduril, founded by Palmer Luckey. Obviously, there's Palantir with Peter Thiel. There's just all
these companies that are increasingly focused on getting these contracts from the military.
And, you know, not only as the world seems to be moving toward more conflict, but they seem
incentivized to want to see that happen because it would be good for the bottom line, right?
Yeah. I mean, when we see the military budget, it's just, like, unlimited. It's just a bottomless pit. I heard that sometimes they have to buy things just to say that they're spending it so that their budget is not cut. And so, I mean, I just wonder, what if we lived in a world where the bottomless pit was the budget for housing or food? It just does not make any sense, right? It does not make any sense. And so it makes sense, if you're Silicon Valley and you're seeing this bottomless pit of a budget, that's what you want to get a piece of. And of
course you kind of delude yourself into believing that's also the right thing to do. And it's nothing
new, right? Like the book Palo Alto, you know, I see it in your background, which I'm also reading, right? It's nothing new,
because again, it's been completely intertwined with the military. But I kind of feel like some
of the tech companies were trying to act like they were more idealistic, you know, like the newer ones is what I mean, like Facebook, Google, et cetera, or Apple. They were trying to act like, you know, they're not like that. And I kind of feel like probably now there's a new phase where
that's not going to be the case. Absolutely. We can see exactly what they are. And it's clear
that they're just as bad as any of the rest of them. You know, it's difficult to kind of pivot
away from that conversation to wrap up, you know, everything that we've been talking about, but
just to kind of close off this broader conversation that we've been having, you know, we're now just about a year into this
wave of AI hype. I wonder where you see it going from here. Do you think that this hype is already
kind of, you know, starting to decline for this, this kind of cycle of it? And how has seeing what
has happened over the past year shaped the kind of work that you're doing
at DAIR and kind of, you know, what your team is thinking about?
Honestly, I'm not exactly sure where the hype is, whether it's at its height or in decline; it's unclear to me. But I cannot imagine the promises being delivered, or the money that's raining down that they predict being delivered. So I don't know how long they're going to keep this going for.
As to your question about what we're doing at DAIR, I think I said this last time too,
that, you know, it's just like, we keep on being in this cycle of paying attention to what they're
doing and saying no, and we kind of need a different thing. We
can't just continue to do that. And so this year, what we're really thinking about is how are we
putting forward our vision for what we need to do? So one example of that is a bunch of
organizations like Lelapa AI, Ghana NLP, and Lesan AI are thinking about how to create some
sort of federation of small organizations that can maybe have, you know, clients while they don't
out-compete each other or, like, try not to monopolize things. Because the idea, at least for me, is that, you know, I want to push back on the idea that you can have one company for everything, located someplace, that makes all the money. It's about showing how, to take just one example, these smaller organizations' machine translation models outperform those of some of these larger organizations that say they have one model for everything. Because these organizations care about certain languages that those other organizations
don't.
They know the context.
And so the idea is,
what if these smaller organizations can band together and have some sort of a market share?
Because a client can come and say, hey, I need a lot of language coverage. And so maybe that
client would want to go to some big corporation. They don't want to deal with like 100 different,
you know, like 20 different organizations. So we're thinking through like what that might
look like, you know, so that's kind of an example of how can we show like a different way forward,
given that, you know, labor is a central component of all the issues that we talk about in AI.
We have a lot of projects collaborating with data workers. One example is Turkopticon, which is an advocacy group for Amazon Mechanical Turk workers, right? Or we have collaborations with some of the Kenyan workers who, you know, you read about in Time or other articles. And so it's a combination of trying to empower the people that we don't think are currently being empowered, and also the organizations, because we think that we are living in this ecosystem with other organizations. So we also want to support the other organizations that are
kind of advancing a future that we believe in. Yeah, I think that makes a lot of sense. And
especially with the model that we have right now, it's kind of like one major American tech company
has to dominate what's going on. And of course, it almost always has to be in the US,
but a different model can
have these kind of smaller groups that have their expertise in different parts of the world that
care about what happens in their parts of the world. And that leads to a much richer kind of
ability to think about how these technologies are going to affect what's happening there.
So I think that sounds fantastic. Yeah, I'll keep you posted. We're excited about it. We
don't know how it's going to... there are specifics still that we're working on,
but the idea is to help people survive and be sustainable
if they don't wanna be monopolies
and take over the world, right?
Like how do you support the organizations
that are just trying to do their thing,
be profitable, but not like take over the world?
Yeah, and I'll be looking forward to updates on that or, you know, whether you decide to
change gears and just nuke some data centers or whatever.
Yeah. And, you know, as for whether we'll be nuking some data centers or creating this thing, who knows, you know, one or the other. Timnit, always great to speak with you. Thanks so much for
coming back on the show. It's always wonderful to talk to you.
We always cover so much ground.
So thank you for having me
and congratulations on your show again.
It's such a great show
and I look forward to more episodes in 2024.
Thank you so much.
Timnit Gebru is the founder and executive director
of the Distributed AI Research Institute.
Tech Won't Save Us is made in partnership with The Nation
and is hosted by me, Paris Marx.
Production is by Eric Wickham,
and transcripts are by Brigitte Pawliw-Fry.
Tech Won't Save Us relies on the support of listeners like you
to keep providing critical perspectives on the tech industry.
You can join hundreds of other supporters
by going to patreon.com slash techwontsaveus
and making a pledge of your own.
Thanks for listening, and make sure to come back next week.