Tech Won't Save Us - AI Hype Enters Its Geopolitics Era w/ Timnit Gebru

Episode Date: March 13, 2025

Paris Marx is joined by Timnit Gebru to discuss where the AI industry stands in 2025, as AI increasingly becomes a geopolitical football even as the big promises made by AI companies fail to materialize.

Timnit Gebru is the founder and executive director of the Distributed AI Research Institute.

Tech Won't Save Us offers a critical perspective on tech, its worldview, and wider society with the goal of inspiring people to demand better tech and a better world. Support the show on Patreon.

The podcast is made in partnership with The Nation. Production is by Eric Wickham.

Also mentioned in this episode:
Timnit wrote about the TESCREAL bundle of ideologies with Émile Torres.
The Data Workers' Inquiry allowed data workers to share their experiences in their workplaces.

Support the show

Transcript
Starting point is 00:00:00 We have to pressure politicians, you know, we have to pressure the politicians who are elected into office. It's not just electing the right people into office. You know, I'm much more interested in what happens before. Hello and welcome to Tech Won't Save Us, made in partnership with The Nation magazine. I'm your host, Paris Marx, and this week my guest is Timnit Gebru. Timnit is the founder and executive director of the Distributed AI Research Institute. Now, Timnit has been on the show a few times in the past. It's always great to get her insights.
Starting point is 00:00:42 And with everything happening with AI recently, I figured it was a good opportunity to get her back on the show to reflect on these things, but also to think about the wider moment that we're in and, you know, the bigger conversations that we probably need to be having right now, if we're to think about what a better future might look like or could be in, you know, a moment that does seem very dark, very discouraging when, you know, probably we need to be thinking of some hopeful things here too. So in this conversation, we dig into what DeepSeek, you know, this new Chinese AI model actually means for the American and European AI industries, how OpenAI and a lot of these companies seem to be trying to charge ahead
Starting point is 00:01:25 with this model of AI development that they have been pushing for the past, you know, couple of years now, at least. But how, you know, that really doesn't seem to be working out. How, you know, creating these very expensive AI models and AI systems does not seem like it's going to lead to some business that is actually going to make sense at the end of the day. And while they claim that this is going to deliver some big advancement in computation by building these computer systems that are going to be so, so smart, you know, it's hard
Starting point is 00:01:54 to really believe those assertions. And even if we did, is that the future that we really want to achieve? But of course, the other big question here is around geopolitics and how AI is increasingly brought into these wider geopolitical conversations, not just between the United States and China, but increasingly with Europe as well, as the United States becomes much more hostile toward its traditional allies under the Trump administration and with Vice President J.D. Vance. So these are all big questions that are not just technological ones, but that look at, you know, the wider politics and the people who are involved in these particular companies and governments and just the conversations that are shaping
Starting point is 00:02:35 our trajectory in this moment. And so I thought Timnit would be the perfect guest to come on and to try to sort through all these things with me. We had a fantastic conversation as always. I always enjoy speaking with her and I was so happy that she could come back on the show. So if you do like this conversation, make sure to leave a five-star review on your podcast platform of choice. You can share the show on social media or with any friends or colleagues who you think would learn from it. And if you do want to support the work that goes into making Tech Won't Save Us every single week so we can keep having these critical in-depth conversations
Starting point is 00:03:07 on the tech industry and the wider politics of technology, AI, and all these other sorts of things that are on all of our minds, you can do so by joining supporters like Julie in Switzerland, Jan from Berlin, Mac in Lansing,
Starting point is 00:03:20 Eliza from Nashville, Tennessee, Rebecca from Lexington, Kentucky, Karen from Copenhagen, Daniel from Copenhagen, and Omar from Toronto eliza from nashville tennessee rebecca from lexington kentucky karen from copenhagen daniel from copenhagen and omar from toronto by going to patreon.com slash tech won't save us where you can become a supporter as well thanks so much and enjoy this week's conversation welcome back to tech won't save us thank you for having me i mean i guess tech still won't save us so i'll probably be back a number of times. The title remains accurate.
Starting point is 00:03:46 It remains in 2025. Speaking of 2025, I guess to get started, what you make of everything that is going on in this moment, whether it's related to AI or just more broadly, what the tech industry is up to right now, its relationship to the US government. Well, Elon Musk is emperor of the US. So that's wonderful for those of us
Starting point is 00:04:06 who've been talking about him for such a long time. I think it's just a merger of all the worst dreams, like all the nightmares that we've had, right? Not only Elon Musk controlling the information ecosystem, but now controlling all of this data to be able to train his systems on it, and then controlling what the US government is doing, firing all these people. I mean, where do we even start? And then all of the tech tycoons kissing the ring, not pretending anymore, right? Let's just say the mask is off. I don't think anything about them has changed. I hope it's a wake up call. I don't know what else to say to everyone. No, it's certainly been great watching everything that's going on there.
Starting point is 00:04:45 And like being outside of the United States, it's still like watching it. It's like this is so terrible to see, not just because surely there will be repercussions beyond the US borders, but we still care about all the people down in the States and what they have to put up with right now, which is just terrible to see, right? One of the things that really stood out to me with your answer there was Elon Musk kind of moving into the government. And what we see is like time and again, them breaking down these barriers to get access to all of this data within the federal government, and then the wider consequences of what is going to happen there, especially when it seems like all of that data is being funneled into some kind of an AI model that they seem to be trying to train
Starting point is 00:05:22 for the government. I can't even keep track of everything that's going on. Like one morning, people were like, did you see the emails Elon Musk sent about summarizing their work? And then the next day, it's announced that he wants to use some of his summarization tools or whatever to then make decisions. And then it reminded me of opening eyes whisper tool, even just the automatic speech recognition tool, you know, that was making stuff up about patients when doctors were trying to use it to transcribe their notes. And this was actually not a problem we had with speech recognition before is what I'm going to say was speech to text transcription tools, because there were ways in which you aligned the speech with the text, right, which is not a step that opening takes. But then,
Starting point is 00:06:04 of course, you know, it doesn't have to work or not work, like they can just advertise these tools as being able to do this. And so there's that. And then there is the fact that he controls Twitter and all of the things that are coming down the pipeline against refugees and immigrants in this country, and all of the data they're going to use on them. I remember I signed a letter against something called the Extreme Vetting Initiative. It was in 2018. And something like akin to that right now where they want to analyze people's social media posts and revoke visas if you're quote unquote pro Hamas. I mean, I don't even know where to start and where to stop because their strategy seems to be shock and awe, overwhelm us with everything. And so that can make it such that you don't know what to focus on. You're so overwhelmed with all the terrible
Starting point is 00:06:49 things. We're seeing some terrible preemptive compliance instead of saying, hey, well, you know, we're going to challenge you in court. Like we're not going to abide by this or that. The institutions who are most resourced to fight back are saying, oh, we're going to scratch DEI, diversity, equity, inclusion, you know, from our websites, right? Like preemptive compliance, they can't wait to just comply. I don't really know. I'm going to these Elon Musk protests, like there's that take down Tesla thing, you know? I mean, I've been a longtime hater of Elon Musk, so I'm just happy to participate in anything that has anything to do with that. I love that.
Starting point is 00:07:27 How have your experiences been at the Tesla protest? Well, I haven't. I'm going to one tomorrow, so I haven't been to one yet. I just came across this website. What is it? Shut down Tesla or take down Tesla. I don't remember. Tesla take down.com.
Starting point is 00:07:39 Tesla take down.com, which is excellent. Thank you for their service, right? Yeah, so I don't know yet. But I was posting about it. One of my friends told me that she was going to go to the one on Tuesday in San Francisco. So I'm going to ask her how that went. I'm planning on going on tomorrow. You know, the thing that comes back to me now a lot of people are on board with saying that what's happening in the US is bad, right? I'm on LinkedIn now, you know, God help me. I'm also on LinkedIn. It's okay. I'm not judging.
Starting point is 00:08:08 Right. I see you there. We're doing our best. We're trying to build our networks, you know. Exactly, you know, five ways in which you can get up and, you know, be more successful kind of thing, right? When we post about, at least when I post about the US government, less people are saying, oh, no politics or whatever, right? Because now it's affecting them too. But just the way in which people built up Elon Musk, it wasn't the extremists or the Nazis who built him up.
Starting point is 00:08:34 The California government gave him all of the tax money. The Time Magazine covers when Stephen Colbert still had the Colbert report before the Tonight Show. He had him as his guest and man, he was fawning all over him. I couldn't deal with it. I was watching that interview.
Starting point is 00:08:50 He's like, man, you're such a genius. You're doing all these things, batteries and cars and stuff. How do you do that? You know, it's all these people who build these people up. And so it's over and over again that, and then there is no repercussions for the,
Starting point is 00:09:03 well, I guess there is repercussions for the journalists now, but they do that. And then there is a postmortem. Oh, what happened to Elon Musk, and then you know, they're gonna do it again for the next thing. And so how do we prevent this cycle from happening over and over again? I honestly, I'm at a loss for strategy or words. Seriously, I don't know how we stop this from happening again. And that's so frustrating to me as well, right? Because like you, I was seeing all these positive things being written about Elon Musk, the way that he was being framed and being like, why are you not paying attention to like the bigger picture of this man when we
Starting point is 00:09:35 know that there are so many issues with what is going on? And so many people just did not want to see it for so long. And now for me, it's impossible not to see what he's doing today. And to think of the fact that there were so many opportunities to kind of head this off if we hadn't been so willing to build him up and to ignore those pieces of who he was, you know, how he was treating his workers, the management style, how he was treating the women in his life, the ideas that he had about population and kids and how that overlapped with eugenics and these kinds of ideas. Like all of these things were there for a very long time, right? Sometimes I see these stories about
Starting point is 00:10:09 people who own Teslas now and who are getting these bumper stickers to be like, I got the Tesla before Elon like lost it or whatever. And it's like, you know, exactly. Like he's been this guy for a long time. He came to Stanford when I was a PhD student. And my God, it was like, I was like, wow, it'd be it must be so nice to just be able to make these statements was not having to back them up. This was I believe, in 2015. He was like, we're gonna cure cancer in 10 years, just like that. Oh, are we doing that by cutting the cancer research funding right now? Is that what you're doing? Oh, self driving cars are a solved problem. He said that. I should have told you for your book when he said that, just like that in 2015. I mean, all the other stuff he said, and he's just, in addition to being a terrible person, he's just such a, well, I'm trying to
Starting point is 00:10:56 choose my words here. I'm not exactly sure what to say. He's not the smartest guy either. People just treat him like he is, but he knows how to get the money and get the credibility because what does that say about our society? We love narcissistic men who just make these kinds of claims without tempering them or without backing them up. So it's on us really,
Starting point is 00:11:17 and our societies for building up people like that. In my circle alone, there are so many of these people who heard me harp about this guy, his workers, all of the things you mentioned, and still bought a Tesla right around then. They could have bought some other kind of electric car, but they decided to buy a Tesla. And now it's a badge of shame, which as it should be. Hopefully they dump their Teslas and join the rest of us in their protests. Yeah, I hope so too. And obviously I relate to
Starting point is 00:11:45 everything that you're saying. I'm sure you did a whole series on him. So yeah. I wanted to pick up on something that you mentioned in one of your earlier answers, which maybe for someone who is less kind of technically inclined, they won't understand why this is. But you mentioned that before these kind of generative AI models that are doing this kind of transcription, we weren't having these problems of theative AI models that are doing this kind of transcription, we weren't having these problems of the models making up all of this text and making up stuff that they're finding in these recordings when we were using kind of, you know, an older style of transcription. Can you explain why that change happens and why these newer models do end up making all this stuff up and why that is. People call it hallucinations,
Starting point is 00:12:25 you know, bad terminology. I don't love the word. Yeah. Yeah. But every time I heard about hallucinations or we talked about hallucinations, it was in the context of generating things, whether it is images or some kind of output. Whereas speech to text wasn't like a generation problem, right? It was like aligning the speech to a text, a certain amount of speech is input and a certain amount of text that is aligned with that speech is output. So there was an alignment step where you want to make sure that a certain portion of the speech is, you know, aligned with a certain portion of the text. Whereas in this age, everything is AI and nothing is AI. I don't know,
Starting point is 00:13:05 you know, like everything is AI, right? It's such a marketing term. Before neural networks became synonymous with AI. And then it was like the next level was large language models became synonymous with AI. Now it's generative AI systems. So AI seems to be anything that is generating languages or generating stuff. So now OpenAI's model is based on large language models once again. And I'm not saying that like language modeling is a task that I don't have any issues with. But in this case, you have this one big hammer and that's the only thing you're using for anything that you're doing. So imagine if you had this alignment step, you wouldn't be in a position where there's like one minute of text that is just made up from the speech, right? Actually, I mean, we're writing our research fellow who is also a founder of a language tech company called Lesson. I asked him
Starting point is 00:13:57 to actually write and explain why this happened because his specific expertise is natural language processing, whereas mine is more in computer vision. And so if you had this alignment step, you wouldn't have large swaths of text that don't align with the speech that you're trying to transcribe. And secondly, if you were curating data for this specific task, and you were only caring about one task, which is speech transcription, and that is the specific model that you built, you'd be less likely to do that, right? I'm very behind in the times because the hype is moving so fast that the terminology is like, I keep on saying this pre-trained models with what we used to base models or pre-trained models, which doesn't engender any kind of like
Starting point is 00:14:39 hype in anyone's brain. And then it's foundation models, which some autocorrect correct it to foundational, which is just convenient, right? And now it's frontier models. But actually, I think we've even like outdone frontier now it's agentic or agent, whatever models so and so in this age of like trying to say that you've built a machine God, you're using this one big hammer for any task, right? So you're not building the best possible model for that best possible task. So in my opinion, these are some of the reasons that we're in this situation now where we're having a provenance or like an integrity issue of the output when we didn't have this issue before. I mean, I'm not saying that we didn't have issues
Starting point is 00:15:21 of mistakes with your speech transcription, but that's different from making up large swaths of text and especially stuff that's like a racist. Like I was reading, you know, in this medical transcription tool that if it was a black person, it would make stuff up about, I don't know, gangs or some other stuff that the doctors didn't write. And you couldn't even go back to what the doctors said. Like they don't remember everything they said, right? Because they're just transcribing. So I don't think the audio was even recorded from what I read.
Starting point is 00:15:52 So this is the kind of stuff we're dealing with right now. My God, that's so terrible. Like, as you say, right? Like even with these older models, like they weren't perfect, right? You could feed a recording into it and you'd get something that kind of looked like it, but you'd still likely have to go through and kind of massage some of the words and fix them up
Starting point is 00:16:08 because everything wouldn't be right. But you would have the right number of words to correspond with like what was in that recording. It wasn't making up whole swaths of things and inserting it in there as you're explaining, right? And, you know, it helps people to understand like what these generative AI companies and models are doing is distinct from the way that things were approached before we kind of entered this moment. And I really liked what you said there about how this is a common issue that we have been talking about for a while with these large language model companies, OpenAI in particular, how they're trying to build these models that can do everything because, you know, as you say, they're trying to pursue this notion of like the computer that is a God that is all knowing that reaches human intelligence, at least
Starting point is 00:16:48 instead of being like, how can we build a model, a form of AI that is really good for this specific task? And maybe it's not going to do everything. Maybe if you try to apply it to things outside of that, it's not going to work. But for this specific thing that we need it to do, it is going to do a good job and that is sufficient to us. But for these companies, that is not something they're okay with. No, because you know, you have to convince people that they need to give you $7 trillion. So who's going to give you something? I mean, if you have to make it sound like as goddish as possible, you're going to be like, well, you know, we're gonna have this speech transcription tool, who's going to give you $7 trillion for that you have to say
Starting point is 00:17:25 frontier agentic AGI, this, this, this, that and then you have to like sprinkle some hype in the air so that it's all an echo chamber that they're all hearing the same thing. And then you know, some AI race like the US China, you have to make all this stuff up for anybody to be convinced to give you this money. And that's not even enough. So now OpenAI, I just saw, is gonna have like a $20,000 a month tier for what they call their PhD level service or something like that. And to me, this says that it's not working
Starting point is 00:17:54 with this level of funds and hype. They're still losing money. It's not working. And so now they're trying this other stuff out. Going back to DeepSeek, I hold two simultaneously views that seem to contradict themselves. On the one hand, I love that it happened. It's like a little poke in the bubble, right? It shows you how fast things can change. I can just imagine the anger one would
Starting point is 00:18:19 feel if overnight it's these people whose names you don't even know. You don't know where they went to school. They didn't publish at even know. You don't know where they went to school. They didn't publish at your conferences. They didn't go to the UN when you were like gallivanting or Davos, where they all go with their private just to talk about inequity and climate change or all of these things, the Nobel prizes, whatever, they're all echo chambers of each other. And so the same people are amplifying each other and other people who say something else, you're kind of ignoring them as a nuisance. And out of nowhere, these people who don't have the credentials that you've been telling people that you need to build these
Starting point is 00:18:55 things, you even convince the government to have tariffs on a whole other country because you're saying, you know, we shouldn't give them the best GPUs. They beat you at your own game. And they're all such sore losers. Like when you hear what they say, they're like, well, they're not really innovating. And it's not just OpenAI who's saying that. It's all a lot of the leaders in machine learning who are European, who are American, who, again, feel left out of what just happened, right? Because it had nothing to do with them. And so they're sore losers. Innovate. If someone, a deep mind, had A, used a different architecture like they did based on pure reinforcement learning, and B, went around CUDA, the hardware language, right, for GPUs, just one of those, people would have written about it like these people are geniuses. But now,
Starting point is 00:19:43 downplaying everything, right? Except what it showed though, is that I believe it was like the most downloaded app at Apple or something like that. People still wanted to download it. On that end, that particular aspect, I love that that happened. But on the other end, then you can tell people, look at what they did. You don't need to go to these conferences, to these circles, ask for their money, burn all of these billions. You can do something different. On the other hand, I imagine what if people's imagination wasn't so captured by this stupid LLM, large language model race? Like this is the one task everybody needs to work on. What could the deep seek people have done differently? They showed us that if they are
Starting point is 00:20:22 not so constrained by the resources that you think you need to have, that you can do something better like this. But their imaginations are still captured by the we have to do LLMs thing. So what if that wasn't the case? And secondly, of course, they're stealing data like everybody else. I thought it was hilarious when opening, I was trying to be like, only two core data. Like, come on. Yeah. And then they stopped saying that, right? Because they saw like how ridiculous that was. And nobody was buying it. Like, nobody cared. No, but I was like, what? Okay, sure. What if they did also? You stole our data. So, you know. Yeah. You've been spending the past two years going around the world telling governments
Starting point is 00:21:00 they couldn't protect anyone's data or copyrighted works or anything. And now you're mad someone did it to you. Right after the Nobel Prize, like Google didn't even spend a day right in their copyright. I don't know what lawsuit this was. They submitted something saying even the Nobel Prize saw the need for us, you know, we're helping the world. So we you shouldn't constrain us in this way kind of thing. And they have some nerve now to when other people are giving them a taste of their own medicine to complain about it. So I think they just stopped that because they saw that they weren't getting any traction for it. I hold these two views there. I don't even remember why I brought this up. But maybe it doesn't matter. Okay, but I still think it was fascinating to hear you outline and I want
Starting point is 00:21:44 to come back to the second piece of that a bit later, I think. And I want to return to the first and you were talking about DeepSeek. And of course, you know, people will be familiar with this model that came out of China that really shocked a lot of the European and American AI industry, as you were talking about, because it was doing things more efficiently than these companies were. And, you know, it was using graphics processing units, if I understand or remember correctly, that were not as powerful as the ones that are typically being used by these American and European companies that are trying to advance this notion that you've been talking about of needing to build these large foundation models with these large
Starting point is 00:22:19 language models that can do everything and all this kind of stuff. But I wanted to ask you this as well, because I feel like I'm also seeing two different stories or two different threads play out. And I'm wondering how they fit together. Because on the one hand, you have this development, this innovation by the team at DeepSeek that created this model that is more efficient, that had the European and American companies kind of looking a bit scared there for a little while. And then on the other hand, you still have OpenAI and Sam Altman announcing GPT 4.5, saying that it's a giant, expensive model, that they're getting hundreds of thousands of new GPUs to run it, which really seems kind of out of sync with this notion that we can start to do these things more efficiently. So what do you make of these two pictures and what we're seeing there?
Starting point is 00:23:05 What's happening with open AI right now? Yeah, that was interesting. And then he was like, heads up. You know how he uses no capitalization on Twitter? His brain is too cool, you know? He's too cool for that. Too cool, right? That's for the rest of us.
Starting point is 00:23:19 He's like, heads up. This model doesn't crush reasoning benchmarks. I felt like I was listening to a human give good advice or something. I'm like, well, you spent how many billions of dollars to have a human sound like giving you advice? I think I saw someone being like, man, I think someone told these people that they have to make a friend and they took it too literally. They're like, we have to make a friend, you know, my thoughts here are one, if you're someone like open AI, or all these people who have, I don't know if you're convinced, but you like to believe that the scale is all you need approach,
Starting point is 00:23:59 you know, that you need more and more GPUs, more and more data, etc. is what you need to do. You want to believe that because that is the only way that you can convince everyone that you have an edge that nobody else is going to catch up. I remember someone telling me that this was on the slide deck of Anthropix pitches to investors. And this was like a couple of years ago. If people don't get into this game now, they'll never catch up. This is your fundraising strategy. You want to believe this and you want to also starve everybody else of GPUs. So I don't anticipate that they'll change this approach. I don't anticipate if you want research on more efficient and less resource intensive models, it's not going to come from these people because I just
Starting point is 00:24:43 in my opinion, they have nothing to gain from you knowing that, you know, except for having more competitors. I think that they'll continue to say, if you can make whatever we did before more efficiently, well, we're going to do something even better that you can't do like with more GPUs. So I think that's their play, which I don't imagine changing anytime soon, except maybe if this bubble bursts, which I really hope it does. So do I. But I find that so interesting, right? Because I feel like we have these stories about how when OpenAI launched that $200 a month subscription, as you're saying, now they're planning to offer $20,000 a month ones, the
Starting point is 00:25:19 price keeps going up. But even then, with the $200 a month, they admitted that that was not profitable, right? That they weren't actually making money on this. We know last year, they lost $5 billion alone, because they are trying to pursue this model where you need to build these massive foundation models that require so much compute that requires so many GPUs that is so expensive, and we just kind of need to take it on faith that eventually they'll find some way to like properly monetize this and make money back, which seems like a really terrible way to kind of build a business and expect that you're going to have something down the road that is going to pay off. There was this announcement at the White House recently of this plan to invest $500 billion in this Stargate like data center
Starting point is 00:26:00 project to again, like fuel these ambitions that OpenAI has for what its vision of the future of AI should look like. And to me, it feels like looking at something like DeepSeek, this really leaves the door open for other companies around the world to do something that is much more efficient, that, you know, maybe doesn't work as well as some of the models that OpenAI is putting out, but works well enough for the vast majority of people who would want to use this kind of thing and is actually like somewhat reasonably affordable to actually run in a way that it seems like
Starting point is 00:26:33 none of these OpenAI models will be able to do anytime soon. So remember, when OpenAI was founded, they don't want to be a business. They want to build AGI that's beneficial to humanity. And in fact, and I don't even know if this is in their charter anymore. If somebody else gets to build AGI that's beneficial to humanity before them, they would consider their mission accomplished. That would be accomplishing their mission. So they are collaborative. They don't care who gets to beneficial AGI first. They just want beneficial AGI. That's been in their charter for a long time. I don't know if it's still there, but it definitely was there. Their story has been that this is going to bring humanity utopia. So it's not about business. They lose $5 billion, it's worth it. I mean, we're talking about humanity with a capital H. Who cares about
Starting point is 00:27:19 these couple of people? You're taking up too much water, building your data center, whatever, you know, we're bringing utopia. I guess that only lasts a few years, you know, and investors don't care about utopia, they want money. And so at some point, these investors are like, okay, where's our money? And I know they have SoftBank. And they have, I read that SoftBank is putting in 3 billion. I'm not sure if I have that accurately. And I think Saudi Arabia is one of the biggest, they have the most amount of money in one of the most amount of money in SoftBank. And so, you know, you ask them like, you know, yeah, the Saudi Arabian regime and beneficial to humanity, that's a very interesting combination, right? But no, that doesn't matter. It's like when these like US government congresspeople and stuff are like,
Starting point is 00:28:03 we don't work with dictators. And it's like, what are you talking about? Like, you can look just at Saudi Arabia, but there's so many other instances where the US works with, yeah, dictators all the time. You know, they're not called dictators when they're your allies, right? It's just like when I was writing this thing for the New York Times, when I said totalitarian government about the Eritrean government, there was absolutely no editing or any like, yeah, sure. You know, anything I said about Israel, oh my God, let me tell you all the editing apartheid. Oh, bells and whistles. Genocide.
Starting point is 00:28:36 Oh, but I don't, you know, I'm just like, my God, why are you guys? And I would ask like, well, you didn't have any issues with me saying this about the Eritrean government. And you didn't ask for any, you know, sources or anything with respect to OpenAI. The one other thing I wanted to say was it was very interesting for me that Sam Altman talked about, you know, that they don't even shatter reasoning benchmarks, because to me, the word reasoning is not even well defined. I don't even know what they mean when they say that these models are good at reasoning or something like that. And with that undefined kind of not well-defined term,
Starting point is 00:29:11 a bunch of works have shown that these benchmarks are not really good at testing for these things, given that nobody is asking what kind of data leakage there is, as in whether OpenAI is basically testing on its training data. How do we know that they're not even training on these reasoning, quote unquote, reasoning benchmarks and memorizing everything and then saying that they're shattering them? Even with all of these issues, he's giving us a heads up that they're not shattering these benchmarks. I just don't know what it takes for the hype to die down. You know, I am seeing all of these celebrities and artists, and I don't know if you saw like the whole AI Action Summit kind of situation,
Starting point is 00:29:51 which was another thing with, I think, was like France trying to position itself as an alternative given what's happening with US and China or whatever. But, you know, it was like all of these company leaders, like Sundar was there, Pharrell was there. But everybody's imagination is just captured by the hype coming from open AI. And I don't think anybody else is as good as Sam Altman at capturing people's imagination with this hype. And I like wonder what are things we can learn from this? You know, why are we not doing that? Totally. It often feels like one of the big innovations of Silicon Valley is around like And I like wonder what are things we can learn from this? You know, why are we not doing that? enthrall to these companies in this sector and whatever they're doing on that point you know you bring up the ai action summit and of course jd vance the u.s vice president was over there gave this big speech where he basically laid out that the united states was going to lead in ai you know
Starting point is 00:30:56 was going to kind of like dominate this industry that europe could certainly work with them but was not going to be able to like lead on its own, that this was like a US thing. And, you know, the US was going to keep dominating these technologies of the future. And you talked about the hype, right? And I felt like the hype was starting to wane at the end of last year, you know, through like the second half of last year, but it feels like this real embrace of AI and this notion that generative AI is this geopolitical thing that America wants to control, that China wants to control, that Europe wants to control. And this has been even brought more to the forefront with the Trump administration and its allegiances with Silicon Valley,
Starting point is 00:31:36 that this seems to be a moment where even if it's hard to prove that there's really much benefit from these technologies, that we're actually seeing the types of things that Sam Altman was promising a couple of years ago, that regardless of whether that actually arrives or not, I think we're not really seeing it arrive, that because these governments have embraced it and made it clear that they believe this is like key to their power in the next whatever decade or so, that this is maybe going to keep the hype going. Like, I wonder how you feel about that and this positioning of AI as this like geopolitical fight, I guess. Yeah, you know, I've been talking about the test grill bundle for a long time.
Starting point is 00:32:13 You know, the transhumanist, effective, let's say, extropianist, let's like remember, I know you had a meal on the show too, singularitarianism, where the singularity is coming. See, Cosmists and then the Rationalists, Effective Altruists and the Long-Termists and all that. And these people have been selling AI as a geopolitical war kind of thing. When did they write the pause AI letter? Is it March 2023 or February 2023? I don't know. I just don't understand how uncritical the mainstream media is. And then Elon Musk signed it and everybody's like, yeah, we have to be worried and this and that. And then a few days later, he announces XAI. And Maximilien Curious, I don't even know what he said. I wonder if he writes these things high sometimes. And so now, because they're all so worried about existential
Starting point is 00:33:00 risk to humanity, that's why they founded OpenAI. And now there is a fallout between the accelerationist camp. Now he's joined the accelerationist camp, I guess that says just like build it, you know, we need more AI more or whatever. Don't worry about safety. These people, I've always said they're two sides of the same coin. The same people are like, Oh my god, we need to worry about existential risk, which means that we have to be the ones to build it. Because if we build it, we bring utopia. China is going to be the bad ones. Let's not let China build it.
Starting point is 00:33:32 Right. And then they flip flop between like, let's accelerate because that's when we're going to save humanity, blah, blah, blah. And now we're seeing the other side of that coin with Elon Musk. So he's had a falling out with the so-called safety people. I hate this AI safety because the scam is we should build AGI spend the trillions and the billions and the whatever. If you go to the effective altruist website, literally, they told their people that to save the world, one of the top things that you need to work on is AI safety. And then the other ones I think they had there were geopolitical US China stuff, right with respect to AI safety. And some of the places they said you really need to work at to do this are Anthropic, OpenAI,
Starting point is 00:34:11 join their safety teams because they care so much about safety. And so they've been working through this geopolitical stuff for a long time. And it really helps the companies because we talked about DeepSeek before, what I think is going to happen now is Opinion is going to use it. And they've already started saying this stuff to say, see, DeepSeek has done this. Don't even think about regulation. And don't even think about telling us we can't use enough nuclear energy now or whatever. I know one of the things that apparently Macron was saying, talking about nuclear power and how they're all clean and stuff, which irritates me to my core, because they're not mining the uranium in the Louvre or some nice cafe in Paris, right? Like, they're doing it in their colonies, basically, where everybody's dealing with radiation and
Starting point is 00:34:55 pollution issues and all of that. So this geopolitical situation has been progressing for a long time, right? Now the change is that the Europeans are like, oh, so US is no longer with us. So we have to do our own thing. There's US, there's China, and there's us, and there's Russia, you know? So I think that's kind of how it's being positioned. This thing that is advertised to do magic is being positioned as the geopolitical asset that everybody has to have and fight over, you know, not water, you'll give water to this thing, but not the humans, you know, like somebody was telling me, you know, I was just in Taiwan. And I don't remember, like someone was telling me that they had a drought a few years
Starting point is 00:35:33 ago. And in the city where the data centers were, at some point of every week, people were allowed to have water for two days, and the rest of the days were for the data centers. You know what I mean? We're gonna see this at a larger scale with this whole geopolitical repositioning that's going on. Being up here in Canada, water is continually something that we worry about and talk about in the sense that there's always the concern that the United States is going to try to take our water or pressure us to, especially as we see what the United States is doing now with the tariffs and the threats around the 51st state, we know that water has come up in conversations between Canadian and American officials. So obviously, this is a much bigger picture thing than just what's going on with AI. But like that
Starting point is 00:36:14 is always on the forefront to you were mentioning there this notion of like AI safety, right? And how we have had this framing that have the accelerationists, right, who just want to push this regardless of whatever. But then you also have the AI safety people who also want to push it. They just want to do it safely or something. And then, of course, you people like you are framed as AI ethicists. Where do these framings come from? And like what comes out of framing you as an ethicist over a safety person or something like that? Like how does that make you framed in a certain way when you are like making these criticisms versus what these other people are doing
Starting point is 00:36:48 and how they're then positioned or seen? I've never called myself any of these things. All I know is I went to school to do electrical engineering. You know what I mean? And then I did computer science too and all this and I did my job and I went to work at places and I started seeing things and I was like, this is not what I want to do. Oh wait, this is not the thing I want. You know, I kept on saying that,
Starting point is 00:37:08 right? So when I became an ethicist, which is like the most ridiculous thing to frame me as, like, I cannot tell you what an ethics curriculum looks like, what classes you should take that when I think about ethicists or something, I'm thinking about the trolley problem and whatever to me, an ethics is like a nice to have a Well, you have to be ethical. Should you give this thing to this person or to this other person? Well, let's think about different ethical framings. Let's think about pros and cons. That's how I think of it. Anyway, I don't know. I'm just saying that's what it engenders in my mind. What about safety? Is it safe for your kid to go to the pool when they can't swim? Well, no. Is it ethical to say no if someone wants to eat your food when you have this much versus that?
Starting point is 00:37:53 Well, it's up for debate. You know what I'm saying? So this is what it sounds like to me. And it is so insidious how they have framed themselves as safety. The same people who have the same degrees as me, or not even. Actually, many of these people do not have the kind of experience that I've had. Products in the, you know, I've shipped or built or whatever. But they're, somehow, they're safety.
Starting point is 00:38:15 Hypothesizing about super intelligent machines and whatever, I don't even know what they're talking about. That's supposedly safety. But when Joy Bulimini and I write a paper showing systematically how the errors of face recognition systems are much higher on darker skinned individuals than lighter skinned ones, and literally saying that this is going to lead to false arrests of darker skinned people
Starting point is 00:38:38 who will be falsely flagged as criminals on security cameras. That's an AI ethics thing. Is it ethical to do that to them or not? You know what I mean? I've been very irritated by it for a long time. And now that they've created that framing, I don't ever want to be associated with the so-called AI safety people
Starting point is 00:38:55 because I'm like, I'm no, I'm not that. And then I'm like, but it's a word called safety that they've hijacked. Safety is a good word. Let me tell you the words that they've hijacked. The effective altruists have hijacked effective. A great word. Future is a word that I don't like anymore. I mean, future is a normal word. Flourishing. There's all these keywords they use. It's extremely insidious the way they've done it, right? And so now, I mean, is a government going to be concerned about
Starting point is 00:39:22 something that's ethical, something that's safe? Clearly, they're going to consider something that's safe versus not for their population, for the geopolitical regions and all of that. You know, ethics is not like a legal situation or what is it? So that's the kind of stuff they've managed to do. And let me tell you that initially, all of the white dudes were the ones doing safety. And all the women, especially the women of color, talking about colonization, climate change, racism, policing, those are matters of ethics. Okay, should you kill all the black people or not? Well, that's ethics. Should we be concerned about robots taking over? That is a matter of safety, Paris. So it's just so insidious. And then when journalists talk about a fight between the AI ethics and the AI safety people or whatever, Emily Bender wrote a
Starting point is 00:40:11 whole post about that, a blog post, that framing is just so insidious. So you know, I reject it. I've never called myself as an AI ethicist or whatever. I'm definitely an electrical engineer. I'm a computer scientist. And you can call me any of those things. I'm not an ethicist. I don't have that training. I've learned from a lot of other people. I've learned from people like Ruha Benjamin, who I don't know if she's an ethicist or not. She's not. Science and technology studies, right? Or information. Sophia Noble has a PhD in information studies. That means she knows more about information, the internet, how things are organized than any of these people. I don't know if they even categorize her into any of these things. So I've learned a lot from these people, but I remain an electrical engineer and a computer
Starting point is 00:40:55 scientist. And I kind of feel like they want to erase that, right? They want to erase my authority on some of these subjects. Yeah. Take away your technical understanding and make it seem like, oh, you're just concerned with the ethics and what's right and what's wrong. Hand-wavy kind of stuff. Yeah. Exactly. Exactly. When not only do you understand the technical underpinnings of these things, but like, because you are more critical than the people who are running these companies would want you to be, because you think about these things in a much broader way than like, are we going to build the computer god and realize our sci-fi dreams, then you're kind of outside the realm of conversation
Starting point is 00:41:31 that they want to have. And so it does feel like ethicist immediately kind of has been used as like a branding to be like, these are the people you can listen to a bit, but like, you don't really need to worry about them because they're off like doing their own little thing. And we don't really care so much about that. We're the serious ones, right? Yeah, we're the serious ones doing matters of safety. Of course, there are ethics problems we do care about, but those are not safety. And I have to say that I don't rank people's expertise based on whether they know the math or the computer. You know, I don't.
Starting point is 00:42:03 I have equal respect for everyone in that arena. You know, I've learned so much from artists, from all sorts of people. But it's just that my expertise happens to be a specific thing, you know, and that is the lens with which I am analyzing these things. And so this weird branding as ethicist, how do you take an electrical engineer and change them into ethicists? I don't really understand that. Or a computer scientist, you know?
Starting point is 00:42:30 It's a very interesting thing that has happened over the years. And it started happening when I started talking about these issues. It's been really fascinating to see, like just generally like how this industry uses language and deploys language in very useful and convenient ways, right? You can think of the term artificial intelligence. You can think of terms like even how everything with the internet and it is smart, apparently. Smartphone, smart TV. Frontier models, foundation, agentic or whatever is happening at this moment. They have very good PR teams. Let's put it that way.
Starting point is 00:43:02 That's not what we have. We need to figure this out, you know, because so I was thinking about this, right? Like when you're telling people what not to do, they're only thinking about what they're losing. When we're thinking about like cars or road self-driving cars, we're just saying this is bad, your pollution. How can we tell them what they gain? I'm imagining if there were no cars, no road space, like more trees, more people playing, more hanging out with your community, your neighbors, better
Starting point is 00:43:32 transportation, no traffic, like bullet train. I just feel that we have to be better at painting this picture of what people gain. I've been trying to think about what these people are good at, right? Because that's what they're good at. They give you this imagination of what they think you will gain or they, you know, and people want to think that they can be like Elon Musk, a billionaire flying around his jet or something. I don't know. At D.A.R.E., we're starting to, we're doing this possible future series to just get us to think personally, like to imagine because we're too stuck in this, like, don't do this, don't do that. And, you know, we can be imaginative and innovative too. That's why I became technologists, like not to stand on
Starting point is 00:44:10 the sidelines and tell other people what to do. So how can we reframe our messaging to say, this is what we gain by not even having cars. I mean, it's good to, to show the history and to show why it's bad and all of that. But I think we have to allot a lot of time in our messaging to kind of paint that picture of what does a city without cars look like? Let me visualize it. Let me see what my life would look like in that kind of world. Not to kind of cut off that point. Obviously, this is very kind of big thinking, but I feel like it even relates back to what you were saying earlier when you made those two points about like the deep seek moment and what it showed us. And you know, your second point was obviously
Starting point is 00:44:47 that it showed that maybe if we weren't so focused on this, like LLM model, and the notion that LLMs need to do everything that maybe there could be breakthroughs that we're not seeing, but because everyone is kind of forced into this particular funnel and this way of doing things, because that's where all the hype and the excitement is right now that there's likely a bunch of things that are lost. And I feel like that's not even just the case with like machine learning and work on these like AI tools and products and models and what have you. I think of that with technology in the tech industry more generally, because so much of it is forced into what this model of technology that gets kind of Silicon Valley people going is. And so that is
Starting point is 00:45:26 where like all the excitement and the energy and the resources and the investment goes when technology could be this much wider thing could be deployed and developed with very different ideas for how it could be used. But because Silicon Valley is what drives this and this notion that like it needs to work for whatever model they have developed that feel like leaves us very impoverished in some ways. Asma, who is the founder of Lassan, the language tech company that I told you had attended the African AI village, which is like a side event at the AI action Paris summit, you know, he was telling me it was debriefing with me. And one of the interesting things, you know, people were talking about how they don't have resources, they don't have GPUs, they don't have data centers. And I was thinking, I'm like,
Starting point is 00:46:07 shouldn't they be at the epicenter, the leaders of low resource, limited data research. And I was thinking, you know, before deep learning took over, and I think it was like 2012 was like the inflection point in computer vision, for instance. Now, let me just do an aside and say the goals of computer vision have always been horrible. It's like based on surveillance and all of that. The methodology was to let's say you want to see if there's an orange in a in an image or something. I don't know. First, what you want to do is you want to learn some feature representation of an orange that is the same kind of similar for all oranges in any kind of image, right? And then a car has to be in a different kind of feature representation, you would come with mathematical
Starting point is 00:46:49 representations for this feature based on your knowledge, and you wouldn't need any data to train this. That was kind of the paradigm in the beginning, right? And then Jan LeCun was complaining that all of his papers were getting rejected from the computer vision venues. Like in 2012, he wrote an open letter, you know, it's still too demoralizing for students. He's not going to submit these papers. These people are hostile to deep learning, but they don't understand them and whatever, you know, stuff like this. Now, the only paradigm that exists apparently is that we have to learn these features from a whole bunch of images. I do wonder why can't a certain segment of people just go back to that
Starting point is 00:47:27 philosophy and vision and just work on it, regardless of whether their papers are getting accepted right now, whether people listen to them or not understand the extent to which it works when it doesn't. Guess what? Google's not going to do this because they would hate for this to happen. They have all the data. That's their competitive advantage. OpenAI is not going to do this. None of these people are going to do this, right? But like you can go back and ask these questions. If you poured all the money, the resources, all the minds were stuck in this one direction into this other direction.
Starting point is 00:47:58 And then you figure it out. Like it's first of all, 100% explainable. You know exactly how you appeared. You arrived at this answer, right? And then secondly, you require no training data. You need to have evaluation data. That was kind of the road we were on, right? And then now we're on this other road
Starting point is 00:48:15 because it showed that this other road did a lot better in some benchmarks, but it doesn't mean, you know, why have we completely given up on this other road? So, you know, I just, our imagination is completely captured. I find that so fascinating, right? It's one reason why I love to talk to you and learn from your knowledge of this space. But it's also why I really like to pay attention to
Starting point is 00:48:34 what you guys are doing at DARE, the Distributed AI Research Institute, because I feel like I pay attention to the projects that you guys are doing. And it gives me such a different picture of what is going on in this space with the different approaches to AI, the focus on the workers. You know, you talked about this project that you're talking about now where you're looking at the future. Can you give us a bit of an idea of what you've been doing at DARE lately? You know, not just you specifically, but the folks there and the kind of issues that you have been looking into? At DARE, we try to do both mitigate the harms of AI systems and like forge a different idea for us of what technology should look like.
Starting point is 00:49:10 We try to do two because one is very easy to do, because there's too much shit going on. Sorry, you know, it's just dumpster fire after dumpster fire. So in terms of one, even when we talk about mitigating the harms of AI systems, right now we're doing it in a way where we're trying to figure out, okay, what is our path forward? So for example, you mentioned workers. One thing that's really important for us at DAIR is to make sure that people speak in their own voices. So I don't
Starting point is 00:49:39 want to speak on behalf of the workers, I want to make sure that they get to speak on their own behalf. So Mila is a researcher at DAIR. She's also in Berlin, at TU Berlin. She really wanted to do this Data Workers' Inquiry project. It was like a whole Karl Marx thing, you know, like the workers' inquiry he did. Yeah. So it was based on that. And so we ended up having workers from four continents. If you go to data-workers.org... Yeah, I'll put a link to it in the show notes. You know, people wanted to write reports or animation shorts, or they just chose whatever medium they thought was best for their message. And then Kenya is a big hub for this kind of outsourcing because there's a lot of highly educated English speakers. So you even have ghostwriters writing, like, college essays for all these rich kids in the US.
Starting point is 00:50:28 I mean, there's a whole bunch of stuff going on there. So the Data Labelers Association just launched. One of the most beautiful things I saw during that launch was a data worker from Italy talking about how inspired they were by what's happening in Kenya. So we're trying to foster this type of cross-border solidarity, because what happens is that workers in one part of the world stand up, and then they shut that facility down or whatever, and then they go to another part of the world with other vulnerable people. We're trying to figure out how to help connect all of these workers from around the world. And I remain, as I've always been, a believer that the labor movement is one of the most potent ways that we have to counter any kind of harm, because these things cannot be built without our labor. Yes, politicians, you know,
Starting point is 00:51:20 we have to hold them accountable. But that's after the fact. We have to pressure politicians, you know, we have to pressure the politicians who are elected into office; it's not just electing the right people into office. You know, I'm much more interested in what happens before, like the students, the workers. And so even now, workers can still stand up at Google, at Facebook, wherever. If we believe this, then we need resources. There's a lot of workers who ask us about what to do with respect to AI and their work contracts, so we're working on resources for them. And we've been sort of doing one-offs, like we worked with the nurses' union one time. And so we're trying to take these learnings to, like, give some sort of curriculum for workers. So Alex is working on something called the Luddite
Starting point is 00:51:57 Lab. You know, they call us Luddites, but whatever, wear it proudly, you know, and so that's that. And I mentioned the possible futures work. We're going to have an event, I think, sometime in April, I think the second week of April. Dylan and Pauline on our team worked on an interactive workshop that we're going to announce, for small groups of people that they can sign up for. I think we all need to be more imaginative, right? So it's just to build that muscle. And so we'll kick that off sometime in the spring.
Starting point is 00:52:28 I think I've been talking about this for a long time. We're still working on a federation that we're calling the Huniki Federation. Huniki means everyone in a South African language that only has one speaker left. And so this name came from one of our, you know, collaborators, Pelonomi, who has a company called Lelapa AI, which is a South African company. And this one, again, is to counteract the whole one-size-fits-all large language model, the one-model-for-everything approach. We're really trying to support companies and organizations that want to stay small, to build quality over quantity for their communities, and then, you know, discourage them from trying to be monopolies, right? These are a few of the things we're working on right now.
Starting point is 00:53:05 And yeah, I'm out of the loop too. I'm learning about agentic AI. I'm about to read Margaret Mitchell's paper on why there shouldn't be such a thing. So I was just asking her, I'm like, when did we move to agents? Because I've only heard about agents in the context of reinforcement learning.
Starting point is 00:53:21 So what are they saying now? We went from frontiers to agents. Like, when did that happen? That whole DeepSeek thing, I'm not happy about the data theft and the LLM thing, but it's reinforcing my view that I don't need to always be in the loop. I don't need to go to their conferences and watch what they're saying all the time, because that infects my mind and my imagination. I love that. And I feel like that's something to reflect on too, right? Because I feel like sometimes people are so obsessed with making sure that we're paying so much attention to these powerful people. And certainly that is important to do, right? And making sure that we're trying all the new tech products to be able to understand
Starting point is 00:53:57 properly how they work and stuff. And it's like, you know, sometimes maybe we can also just take a step back and not be constantly immersed in this kind of ecosystem. Some of the work that you guys are doing at DAIR really helps to do that too, right? By widening out people's scope and saying there are many other ways to look at what is going on here. So it's always great to see the work that you're doing. And it's always great to speak to you, Timnit. Thank you so much for taking the time to come back on the show. Thank you so much for having me. It's always great to be on the show and to hear your show, because it always feels like we're not crazy and that we're all in the same boat. We're definitely not crazy. Thanks. Timnit Gebru is the founder and executive director of the Distributed AI Research Institute.
Starting point is 00:54:39 Tech Won't Save Us is made in partnership with The Nation magazine and is hosted by me, Paris Marx. Production is by Eric Wickham. Tech Won't Save Us relies on the support of listeners like you to keep providing critical perspectives on the tech industry. You can join hundreds of other supporters by going to patreon.com slash tech won't save us and making a pledge of your own. Thanks for listening and make sure to come back next week. Thank you.
