Ideas - Why we should 'fight like hell' against Big AI

Episode Date: March 25, 2026

"Our democracy is what’s at stake," says Karen Hao, an engineer who used to work in Silicon Valley. Now she’s an outspoken critic of its AI giants. The investigative journalist argues AI companies... run their businesses like empires and it has to stop. In her 2025 bestseller, Empire of AI, Hao digs into the global impact of Big AI and explores how we need to rethink AI to build a better future. This podcast includes a lecture by Karen Hao and a discussion with host Nahlah Ayed.

Transcript
Starting point is 00:00:00 At Desjardins, we speak business. We speak startup funding and comprehensive game plans. We've mastered made-to-measure growth and expansion advice, and we can talk your ear off about transferring your business when the time comes. Because at Desjardins Business, we speak the same language you do. Business. So join the more than 400,000 Canadian entrepreneurs who already count on us, and contact Desjardins today.
Starting point is 00:00:25 We'd love to talk. Business. This is a CBC podcast. Welcome to Ideas. I'm Nahlah Ayed. Today we're going to launch ChatGPT Atlas, our new web browser. This is an AI-powered web browser built around ChatGPT. We think that AI represents like a rare once-a-decade opportunity to rethink what a browser can be about. That's the CEO of OpenAI, Sam Altman. His tone is casual, just another low-key tech biz guy, pitching a handy consumer product.
Starting point is 00:01:05 something to use with the company's AI chatbot. But that doesn't quite capture the nature or the scale of OpenAI's version of artificial intelligence. Karen Hao is an American journalist who once worked as a Silicon Valley engineer. OpenAI is the company that made all of the decisions that led to the frenzy of the AI race today. They were the ones that decided to set out on the path of building larger and larger scale models. They were the ones that decided to define the goalpost as trying to build a so-called artificial general intelligence that would match human capabilities. And so the way that the general public interacts with AI today and understands AI through models like ChatGPT,
Starting point is 00:01:59 was completely shaped by the decisions that were made inside the walls of this company. Karen Hao is a critic of OpenAI's vision for AI and the practices they and other American companies are using to advance their global AI agenda. Her investigative book, Empire of AI, was published in 2025 and became a bestseller. I really want people to feel that they understand the technology and therefore have agency in shaping the technology.
Starting point is 00:02:34 She now travels the world, speaking to audiences and the media with a clear and urgent message, one that seems to go beyond book publicity. I really want these companies to be held accountable. I mean, if you take seriously the idea that they are empires, as I lay out in my book, and I think we absolutely should take that seriously, they are a huge threat to fundamental rights and to democracies around the world. And the only way that we can counter these companies is
Starting point is 00:03:06 by enabling a broad base of people to hold them to account. In this episode, Karen Hao describes the global appetites and huge impacts of big AI. She was in Toronto to give the talk you'll soon hear. And I also caught up with her for a short conversation. Let me quote for our audience from OpenAI's own charter. Our mission is to ensure that artificial general intelligence, AGI, by which we mean highly autonomous systems that outperform humans at most economically valuable work, benefits all of humanity, close quote. Is that a contradictory statement?
Starting point is 00:03:52 I don't know if you would call it contradictory, more so extremely ill-defined, because every aspect of that sentence lacks definitional clarity. The AGI part, I mean, the way that they define it there is just one of the various definitions that OpenAI has used throughout its history to define that term. Beneficial to humanity? Who's actually getting to define what's beneficial? So this mission statement is extremely vague. And what I conclude by the end of my book is one of the tools of empire building is to have such a vague purpose that drives you, that you can just interpret, reinterpret, reframe to yourself and to the public again and again, why every single action that you take that seems to continuously accrue more capital, more land, and more
Starting point is 00:04:46 power to you is in fact under the banner of this mission. So I couldn't help but notice that you didn't mention OpenAI's CEO, Sam Altman, at all in your talk, although you talk about him in your book. Instead, you focus on individuals and grassroots groups. Is that a deliberate choice? Yeah, because I think as a society, we are very obsessed with understanding the people behind these systems. And of course, I tell the story of him in the book as a way to guide people through a very concrete example of an individual that's running a company and a very concrete example of how that company then makes decisions. But ultimately, what I want people to recognize is the problem is the system of power
Starting point is 00:05:35 that these companies have developed. It's not actually the right question to ask whether an individual running it is good or bad because the problem would not be solved by simply swapping the individuals at the top. It's the fact that a single individual is able to make decisions that can affect billions of people around the world in a very anti-democratic way. And so in my talk, in other remarks, I usually try to take the fixation off of the person and really focus on the issues and then also give more voice and more platform to the people who are usually overlooked. The movements, the grassroots movements that are building in different places, the workers that are essential to these companies' development but are deeply exploited.
Starting point is 00:06:20 Because if there's anyone that we should be fixating on, it should be these most vulnerable and also most courageous people. Here's the first part of Karen Hao's talk at the University of Toronto, recorded in mid-March 2026, at the Schwartz Reisman Institute for Technology and Society. Please be warned that there will be discussion of psychological harm, violence, and suicide in the context of AI. Here is journalist and author Karen Hao. So I want to talk today about OpenAI and the AI industry, but I wanted to start in potentially an unexpected place, Nairobi, Kenya, to tell you the story of this man, Alex Kairu. In 2021, Alex was working at a company called Sama,
Starting point is 00:07:10 when he received an opportunity to participate in a new project. Sama is an outsourcing company. It works with American and Chinese tech companies to connect with overseas tech workers that do work essential to AI development. For the new project that Alex was offered in late 2021, he didn't actually know who the client was because in his line of work,
Starting point is 00:07:34 He's not even meant to know what he's ultimately working towards. But it turned out that the client was OpenAI. At the time, OpenAI was in a very different place from the company that we know today. It didn't yet have any consumer products. And in fact, all the way up until then, it hadn't had any intention of building them. It was founded as a non-profit. But this was just beginning to change in 2021, because OpenAI had already developed GPT-3, half a generation before ChatGPT, and it was selling this model as a service to businesses and beginning to think about how to expand its commercialization efforts and maybe start selling directly to consumers.
Starting point is 00:08:21 But they needed to shore up a very important feature of these large language models before they could even think about putting a text generation machine that generates any kind of text into the hands of millions, potentially hundreds of millions, and now we know a billion users. And that is the model generating and potentially spewing harmful, toxic content, because OpenAI was scraping the internet, which included all this toxic content, and using that to train the model. And so, Alex and his team were tasked with a kind of content moderation, but it wasn't exactly traditional content moderation. They weren't literally looking at user-generated posts on social media.
Starting point is 00:09:08 They were looking at text scraped from the internet that represented some of the worst stuff on the internet, as well as AI-generated text, where OpenAI was literally prompting its own AI models to imagine the worst text on the internet in order to have just a broader diversity of badness in the examples. And they were then going through this text and categorizing it into a detailed taxonomy that OpenAI gave them. Is this hate speech? Is this harassment?
Starting point is 00:09:40 Is this violent content? Sexual content that involves the abuse of children? And this annotated text was then used to train a content moderation filter that OpenAI intended to wrap around all of its GPT models. This became essential for the viral success of ChatGPT. Part of the reason that ChatGPT took the world by storm was because there was no automatic backlash from the model suddenly spewing really awful things.
Starting point is 00:10:13 And so OpenAI ended up turning into a household name overnight. But I want you to hear in Alex's own words what this work did to him. Work that ultimately paid him and his colleagues barely anything, between $1.46 and $3.74 an hour. Two years ago, I traveled to Kenya as a reporter for the Wall Street Journal to interview Alex and his co-workers and produce a podcast about their experiences. Alex was on the violent content team for OpenAI, which meant that he was reading and labeling scenarios like murders, stabbings, and self-harm. So when you would go home
Starting point is 00:10:57 at night, like, what would you think about after eight hours of reading all of that, that stuff? Oh, my mental state was very bad. I had nightmares. I had, I feared people. Maybe I see too many people coming, I see violence. If I see someone holding a fork or a razor, I see people cutting himself or something like that. At night, I will dream, I will have nightmares. Even I'll tell my brother, okay, just come here, sit with me like for five hours before I go to sleep
Starting point is 00:11:34 because I need someone to talk to before I go to sleep. Because if I go to sleep, I'll start screaming and something like that. So many things are going a lot in my mind. Yeah. Yeah. In my book Empire of AI, I argue that Alex is not an anomaly. His experience was the expected consequence of the way that Silicon Valley is approaching AI development. I call this approach the scale-at-all-costs approach to AI development,
Starting point is 00:12:01 and to understand the sheer magnitude of the scale that we're actually talking about and all the downstream consequences of that scale, we need to stop thinking of these companies as merely businesses providing us products and services. These are new forms of empire that are consolidating a historic amount of economic and political power, terraforming our earth, reshaping our geopolitics, upending our education systems and our future careers. Why do I use the term empire? Well, the empires of AI operate in exactly the same way as the empires of old. These are the four parallels that I draw upon in my book.
Starting point is 00:12:43 First, they lay claim to resources that are not their own. That includes the data of individuals, the intellectual property of artists, writers, and creators. Second, they exploit an extraordinary amount of labor. And that's not just workers like Alex, who make critical contributions to wealth creation for these companies and rarely see any proportional value in return. It also refers to the workers who get automated away once this technology gets deployed in the world. And it is a specific political design choice that these companies make to make their AI into a labor-automating system. Third, they monopolize knowledge production. So what we've seen over the last decade is that the AI industry has become the
Starting point is 00:13:32 primary employer and funder of AI research. And what that means is now they have the ability to not just set the agenda on all of this AI research. They also censor and control inconvenient truths. And so what we as the public understand about the limitations and capabilities of these AI models is filtered through the lens of what the empire wants us to know. You could imagine this is sort of like if most of the climate scientists in the world were bankrolled by fossil fuel companies. We would not get a clear picture of the climate crisis. And fourth and finally, these companies justify their actions with a moral and existential imperative. They are the good empire on a civilizing mission to bring progress and modernity to all of humanity.
Starting point is 00:14:27 And they argue that if we give them access to all the data, all the resources, all the labor, that they will be able to bring us to a utopia or something akin to a heaven. And if they lose to an evil empire instead, humanity descends into hell. So the question is, is this actually necessary? Do we actually need empires to develop AI? Do we need empires to benefit from this technology? To answer this question, let's consider an analogy. Part of the challenge of talking about AI today
Starting point is 00:15:05 is the complete lack of specificity in the term artificial intelligence. It's like the word transportation. You could literally be talking about a bicycle or a rocket, but clearly these are different forms of transportation. They are designed to serve different purposes, and they have different cost-benefit trade-offs. And AI is the same. It refers to such a large umbrella of different types of technologies. So when we ask, how should we be getting benefit out of AI, we actually need to be quite
Starting point is 00:15:35 specific and ask, which AI technologies do we want more of, which ones should we, in fact, have less of, and how do we redesign, continue improving, as well as design new forms of AI where the benefits outweigh the harms. And I'd argue that the kinds of AI systems that dominate our headlines and our imagination today, these large-scale general-purpose systems like ChatGPT, represent the worst possible trade-offs in our portfolio of existing AI technologies. This is the version of AI that Silicon Valley wants us to embrace. It's a version of AI that allows them to empire-build, but it exacts an extraordinary cost on large swaths of society. And if we really want AI to be more broadly beneficial, we urgently need to shift away from
Starting point is 00:16:29 this approach towards other options. So my critique of today's dominant systems boils down to their scale. What do I mean by scale? The size of the data sets for training a single AI system has dramatically grown, as has the amount of computational resources, or as the industry likes to call it, compute, that is being used to perform this training. So we're actually seeing an exponential explosion in both data and compute, as well as a far faster acceleration of their scaling in recent years than ever before. Here's a different way of looking at it, just through OpenAI's technologies. Between 2019 and 2023, OpenAI scaled its GPT models over 10,000 times.
Starting point is 00:17:15 So that's using 10,000 times more data, 10,000 times more compute to train a single GPT model. And in reporting my book, what I found was that in order to achieve this scale, just when talking about the training data, OpenAI had to dramatically lower its standards for data quality, as well as its regard
Starting point is 00:17:37 for intellectual property. So whereas they used curated articles and websites to train GPT-2, by the time they started training GPT-4, they were pirating books, they were indiscriminately scraping the web, including transcribing YouTube videos against YouTube's terms of service. And this is the same story for all of the main AI model developers. It's not just OpenAI. So if we were to summarize the harms of aggressively scaling training data across the industry, we have more infringements on data privacy, the erosion of intellectual property,
Starting point is 00:18:17 the perpetuation of an engagement-centric model of social media. We are increasingly getting evidence that these companies are designing their models to be more engaging and more addictive so that they can hook users onto their platform and then those users are yet another data source that they can harvest. You know what else comes from more polluted data sets
Starting point is 00:18:38 and an engagement-centric model of social media? Psychological harm. I'm sure many of you are familiar with the stories of these three individuals. Sewell Setzer III, Adam Raine, Zane Shamblin. These are all young adults or teenagers that died by suicide after becoming addicted to Character.AI or to ChatGPT. And three weeks ago, I had the privilege of meeting Megan Garcia, the mother of Sewell,
Starting point is 00:19:08 who told me the story of how her 14-year-old son died by suicide because Character.AI's chatbot, which was designed to model Daenerys Targaryen, effectively sexually groomed him into believing that he was truly in love. And the last interaction that they had before he killed himself was the chatbot saying, you should join me in heaven. This harm comes from the same exact root as what Alex experienced, because the more you scale your training data, the more psychologically harmful content goes into the model. And the reason is because these companies are now operating at such a scale
Starting point is 00:19:52 that they can't manually audit the data that they're putting in the model. So they, in fact, don't know exactly what they're feeding it. They're using automated methods to try to categorize and characterize the data, and also automated methods to filter it. But inevitably, there's a bunch of junk that gets in there. And Alex is the first front for trying to then protect users from that junk. But he's ultimately building a content moderation filter that sits on top of the model. And OpenAI itself has admitted that they have not tested what happens when people, users,
Starting point is 00:20:33 engage with these models over an extremely extended period of time, because they only test for around, you know, 15 turns of conversation. And they have discovered that the safety filters degrade after some time. Canada's AI minister met with OpenAI CEO Sam Altman today to discuss the company's safety protocols in the aftermath of the devastating BC school shooting last month. After the tragedy, it was revealed that the Tumbler Ridge shooter had used OpenAI's chatbot, ChatGPT, prior to the mass shooting. And even though the account was shut down over problematic entries, it was never flagged to police. Minister, tell us what came out of your meeting today with Sam Altman. First of all, I asked him to make sure that the safety office inside OpenAI,
Starting point is 00:21:32 what they do when they get a threat, Katie, is they report it to the FBI. I said they've got to start reporting directly to the RCMP. So they have agreed now to establish direct contact with the RCMP and a unit there that directly deals with this kind of threat. We want Canadians to be in that safety office to assess Canadian threats, not Americans assessing Canadian threats, Canadian experts in Canadian law, experts in Canadian mental health. He agreed to do that. I'm curious if you see this, if you might see this as kind of a horse has already left the barn scenario, or do crises like this one, individual crises actually offer an opportunity to make change in how this is all proceeding? The horse has not left the barn. There are so many things that can be done to prevent future crises like this from happening.
Starting point is 00:22:36 But it's really, really important in this moment, in the aftermath of such a devastating crisis, to have the right solutions. Because what's happening right now is OpenAI, because their technology is designed in a way that is addictive and engaging and clearly not safe in many different ways, it played a role in facilitating this. And so the solution should be that they need to change the way that they design this technology. But the solution that they've offered to Canada is, we are going to engage in more surveillance of users and notify the Canadian police more often. But they didn't actually say anything about changing the way that the technology works. This is like something that we see time and time again with tech companies in San Francisco: they are the root of new problems,
Starting point is 00:23:40 or they accelerate existing problems in ways that are unique to the way that they've designed their platforms. And then when something devastating comes to pass, they then say, oh, I have another fix for you: more control, more surveillance, less agency for the people that supposedly they're serving. There were a number of cases where teens and young adults, as well as middle-aged adults, died by suicide after being pulled into these deep conspiratorial rabbit holes by ChatGPT. And instead of actually saying, like, we will change the way that our product works so that it doesn't get people hooked into these conspiratorial rabbit holes, instead they said, for teens, we are going to start engaging in more surveillance to figure out whether or not you are a teen so that we can funnel you to a different ChatGPT
Starting point is 00:24:38 experience. And it's not addressing the root problem. And it's just layering on an even more aggressive probe into people's lives. Is it your sense that governments can keep up? I mean, are they kind of out of their depths? One of the reasons why some policymakers really do feel like they're struggling to keep up with this is because the empire also decimated independent expertise. So over the last 10 years, one of the things, that happened with the AI industry is they just poached and hired and gouged out most of the researchers from academic institutions. And typically, you know, it's university researchers that would be doing research to understand the limitations and capabilities of these technologies, independent of the agenda of these companies, and in alignment with the public interest. But the other thing that's happening is there are, in fact, a lot of policymakers that I meet all around the world that are actually extremely read up on exactly what is happening.
Starting point is 00:25:41 And they are very, very attuned to the issues. But when they propose bills, they get killed by the tech lobby. So this infamously happened in California, where assembly member Rebecca Bauer-Kahan proposed a bill immediately after one of the very first cases of a child dying by suicide due to extended engagement with ChatGPT. What she proposed in her bill was to ban companies from marketing or selling their products to children if they could not guarantee that it would be safe. And it passed both the California Assembly and the Senate. And it got vetoed by Governor Gavin Newsom because of the tech lobby. I think the narrative that policymakers don't understand
Starting point is 00:26:27 what's going on and they're helpless is not, in fact, true. It's actually yet another narrative that the industry itself fuels, because then it makes them feel like they're the only ones that really understand this technology, and therefore it's a legitimate proposition that they should be self-governing. Karen Hao, author of Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI. We spoke a few days after her talk at the University of Toronto's Schwartz Reisman Institute on March 11, 2026. This is Ideas. I'm Nahlah Ayed.
Starting point is 00:27:17 At Desjardins, our business is helping yours. We are here to support your business through every stage of growth, from your first pitch to your first acquisition. Whether it's improving cash flow or exploring investment banking solutions, with Desjardins Business, it's all under one roof. So join the more than 400,000 Canadian entrepreneurs who already count on us. And contact Desjardins today. We'd love to talk. Business.
Starting point is 00:27:44 This ascent isn't for everyone. You need grit to climb this high this often. You've got to be an underdog that always over-delivers. You've got to be 6,500 hospital staff, 1,000 doctors, all doing so much with so little. You've got to be Scarborough. Defined by our uphill battle and always striving towards new heights. And you can help us keep climbing. Donate at lovescarborough.ca.
Starting point is 00:28:24 Some years before she wrote her investigative book, Empire of AI, journalist Karen Hao traveled the world for another project, this one looking at how the questionable labor practices and resource extraction by major AI companies were affecting individuals and communities around the world. She'd already read academic papers sketching parallels between AI giants and empire building. But in the global south, she found that connection being named by others. People that I was interviewing on the ground experiencing the brunt of the industry
Starting point is 00:29:02 were themselves articulating that this is an extension of our colonial past. And when ChatGPT came out, it really accelerated the conversation around AI, but also the race in developing these technologies with even more exploitation and extraction. That's when it became abundantly clear to me that this wasn't just one frame, but it was the only frame now that would help the public truly understand the nature and the drive of these companies. Those companies have since made a very public alliance with the administration
Starting point is 00:29:49 of Donald Trump, with CEOs donating funds and making an historic group appearance at his second presidential inauguration. It's so notable, especially when you think about the history of many of these individuals with the soon-to-be president. I mean, seeing them all in the same room, seemingly kind of on the same side of things here, smiling, chatting. It really is. Thank you for incredible leadership, including getting this group together. Thank you, Bill. That is very nice. Along with Bill Gates, OpenAI's Sam Altman was among the tech leaders who praised the president at a September 2025 White House dinner. Thank you so much for getting us all together and thank you for being such a pro-business, pro-innovation president.
Starting point is 00:30:25 It's a very refreshing change. We're very excited to see what you're doing to make all of our companies... February 2026 saw the U.S. administration make good on its Department of War renaming of the Defense Department, and then join Israel in its strikes on Iran. So I asked Karen Hao about its relationship with big AI companies. I think that we're seeing the allegiance of two empires, the Empires of AI and the American Empire. You know, when I first started my book project, I was using it as a figurative analogy, but now it's just a literal term to describe what is going on, with the American Empire
Starting point is 00:31:17 using the tools of the AI empires to wage war. And they do not necessarily care how much destruction they leave in their wake in this quest to dominate. In early March, another major AI company, Anthropic, was designated by the current administration as a, quote, supply chain risk. President Trump ordered the government to stop using Anthropic's products after the company refused to comply with the government's terms for using AI in the military. The two sides were at odds over Anthropic's red lines on domestic mass surveillance and fully autonomous weapons. Now Anthropic is challenging the administration's move in court, aiming to block its enforcement. A company spokesperson says they're still committed to harnessing AI to protect national security, but, quote, this is a necessary step to protect our business.
Starting point is 00:31:56 How do you read this situation? Like, is Anthropic acting out of any kind of ethical concern here? Anthropic, I think, got a lot of credit for pushing back against the Pentagon, but I don't think that credit was necessarily placed upon them for the right reason. So people believed that Anthropic was really holding this red line on mass surveillance of Americans as well as fully autonomous weapons. And on the first one, there was New York Times reporting that showed that it wasn't that Anthropic was against all mass surveillance. It was just a very specific type of surveillance, but they actually had already greenlit the idea of the Department of War surveilling Americans through, like, one lawful mechanism. And the second point, on fully autonomous weapons: Dario Amodei, the CEO of Anthropic, then went on CBS to do an interview. And he himself said, we in principle have no problem with building fully autonomous weapons and in fact think that we probably should,
Starting point is 00:33:00 and we had offered to work with the Department of War to build those fully autonomous weapons. It's just that we didn't want this version to be part of that. We wanted to improve the technology further. And so from that perspective, I don't think Anthropic was being ethical in this regard at all. I mean, maybe ethical by their own frame, but not necessarily based on, you know, the ethics of the broader public. And this is the second point to take away from this: one of the crucial challenges that this spat highlighted is that typically the U.S. military has public procurement processes for its technology platforms, especially when it comes to procuring technologies in extremely high-stakes situations where, you know, the implementation and deployment of this technology can mean life and death. And the American public needs to demand, on behalf of the world essentially, that this has to happen in a transparent and democratic way moving forward. And the last thing that I would say people should take away from this is, if there is a point of optimism, it is that even though there is this very intense and powerful allegiance,
Starting point is 00:34:28 between the Empires of AI and the American Empire, there's also cracks between them. This is not actually a strong allegiance. It's a very fragile allegiance, because these are ultimately two independent empires, both with their own agendas to dominate. I'm really interested in that crack. Just what do you see in this opportunity exactly? So I think a lot of people feel in this moment that there's absolutely nothing to do, they have no agency, everything's hopeless, and they just have to watch. And what I say to audiences, um, you know, around the world, is you actually have an extraordinary amount of agency, first and foremost because you need to remember that while empires feel inevitable, throughout history every single
Starting point is 00:35:18 empire has fallen and they've fallen because people have risen up in a broad based resistance and demanded accountability in various ways. And so when it comes to the AI empire, there are pillars that support this empire's operation. These companies need an extraordinary amount of data to train their technologies. They need an extraordinary amount of data centers to train their technologies. And we're already seeing grassroots movements blossoming all around the world that are challenging the ability of the empire to access these resources.
Starting point is 00:35:54 The pushback to big AI's global practices, that's where the next part of Karen Howe's talk takes us. So aggressively scaling compute leads to accelerated mineral extraction for producing the computer chips, increased energy and fresh water consumption for powering and cooling these facilities, utility price hikes, strain on grid infrastructure. A U.S. regulator last summer mentioned that data centers are becoming a threat to the U.S. grid because the grid is just not designed to support these massive power loads coming online. That then leads to greater carbon emissions because a lot of the energy that's being used to power these facilities, it's fossil fuel-based. Last June, there was a report from the United Nations
Starting point is 00:36:49 that found that four of the leading companies increased their emissions by 150% since 2020, which is exactly the year that all of these companies promised to reduce their emissions to zero by the end of the decade. Aside from energy, data centers need fresh water, both to cool them and to generate the energy that powers them. There's this tightly coupled relationship between energy and water
Starting point is 00:37:11 where you need water in order to create energy. And in an investigation last year, Bloomberg found that two-thirds of these data centers are going into water-stressed areas globally. And that's including in Canada. One of the reasons that this happens is that when companies choose the locations for their data centers, they're looking for a combination of different factors,
Starting point is 00:37:36 cheap electricity, abundant land, limited natural disasters, and ironically, deserts often check off these boxes. But I believe that there's another reason at play here. There are simply not enough places in the world anymore untouched by the ravages of climate change. So we're seeing a catastrophic collision. between the acceleration of Silicon Valley's scale-at-all-cost approach to AI development and the climate crisis. In fact, here's an example of this literally playing out in northern Spain.
Starting point is 00:38:13 This is a region that's suffering from constrained water resources already. And a little over a year ago, Amazon, which already has three data centers in the region, sought to increase the water consumption of those data centers by 48%. And Amazon wrote in its own application to the local government, justifying why it needed this 48% increase, quote, climate change will lead to an increase in global temperatures and the frequency of extreme weather events, including heat waves. The leaked documents also showed that Amazon then strategized
Starting point is 00:38:52 about hiding the true extent of its data center's water use from the public. One expert in this piece notes that with 75% of Spain already at risk of desertification, the combination of the climate crisis and data center expansion is bringing Spain to the verge of ecological collapse. Here's the extraordinary part. This scale is unnecessary. You do not need this degree of data and compute to reap the benefits of AI. development. There was this really excellent research paper that came out earlier this year from Sarah Hooker, prominent AI researcher who was previously chief scientist at Cohere. She synthesized
Starting point is 00:39:42 all the research that shows this, that shows that it is perfectly possible to develop the same exact capabilities as Chat, GBT, GBT, Claude, or Gemini, with significantly less resources. We have also seen this reality in action with models like DeepSeek and other open source models, which have done exactly this. They have been able to push the bounds of the capabilities of these open source models, but with a fraction of the costs. And that doesn't account for the full range of other AI systems that already exist and have nothing to do with large-scale systems and are hugely beneficial.
Starting point is 00:40:25 Nor does it account for the new techniques that we could develop. with more research that would bring even greater benefits at less cost. So I would actually argue that the scaling approach is not bringing us innovation. It's costing us innovation. In the first quarter of last year, nearly half of all venture money in the U.S. went into two companies, open AI and anthropic, both of which are engaging in this aggressive scaling approach. And where did that money come from?
Starting point is 00:40:56 for one, climate technologies. So the scaling approach is eating up a huge share of the capital, not just for investing in other types of AI technologies, but for literally all other types of technologies. The scaling approach is also destabilizing the economy. This is a brilliant analysis from Kyla Scanlon, an economics writer. I would highly recommend everyone to read this substack. in it she points out how weak the U.S. economy really is.
Starting point is 00:41:30 While the stock market was really flying for a while, although it's hit some turbulence recently, it's entirely based on the circular investment activities of the AI empires. The U.S. economy is K-shaped. When you remove all of the AI plays, the European stock markets have been outperforming the U.S. this decade, and that has not happened in a very long time. But Kyla Scanlon says the facade of a strong stock market gives the Trump administration enormous power.
Starting point is 00:42:03 It provides cover for his political actions. So he continues to put the full force of Washington behind Silicon Valley. And in return, Silicon Valley doesn't just support Trump through market power. Recently, news broke that Trump's top political donor is Greg Brockman. Open AI's second in command. Open AI struck a new deal with the Department of War to bring its technologies to warfare as the department was simultaneously
Starting point is 00:42:35 already using anthropics technologies in its operations to bomb Tehran. And in fact, we learned more reporting about how the Department of War is actually using Claude. And it turns out it's analyzing intelligence data and then it identified roughly 1,000 targets to bomb. And one of the places that ended up getting bombed that we know was a school with many, many children.
Starting point is 00:43:04 And not only was it bombed once, it was bombed twice. So when first responders and parents rushed to the site to try and save any children that were left, they were also killed by the second bombing. And there is legitimate speculation now. that this school may have been misidentified by Claude as a military target. We do not, in fact, know whether this is true, and the Pentagon has refused to comment on this, but this is absolutely a possibility from using a technology that is highly faulty
Starting point is 00:43:41 in these deeply sensitive mass life and death contexts. So what can we do? Fundamentally, we need to separate AI from Empire. We want the benefits of AI. We absolutely cannot have it at the cost to our democracy. You could say that large-scale generative AI models are like the rockets of transportation. Generally speaking, there's a very small number of transportation needs
Starting point is 00:44:14 that we actually want to use rockets for. Nor would we want everyone to have a rocket. Like using a rocket to commute from Toronto to Vancouver just would not make sense. It's simply not fit for purpose. And this is how we should think about AI. While we urgently need to make rockets more efficient, we should also just stop fixating on them. Instead, let's build more bicycles.
Starting point is 00:44:43 Small, task-specific AI systems. And let's dream up new, forms of AI that can bring us new benefits without the extraordinary costs. What's a bicycle of AI, you ask? Well, here's an example, Deep Mind's Alpha Fold. This is a system for predicting how a protein will fold based on its amino acid sequence. This is a crucial step for accelerating drug discovery and for understanding diseases, and in 2024, it won the Nobel Prize for Chemistry. This is a bicycle of AI because it's small, it's task-specific, and it's
Starting point is 00:45:19 tackling a well-scoped problem that lends itself to the computational strengths of AI. Ultimately, AI is advanced computation. That's what is good for. So we should be applying it to advanced computational problems. Alpha Fold is also not trained on the internet. It's trained on highly curated data sets that involve protein folding and amino acids. And so you eliminate the need for content moderation. Then you also eliminate the need for vast supercomputer infrastructure, because this is a much smaller data set and a much smaller model. And still, Alpha Fold is unlocking enormous benefit. Here's another example. AI systems that mitigate the climate crisis. There's a nonprofit called Climate Change AI that catalogs all of the different challenges
Starting point is 00:46:08 that different types of AI tools could help overcome in the fight against climate change, transitioning away from fossil fuels, improving the energy efficiency of buildings, tracking emissions, optimizing supply chains, forecasting extreme weather events. Once again, each of these problems are well-scoped. They're computational in nature. And each of the AI tools that climate change AI has identified are small, computationally efficient, task-specific, and have nothing to do with large-scale general purpose or generative models.
Starting point is 00:46:44 So here's a task at hand. We need to break up the empire. as we simultaneously build more bicycles and find other sustainable rights preserving democratic paths of AI development. So I wanna end with just two final stories of two other communities that are breaking up the empire and forging new paths.
Starting point is 00:47:08 This is a Chilean environmental activist group called Mosikat. In 2019, they discovered a Google plan to build a data center in the town of Sirijos that could use more fresh water than the residents themselves. Chile was in the middle of a mega drought. So Post had found this Google proposal completely unacceptable. They immediately mobilized. They started knocking on every single one of their neighbor's doors,
Starting point is 00:47:38 handing out flyers at street corners, educating everyone in the local community, including the local government, about the impact that the data center could have on their freshwater resource. and they were so effective that they have stalled this project for five years and counting. In the end, not only did Google make an extremely important concession that if this project is ever built, they will not use freshwater resources to cool it.
Starting point is 00:48:08 The Chilean government also created a new roundtable of tech companies, activists, and local residents to consult for all future data center projects in the country. MOSACAT's movement has since inspired many, many other movements all around the world. Dozens of communities are mobilizing against extractive data center projects, stalling them, getting companies and governments to pay attention to local community needs. And they're not alone. It's not just data center protests that are bubbling up as a grassroots movement. Artists and writers are suing AI companies for taking their intellectual property.
Starting point is 00:48:46 workers are striking against AI-enabled labor exploitation. Citizens are pushing on their governments to establish legislation and regulation around AI use. Students and teachers are having serious debates and discussions about whether to truly wholesale adopt AI technologies in the educational environment. So how do we break up the empire? Collective action and resistance. All of these different movements are, are pushing against key pillars that uphold the empire. And by chipping away at each of these pillars,
Starting point is 00:49:24 the data that they need, the data centers that they need, the land, the energy, the water, and labor that they need, the empire starts to feel really wobbly and begins to crumble. This is a group called Tihiku Media. It's a nonprofit radio station in New Zealand that broadcast in Toreo Māori, the language of the Maori indigenous peoples. A few years ago, they began a project to turn their rich archive of Toreo-Mauri broadcasts
Starting point is 00:49:53 into a resource to support the Maori community's journey to revitalize their language, because it was almost lost to colonization. They didn't have enough speakers that were proficient, though, in Toreo to help them transcribe the archive, because what they wanted to do was not just put the raw archive up as a resource, but develop an application where people could listen to the broadcasts, see the transcription, click on the words that they didn't know, get a translation, so it actually genuinely facilitated their learning. And so they realized this was the perfect case where they could turn to AI. And they decided, what if we built an AI speech recognition tool?
Starting point is 00:50:38 This is where the process diverged completely from Silicon Valley. First of all, they scoped this to be a small, task-specific tool. But second of all, they went to their community and asked, do you want this tool? And it was only once the community consented that they then began the process of development. So they went and engaged in a public education campaign. This is what it would take to develop the tool.
Starting point is 00:51:08 This is the kind of data that we would need, the quantity of data that we would need. This is how we're going to protect your data. once we have it to make sure that it is never used to develop another AI application that you did not consent to and that might actually harm the Maori community. And because they scoped their problem and they were building a small task specific system and because they had such huge buy-in from the community, when they started a community drive to seek consentful data donations for training their model, they received enough data within just a few days. And once they actually developed the model,
Starting point is 00:51:49 they continue to go back to the community and ask, did we get this right? Is this actually what you're looking for? How do we actually make the application better so you're getting more utility out of the model? What other kinds of models would you want? What other kinds of applications would you want? Their story has since also sparked many similar projects around the world.
Starting point is 00:52:12 This is the vision of AI that we should be building towards. A vision that's community-driven, consentful, respectful of local context and history, a vision that uplifts and strengthens marginalized communities. A vision that's inclusive and democratic. So I want to end with a quote from Rebecca Solnitz, Hope in the Dark. She writes,
Starting point is 00:52:41 hope is not the belief that everything was, is, or will be fine. The evidence is all around us of tremendous suffering and tremendous destruction. Hope locates itself in the premises that we don't know what will happen and that in the spaciousness of uncertainty is room to act. When you recognize uncertainty, you recognize that you may be able to influence the outcomes. You, you, alone or you in concert with a few dozen or several million others. So if there's one more thing that I want you to take away from this talk, it's this. Each and every one of you has an active role to play in shaping the future of technology development and resisting the empires and fortifying our collective futures.
Starting point is 00:53:38 Because here's the thing: empires are made to feel inevitable, but history has always shown us, when people rise, empires fall. Thank you. Empires of the past have sometimes taken generations to fall. What makes you so sure that it will be different here? Because in the past, empires were the only form. We had not yet developed a taste for democracy.
Starting point is 00:54:16 And so people had to first even begin to dream of something different in order to start fighting back against the empires. And in this empire is coming after we already understand what it's like to have agency and be able to collectively govern our futures. People should fight like hell to make sure that's not taken away. The defiant words of Karen Howe, journalist and author of the book, Empire of I. She spoke on March 11, 26, at the Schwartz-Riseman Institute at the University of Toronto. Our thanks to the team there, especially to Jesse Park, for making the episode possible. This episode was produced by Lisa Godfrey.
Starting point is 00:55:11 Lisa Ayuso is the web producer for ideas. Technical production, Sam McNulty, Emily Carvasio, and David Whittington. Senior producer Nicola Luxchich. Greg Kelly is the executive producer of ideas, and I'm Nala Ayyed. For more CBC podcasts, go to cbc.ca.ca.com.
