Tech Won't Save Us - We All Suffer from OpenAI’s Pursuit of Scale w/ Karen Hao [Replay]

Episode Date: January 8, 2026

Paris Marx is joined by Karen Hao to discuss how Sam Altman’s goal of scale at all costs has spawned a new empire founded on exploitation of people and the environment, resulting not only in the loss of valuable research into more inventive AI systems, but also in exacerbated data privacy issues, intellectual property erosion, and the perpetuation of surveillance capitalism. Karen Hao is an award-winning journalist and the author of Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI. Tech Won’t Save Us offers a critical perspective on tech, its worldview, and wider society with the goal of inspiring people to demand better tech and a better world. Support the show on Patreon. The podcast is made in partnership with The Nation. Production is by Kyla Hewson. This episode originally aired in June 2025. Also mentioned in this episode: Karen was the first journalist to profile OpenAI. Karen has reported on the environmental impacts and human costs of AI. The New York Times reported on why we're unlikely to get artificial general intelligence anytime soon.

Transcript
Starting point is 00:00:00 We absolutely need to think of this company as a new form of empire and to look at these colonial dynamics in order to understand ultimately how to build technology that is more beneficial for humanity. Hello and welcome to Tech Won't Save Us, made in partnership with The Nation magazine. I'm your host, Paris Marx. And as we get started on this new year, 2026, we are revisiting some of the important topics and issues that defined last year in tech. So last week, we revisited an interview with Liz Pelly looking at streaming services as part of this broader kind of consideration of the way that we consume culture now, the way that
Starting point is 00:00:51 digital mediation and streaming has changed that, and where, you know, we might go from here, right? The unavoidable story of last year was, of course, AI: this continued push to adopt generative AI, the continued growth of OpenAI in particular, and how that has propelled the rest of the industry to basically glom onto this idea that Sam Altman initially pushed with ChatGPT, that so many other companies are now trying to chase it on
Starting point is 00:01:18 or trying to beat it on, that has also resulted in the construction of so many data centers that have so many physical implications on the world around us. And so as we consider that issue and the role that it played last year, but the broader impacts of generative AI in particular, I think the interview that I did with Karen Hao is a great one to go back to as we start off this year, and before we get into, you know, new original interviews for 2026 that will start next week. And of course, Karen is an award-winning journalist who has written for many publications
Starting point is 00:01:51 and also the author of Empire of AI, probably one of the best books that came out last year, especially given the subject matter that it covered and how important that was for the continued conversation around AI, around Sam Altman, and around OpenAI in particular. At this stage, it's really hard to say, you know, what role generative AI is still going to play going into this year. Obviously, we can talk about a bubble, but it's clear that it has not popped in any meaningful sense at this point. There's still a lot of energy behind this AI bubble, behind the construction of major data centers around the world in order to power AI technologies, but also the other things that these major companies, these major cloud providers like
Starting point is 00:02:33 Amazon, Microsoft, and Google are pushing to. And it's become very clear that governments have now become very entangled in this AI push. So there's a lot of kind of public money that is being put behind supporting this vision that Sam Altman has laid out and that allows him to continue commanding a ton of power, right? Because even as countries talk about digital sovereignty, there's also a really big push to make sure that they have the capacities to create these large foundation models powering tools like chat GPT or image generators or video generators or what have you within their own jurisdictions, right, so that they can try to create their own versions of these technologies, regardless of whether those technologies are really attractive or useful or
Starting point is 00:03:18 socially beneficial at the end of the day, right? It's like, you know, there's a lot of excitement behind this. There's a lot of money behind it. So, of course, everyone should be chasing it. And we certainly can't ask any questions about whether, you know, this is really in our favor, really helping us broadly socially because it is beneficial from an economic standpoint, at least at the moment, right? At least until some AI bubble potentially crashes or something else happens there. So, you know, with that said, I think that this interview with Karen, and I think the work that she has done, the writing she has done. And obviously her book are very important means of understanding what is going on with the generative AI industry right now, the broader impacts that it has.
Starting point is 00:03:57 And, you know, maybe where we might go from here as we seek to explore the foundations that really created the moment that we continue to be in right now and have been in for more than three years. So with that said, if you do enjoy this conversation, make sure to leave a five-star view on your podcast platform of choice. You can share the show on social media or with any friends or colleagues who you think would learn from it. And if you do want to support the work that goes into making tech won't save us every single week. So we can keep having these critical in-depth conversations, conversations that rely on the support of listeners like you for me to be able to keep having, you know, to keep educating people around these issues. The beginning of a new
Starting point is 00:04:33 year is a great time to become a supporter over on patreon.com slash tech won't save us, which will allow you to gain access to add free episodes to a backlog of premium content. And there will be more of that coming this year, and you'll also be able to get stickers if you support at a certain level. So thanks so much and enjoy this conversation with Karen Howe. Karen, welcome to Tech Won't Save Us. Thank you so much for having me, Paris. I'm thrilled to have you on the show. Obviously, I've been following your reporting for years because it's been giving us such fantastic looks into AI in the broader industry around it. And now, of course, you had this new book, Empire of AI, which has a lot of people talking for the right reasons, I think, because it is just a
Starting point is 00:05:13 stunning book. Because you have been doing this reporting on open AI in the AI industry more broadly for all of these years, when did you decide it was time to expand that into a deeper investigation that you've done in this book? I started thinking about it in early 2020, right after I had finished publishing a series at MIT Technology Review called AI colonialism. And that was based on work that I had been doing, just looking at the impact of the commercialization of AI on. society and I had done some traveling to different places and I'd found significant evidence to suggest that the way that the AI industry was operating was perpetuating former colonial dynamics. Then right as I was in the middle of working on a proposal for that, chat GPT came out and my
Starting point is 00:06:03 agent at the time asked me, you know, how does chat GPT change things? How does opening? I change things. And I was like, oh, it massively accelerates everything that I was talking about. And he was like, well, then you have to write about opening eye, and you have to tell this story through this company and the choices that they made in order to help readers be grounded in the real scenes and details of how this technology and this current manifestation of AI came to be. So by early 2023, I had fully conceptualized the version of the book that you see now. That's awesome. And you can really see that come through, because as we were talking about before we started recording, you know, there are corporate books where there are just hints of
Starting point is 00:06:46 kind of issues in the background of a broader hagiographic story of the company, but your book doesn't pull any punches. Like, you're very clear about the orientation that you take toward this, the issues with Open AI itself and the particular model of AI development that it has really popularized and pushed throughout the industry. Did it make you a bit nervous to take that approach with it, thinking about how people might respond or were you clear from the very beginning that, you know, this was the way you were going to approach it even if it pissed people off. I was absolutely nervous because I wanted to tell the story of opening eye with high fidelity and I wanted to be truthful and honest to my position on the matter, which is that we absolutely
Starting point is 00:07:28 need to think of this company as a new form of empire and to look at these colonial dynamics in order to understand ultimately how to build technology that is more beneficial for humanity. And I thought that I might lose people with that argument, but the thing that has been really amazing is there have been a category of people that have said, you know, I don't agree with your politics and I don't agree with your argument, but I really loved your reporting and appreciated both the inside story and the focus on communities that are being impacted by the ripple effects of this company's decisions. And so they're still recommending and widely sharing the book to their friends and co-workers. It ended up working out. But there was that initial tension that
Starting point is 00:08:13 I felt of, I know that I need to tell the story this way, but like if it's going to lose people, that that is a cost I'm going to have to take. Yeah. No. And the fact that you even are getting that response to it shows the quality of the work that is really in the book because it really is fantastic. And I know there's awards in the future for this one. There's no question about it. But you know, you're talking about Open AI, right? You know, the book is really framed around Open AI as a way to tell this story. The company was founded in 2015 with these really, you know, high ideals. You visited the company for the first time in 2019, if I have that correctly from the book, you know, when you were reporting on it. What did you make of the company at that time?
Starting point is 00:08:49 And had it already become clear to you by then that, you know, some of these original ideals that were shared at the time that it was founded were not really holding through to the way the company was really operating. Yeah, absolutely. So the reason why I even decided to profile opening I back then, I was an AI reporter at MIT Technology Review, focused on very much fundamental AI research, cutting-edge ideas that didn't necessarily have commercial potential.
Starting point is 00:09:15 And so Open AI was on my radar because they said that they didn't have any commercial intent, and they were just focused on that basic research. And in 2018, 2019 was when there were a series of announcements at the organization that suggested that there were changes, underway. And it made me wonder, because they already had some sway in the AI world and also sway in the policymaking world, that whatever changes happens there would then have these effects in the way that the public would understand the technology. Policymakers would
Starting point is 00:09:46 understand it. And also how AI might ultimately be introduced to a much broader audience. And those announcements were, Open AI had said that they would be fully transparent and open source all of the research. And in 2019, they started withholding research. Elon Musk left the organization and the company restructured to put a for-profit arm within the nonprofit organization. And then Sam Altman became CEO. And right as I confirmed with the company that I wanted to embed within the company to profile them, they then announced that they had a $1 billion new deal from Microsoft, new investment for Microsoft. And so it just seemed like there were quite a lot of things
Starting point is 00:10:31 that were pivoting in the opposite direction from how the company had originally conceived of itself when it was just purely a nonprofit. And then I found when I was at the organization, I mean, I came in genuinely wondering, like this does seem like a unique premise to have a nonprofit focused on these fundamental questions and explicitly stating that they want to do things with the benefit of all humanity.
Starting point is 00:10:58 So I really went in with questions of, okay, let's believe in this premise. And I want to ask them to articulate to me how they see this playing out. What are they doing? What research are they focused on? Why are they investing so much money in this idea of so-called artificial general intelligence? And I quickly realized that there was such a fundamental lack of articulation of what they were doing and why. and also that there was this disconnect between how they positions themselves and how they actually operated. So they said they were transparent. They were highly secretive. They said they were
Starting point is 00:11:32 collaborative, but executives explicitly underscored to me again and again that they had to be number one in their progress in order to fulfill their mission, which is inherently competitive. And they clearly had some kind of rumblings of commercial intent starting because they needed to ultimately give Microsoft back that investment. And so that is what I then wrote in my profile for MIT Technology Review, which came out in 2020. And ever since then, I've had a very tenuous relationship with the company because they were quite disappointed
Starting point is 00:12:05 by my portrayal. Yeah, you can understand why, you know, to a certain degree for a company that likes to control the narrative like that. I was really struck in reading the book. This was not something that I was familiar with before, how there are even documents and conversations that you can see from like early on in the
Starting point is 00:12:20 organization where they're basically acknowledging that it's not going to remain open for very long, you know, that these ideals are clearly things that they don't even really have the intention to stick to. And like all of that is there. Yes. So I was quite lucky in that not only was I able to get a lot of documents from my sources, but that also in the midst of writing this whole history, Elon Musk sued opening eye. And it opened up a lot of these early documents. And the thing is what I realized over time as I was doing this reporting is OpenAI and Altman, Sam Altman specifically, has been very strategic throughout the organization's history and identifying what the bottleneck is for that particular era of the organization's goals
Starting point is 00:13:06 and then figured out a way to overcome that bottleneck with certain things. So initially, the bottleneck was talent. When OpenAI started, Google actually had a monopoly on most of the top AI research talent and opening eye didn't have the ability to compete with Google on salaries so the bottleneck was what can we offer to attract talent that's not just money and basically the nonprofit mission was a perfect answer to that question give people a sense of higher purpose and altman ended up using that as a very effective recruiting tool for some key scientists and and people to affiliate with the organization in the very beginning, including the chief scientist, Ilya, Sutskever,
Starting point is 00:13:54 who at the time was already quite renowned within the AI world. And he said, Altman said to Sutskever, don't you want to do something that is ultimately more than just building products for a for-profit company? And that was part of the reason Sutskever ended up buying into the premise of leaving Google in the first place and then led to the snowball effect of more and more AI researchers coming to the organization for the purpose of receiving mentorship from Sutskever. But when the bottleneck shifted in one and a half years into the organization, they realized
Starting point is 00:14:30 in order to be number one, we want to scale the existing techniques within AI research aggressively pump more data and build larger supercomputers than I've ever been seen before to train these technologies. Then the bottleneck shifted to capital. And that is when they decided to create some kind of fundraising vehicle, this for-profit arm for raising that capital. And then it was easy to shed the other things that had helped them accrue the talent because that wasn't the bottleneck anymore and to then shift to accruing the capital. And that is kind of been the story of Open AI throughout its decade-long history and why it's still, to this day, so confusing,
Starting point is 00:15:18 what is this company doing and why? Because it keeps changing every couple of years depending on what it's going after. I wanted to talk a bit more about, you know, that process of developing this AI technology, generative AI, as we know it today. And kind of the approach that open AI took to that. In the book, you talk about a difference between symbolic AI and connective AI. Can you tell us a bit what the distinction is there and, you know, how you would define AI generally, you know, this term that we hear all the time, but can seem so difficult to actually pin down. Yeah. So AI originally, it's a term that is a very long history. It originally was coined in 1956 by a Dartmouth assistant professor, John McCarthy. And decades later, he explicitly said,
Starting point is 00:16:03 I invented the term artificial intelligence because I needed some money for a summer study. So he said this was a marketing term. And it was actually to draw attention to research that he was already doing under a different name. And that name was originally intamata studies. But the thing that happened when he decided to reconceive of his research under this brand of artificial intelligence, was it pegged the discipline to the idea of recreating human intelligence. And the problem with that is all the way up until present day, we still have no scientific consensus around where human intelligence comes from. And so there have been significant debates over the decades over how to build AI rooted in disagreements over what human intelligence is. So the original
Starting point is 00:16:53 disagreement was between the connectionists and the symbolists. And the symbolists believed human intelligence comes from the fact that we have knowledge. So if we want to recreate it in computers, we should be building databases that encode knowledge. And if we pump a lot of resources into encoding larger and larger databases of knowledge, we will eventually have intelligent systems emerge. The connectionists believe human intelligence emerges from our ability to learn. When you observe babies, they're exploring the world,
Starting point is 00:17:26 they're very rapidly accumulating experience, and then they grow up and become more intelligent over time. So that branch then believed in building so-called machine learning systems, software that can learn from data, data being the equivalent of human experience, and that ultimately then narrows into another sub-branch called deep learning, which is essentially machine learning, but using especially powerful software called neural networks that are loosely modeled after the human brain. And originally, symbolists were the ones that really dominated people's conception of how
Starting point is 00:18:03 to achieve AI. But at some point, we got the internet, which meant that they were, was a lot more data. It became a lot cheaper to collect digital data rather than collecting it in the physical world. And computers started advancing quite rapidly. And companies started becoming much more interested in AI development. And so there's the convergence of all of these different trends that then led a shift from the symbolism vision of AI development towards the connectionism vision that ultimately leads us all the way to present day with Silicon Valley dominating our imagination of what AI can be by defining it as massive models trained on internet loads of data with tens of thousands, hundreds of thousands of
Starting point is 00:18:52 computer chips consuming extraordinary amounts of energy in freshwater. Yeah. I feel like when I talk to people who are more open to this notion that AGI is around the corner, and of course, you know, this is going to happen because all these companies are saying that they'll often point to some. someone like Jeffrey Hinton and say, you know, this is this scientist. He's won these awards. He says this is on the horizon. So he must be right, right? Because, you know, he's this really talented researcher and whatnot. This always frustrates me, of course, knowing who Jeffrey Hinton is.
Starting point is 00:19:25 But in the book, you talk a lot about how AI research changed a lot in 2013 with this push toward commercialization and how neural nets in particular and, you know, this more connective AI was in part propelled forward by the fact that it was much easier to commercialize than this symbolic AI that was there before. So can you talk about the role that someone like Jeffrey Hinton played in this and how you see connective AI as enabling this commercialization and this process that you were just saying we have been on more recently? I have a lot of respect for Hinton and the work that he did to create deep learning is remarkable. There have been many benefits that we have derived from deep learning. But the thing to understand about Hinton is that he has a very
Starting point is 00:20:08 fundamental belief that human intelligence is computable. And a lot of people who believe that AGI is around the corner, it's not based on their belief of what software can do. It is inherently based on their belief of what humans and our intelligence is. So he believes it's fundamentally computable. And therefore, inevitably, once you have enough data and enough computational resources, you will be able to recreate it. And based on that premise, Then you start to wonder, well, that would be crazy if we had digital intelligences that were just as good as humans and then could quickly, rapidly elevate their intelligence to being beyond humans. And that's why Hinton often then says, we desperately need to be thinking about this possible future because there's never been a species in the history of the universe that, or an inferior species, but it has been able to control a superior species. And so he's very much now part of the so-called Duma ideology
Starting point is 00:21:12 that believes that AI can develop consciousness, go rogue, and ultimately could destroy humanity. So he had this scientific idea that led him to pursue this particular path. But his scientific idea was also inherently aligned with kind of the incentive structures of companies, large companies, in that these large companies in the previous internet era before we reached the AI era were already accumulating massive trails of data through surveillance capitalism. They were already at significantly advancing their computational hardware to do parallel
Starting point is 00:21:51 processing at scale in order to train their ad targeting machinery. And those two elements then made it extremely easy for them to adopt deep learning and continue to accelerate and supercharge that idea that originally came from a particular scientific philosophy about where human intelligence might come from. One of the things that I often try to point out is it gives companies an automatic competitive advantage to design the rules of the game such that there are very few competitors. Most people and organizations are locked out of that game. And when you make AI into a big data, big computational resources game, then only the wealthiest organizations at the top can actually play. And so, of course, they would be naturally
Starting point is 00:22:48 attracted to pursuing something that gives them that competitive advantage by default. And that is ultimately why I think there has been this dramatic shift towards these large-scale deep learning systems at the detriment of all of these other rich ideas around AI research, AI progress. And another element of that is that because these companies are so well-resourced, they have also developed monopolies on AI talent. And so most of the AI research in the world today is very much driven by what is good for these companies because there are very few independent academics or independent researchers that aren't being funded by these organizations anymore. And that is also driven this fundamental collapsing of the diversity
Starting point is 00:23:39 of research within the AI space. Have you ever browsed in incognito mode? It's probably not as incognito as you think. Google recently settled a $5 billion lawsuit after being accused of secretly tracking users in incognito mode. Google's defense, incognito does not mean invisible. In fact, all your online activity is still 100% visible to a ton of. of third parties unless you use ExpressVPN. Without ExpressVPN, those third parties can still see every website you visit, even in incognito mode, your internet service provider, your mobile network provider, even the admins of your Wi-Fi network. ExpressVPN reroutes 100% of your traffic through secure encrypted servers so third parties can't see your browsing history.
Starting point is 00:24:22 It's easy to use. You just fire up the app and click one button to get protected. Plus, it works on all your devices, phones, laptops, and tablets. And right now it's at its lowest price ever with plans starting at just 3.49 a month. I travel a lot, so I find it important to use a VPN, like ExpressVPN, to stay safe while going online in unfamiliar places. Secure your online data today by visiting expressvpn.com slash TWSU. That's E-XP-R-E-S-V-N.com slash TWSU to find out how you can get up to four extra months.
Starting point is 00:24:52 ExpressVPN.com slash TWSU. There are a few things that I want to pick up on there because there are some important points. And I want to start with that point on data because I feel like most people have heard of this term surveillance capitalism. Most people would recognize that these companies are collecting a lot of data on us. But can you talk about why it is that they are like structurally incentivized to actually collect all of this data and what the consequence of that really is? The thing about AI that is I think the best way to actually understand it is that the current conception of AI, it's a statistical engine that allows corporates to extract behaviors from people and from the world
Starting point is 00:25:35 that continues to perpetuate their monopolistic practices. And the more data that they can accrue and the larger these models, the more patterns they can extract and the more that they can get that advantage. And so that's kind of the reason why there is this natural desire to build, these colossal AI models because it enables the fortification of whatever they're doing. And so that's ultimately the idea of surveillance capitalism is like you're harvesting, surveilling the broader user base, the broader global population to get that valuable material, the raw resource for continuing to fuel your business model, which is ultimately
Starting point is 00:26:21 that still hasn't changed in the AI area. You know, opening eye is now talking. about monetizing the free tier through ads. So because of that particular model that now the tech industry has been running on for a really long time, the end game is to just continue mining, so-called mining for that raw resource, the behavioral futures that Shoshana Zuboff talks about in her book, The Age of Surveillance Capitalism. And because AI is just also incredibly expensive or these large-scale deep learning models are incredibly expensive and there are only so many computer chips in the world and only so much data in the world and only so much water resources in the world if these companies can operate in this as I call it an imperial-esque way where they can
Starting point is 00:27:11 just dominate and aggregate those resources and squat on those resources that in and of itself gives them the competitive advantage if they can also convince everyone that this is. is the only way to create AI progress. Yeah, I think that's really well put, right? And the book really lays out how Open AI saw scale in particular as like a key part of its competitive advantage, right? It was going to stay ahead by embracing scale quicker than other companies and continuing to scale up even faster. Can you talk to us about how they determine that scale was going to be so essential here? And do we actually see their goal playing out? You know, the goal being that they are going to continue to scale up. And as, you know, the scale expands, the models are going to get
Starting point is 00:27:54 better and better and better. Is that actually what we're seeing with these things? Yeah. So originally, they identified scale because it was sort of a confluence of several different ideologies among the executives that were at open air at that particular moment in time. So one of them was Ilyas Satskever, as I mentioned, the chief scientist, who he's a protege of Jeffrey Hinton. And he has a similar belief that ultimately human intelligence is fundamentally computable. And so he actually within the scientific community at the time, he had a very extreme view that scaling could work. Most people within the AI research community believed that there needed to be new
Starting point is 00:28:33 fundamental techniques that would have to be invented in order for us to achieve more AI progress. And now we're actually seeing a return to that, which I'll get back to in a bit. but he thought, we already have certain techniques and we just need to blow them up. We need to maximize them to their limits. At the same time, Sam Altman, he's of a Silicon Valley background. He was the president of Y Combinator,
Starting point is 00:28:59 the most prestigious startup accelerator in the valley. And his whole career was about adding zeros to a startup's user base, adding zeros to the fundraising round. You know, it was always about, Let's just continue thinking orders of magnitude more. How do we get orders of magnitude better? How do we continue expanding?
Starting point is 00:29:21 And he himself used the language of empire. Like he said at the end of his YC tenure, I'm really proud of having built an empire. And so he also really loved this idea of, yeah, let's scale. Let's just see what happens. And Greg Brockman, who was the chief scientist or chief technology officer at the time, also Silicon Valley Guy, was very much gung. ho about the same thing. So it was sort of like a confluence of all these things that led them to say,
Starting point is 00:29:47 let's just grab the largest supercomputer that we can get, which ultimately was built by Microsoft, and then see what happens. And that then led to what they saw as they did see a dramatic leap in certain types of AI capabilities that could be extremely commercializable, or at least they thought would help them turn a profit. Now it's not so clear if it's ever going to turn a profit. But at the time, they thought, well, these large language models, now that they're able to speak in a way that seems fluent and coherent, it seems like they can understand users, I mean, what a compelling product that we can now put into the world, start making some money, and eventually give a return to our investors and continue to fortify
Starting point is 00:30:34 our own business model. That was the decision that led to the scaling. But the thing is, now open AI is at a point where they've actually run out of their scaling rope. And this is one of the reasons why we're seeing a lot of companies, Anthropic, Google, meta, all reaching a point where they realize the so-called scaling paradigm is no longer giving them the same gains that it used to. And arguably, you know, the AI progress that these companies say that they've been making under the scaling paradigm is also something that should be scrutinized. You know, these models have certainly gotten better and better at appearing to speak in more and more fluid sentences, but it still breaks down significantly when you speak to it in non-English languages,
Starting point is 00:31:21 when you try to do certain tasks like mathematics, physics, and other things like that, even as companies have pretended that they're making huge gains in that direction. And so recently, there was this New York Times article written by Cape Mets, one of the very long-time AI reporters where the headline was why we are not getting to artificial general intelligence anytime soon. And it cited this stat from a survey of long-time AI researchers in the field saying 75% of them believe that we still do not yet have the techniques for artificial general intelligence. So we've come like full circle from where we were when Open AI made that scaling pitch to themselves and decided to go for this approach.
Starting point is 00:32:12 Like now we've run the experiment at colossal social, environmental, and labor costs. We're seeing actually it still has not gotten us over the hump that many AI researchers believe needs to be jumped over in order to actually get more sustainable, robust progress in these technologies. Yeah, you know, the goal of scale at all costs is not. being achieved. But as you write about it in the book, scale is key to so many of the harms that have come of these technologies as well, right, that you know, that you outlined so well in presenting open AI as this empire and pursuing this empire model. So what have been the consequences
Starting point is 00:32:53 of the effort at scale at all costs that we have seen over the past number of years? There's so many different costs. And I highlight two of them in depth in the book, but just to name some of them. Like, there's a huge cost to data privacy. There's a huge cost to intellectual property erosion, the perpetuation of surveillance. There is a huge environmental cost, huge labor exploitation costs, and many more costs in terms of then, like, ultimately, when these technologies are deployed, this scaling paradigm leads to a lack of understanding among the public about how to actually use these technologies effectively. So that in and of itself creates a lot of harm. But the two that I focus on in the book in depth are the labor
Starting point is 00:33:38 exploitation and the environmental harms. When Open AI first decided to go for the scale, the norm within the research field, actually the trend that was really catching on was to use curated clean, small data sets for training AI models. There was this realization through some research happening at the time that you can actually get away with teeny tiny data sets for quite powerful models if you go through the curation and cleaning process. And that actually enables AI to be diffused more widely through the economy because most industries are actually data poor. It's only the internet scale giants that are data rich to the point that they can actually
Starting point is 00:34:19 operate in this giant deep learning scaling paradigm. So when OpenAI chose the scaling thing, they shifted completely away from tiny curated data sets to massive polluted data sets. They decided, let's scrape the English language internet. And once you're working with data sets at that size, you cannot do a good job of cleaning it. They clean it through automated methods, which means that there's still a whole lot of gunk that gets pumped into these models. And so I quote this one, the executive of this platform called Appen, which is a middleman firm that orchestrates the contracting of workers for, AI companies in the global south or in economically vulnerable communities to ultimately do the
Starting point is 00:35:05 data cleaning and data preparation and content moderation work for these AI models. And he said in the previous era, it was all about cleaning inputs. And now all of the inputs are fed in and it's about controlling the outputs. And this is where the labor exploitation comes in. I interviewed workers in Kenya who were contracted by open AI to quote unquote control the outputs by developing a content moderation filter that would wrap around all of opening eyes technologies, including what it ultimately became chat chbt to prevent a model that is designed to generate text about anything from spewing racist, harmful, and abusive speech to users once it's placed in the hands of millions of users. And what that meant was these Kenyan workers had to go through reams of text,
Starting point is 00:35:56 of the worst text on the internet as well as AI-generated text where Open AI was prompting its own models to imagine the worst text on the internet and these workers had to then put that text into a detailed taxonomy of is this hate speech, is this harassment, is this violent content, is this sexual abuse,
Starting point is 00:36:15 how violent is this content? Is it graphically violent? Is this sex content involving the abuse of children? And ultimately, we see return to the way that content moderators of the social media era experienced this harm, which is that these workers were deeply traumatized by this work and the relentless exposure to this toxic content. And it not only unraveled their mental sanity, it also unraveled their families and their communities. So I talk about this man, Mo Fato Kinie, who's one of
Starting point is 00:36:50 the Kenyan workers' Open Eye contracted, who, by the way, did not actually know he was working for Open AI originally. He only found out because of a leak from one of his superiors. And when he started doing the work on the sexual content team, his personality completely changed. He wasn't able to explain to his wife at the time why it was changing, because he didn't know how to say to her, I'm reading sex content all day. That does not sound like a real job. Chat, GPT, he hadn't come out yet. there was no conception of what that means. And so one day she texts him and says, I want fish for dinner. He buys three, one for him, one for her, and one for her daughter, his stepdaughter, who he called his baby girl. And by the time he got home, their bags had been packed. They were
Starting point is 00:37:39 completely out of the apartment. And she texted him and said, I don't understand the man you've become and I'm not coming back. It is so key to understand that this is not a necessary form of labor. Silicon Valley will pretend that this work is necessary, but it is only necessary based on their premise of scaling these models using polluted data sets. The second harm that I highlight in the book is the environmental one. Now we're talking about extraordinary massive expansion of data centers and supercomputers to train these models at scale. And so there was a recent report out of McKinsey projecting that based on the current pace of AI computational infrastructure expansion, we will need to add two to six times the amount of energy consumed annually
Starting point is 00:38:31 by the state of California to the global grid. In the next five years, most of that will be serviced by fossil fuels. We're already seeing reports of coal plants having their lives extended. Elon Musk constructed his massive supercomputer called Colossus in Memphis, Tennessee, and is powering it based on around 35 unlicensed methane gas power plants that are pumping thousands of tons of air pollutants into these communities. So it's a climate crisis, it's a public health crisis, and it's also a freshwater crisis because many of these data centers move into communities and need to be cooled with fresh water,
Starting point is 00:39:13 not any other kind of water, because it could lead to the corrosion of the equipment and lead to bacterial growth. And most often, it's actually serviced by public drinking water, because that's the infrastructure that has already been laid to deliver fresh water to buildings and businesses. And I talk about this one community in Montevideo Uruguay, which was literally facing a historic drought to the point where the Montevideo government started mixing toxic water into the public drinking water supply, simply to have something come out of people's taps. And people who were too poor to buy bottled water had to just drink that toxic water and women were having miscarriages. And it was in the middle of that that Google decided to put a data center into the
Starting point is 00:40:01 Montevideo area and proposed to take the freshwater resources that the public was not receiving. And so we are just seeing the amplification of so many intersecting crises with the perpetuation of this scaling at all costs paradigm. Yeah, there are absolutely horrible stories, right? And there are more in your book and more that I'm sure people have been reading about what is going on here. But, you know, to hear the story of the Kenyan content moderator and just, you know, that's one person of so many that have been affected by this technology in really
Starting point is 00:40:37 harmful ways whose stories don't often get told. And as you're talking about how, you know, there has been this explicit decision to pursue this form of development that relies on a lot of data, regardless of whether that is actually necessary, which has not only the consequences, these human consequences, but also requiring these massive data centers in order to process all this stuff. It just makes me think about the decision of so many governments, you know, I think specifically about the government in the UK that is looking at, you know, kind of tearing apart copyright legislation, allowing huge data centers to be built against community opposition. But that's just one example of so many
Starting point is 00:41:15 governments around the world who feel they need to get their little piece of this AI investment and are just trampling over rights and concerns. And it feels like, based on what you're talking about, at the end of the day, this isn't even going to deliver. But there's going to be so many harms that come of it regardless. Exactly. I mean, these companies talk about how we're going to see, you know, massive economic gains from this technology. And we have not seen that at all. In fact, we're seeing entry-level jobs right now disappearing. And this was a highly predictable effect. of technologies that are inherently being designed to be labor automating. You know, opening eyes definition of artificial general intelligence
Starting point is 00:41:53 is highly autonomous systems that outperform humans at most economically valuable work. It is on the label. They are out for people's jobs. And the thing that happened in the first wave of automation in factories was that companies always say, some jobs will be lost, but new jobs will be created, but they never talk about which jobs are lost and which jobs are created. What happened in the manufacturing era was the entry-level jobs were lost, and then there were lower-skilled jobs created and higher-skilled jobs created.
Starting point is 00:42:22 But the career ladder breaks. So anyone who successfully got into the industry before that happened, they're able to access the higher-skilled jobs, but anyone that wasn't, they end up in the lower-skilled jobs, and the chasm between the have-and-have-nots widens. And we are now seeing this replay out in real time with now. digital automation of white collar work with law firms, with the finance industry, with journalism. And the other thing to add is the automation is not happening simply because these technologies are able to actually fully automate these jobs. Like ultimately, the people that are
Starting point is 00:43:01 laying off workers are executives that believe they can replace their human workers with these technologies. And recently, Klarna had a really funny oopsie where they laid off all these workers and that they would use AI instead, and then they realized the AI was crap. So then they had to rehire all of those workers. And so it's labor exploitation at its finest in that the technology doesn't even do the job that well half the time, but executives are being persuaded into the value proposition of, well, does it do it good enough that I can sort of continue to lower my costs and continue to make shareholders happy and continue to brand myself as an innovative firm
Starting point is 00:43:46 by destroying a bunch of jobs and using AI services instead. Yeah, and I feel like one of the key pieces of that Klarna story as well is the, I guess it was the CEO kind of said that he wanted to make sure these new customer service jobs were like an Uber style job, right? So really changing the type of work that it is on the other side of this attempted AI implementation. Obviously, you know, a lead character in the book and, you know, through this conversation and through these changes in the AI industry has been Sam Alman for understandable reasons.
Starting point is 00:44:17 It quickly becomes clear in your book just how manipulative of a person who he is, you know, as a leader, and how this shows throughout his career at Loop, at Y Combinator, at Open AI in particular, and how he is able to shape relationships and events in his favor. How does he do this? And when did it become obvious to you that this was the way this man, how he was kind of proceeding in the world? Alman is an incredibly good storyteller. He's really good at painting these sweeping visions of the future that people really want to become a part of. And the reason why that second part of the sentence, people want to become part of it, is because he also was able to tailor
Starting point is 00:44:58 the story to the individual. He has a loose relationship with the truth. And he can just say what people want to hear. And he's very good at understanding what people want to hear. And so that is ultimately what allows him to be, you know, a once-in-a-generation fundraising talent. He's an incredible talent recruiter, and he's able to mass all of these resources towards whichever direction he wants then deploy them. One of the things that I discovered over time, I mean, he is such a polarizing figure because you ask some people, and they say he's the steep jobs of our generation, and then you ask other people, and they say he's a manipulative liar. And I realize that it really depends on whether that particular person has a vision that aligns with what Altman is doing or not.
Starting point is 00:45:46 So if you align with the way that Altman is generally heading, then he's the greatest asset in the world because of his persuasive abilities. He is the one that's knocking down obstacles and greasing the wheels for that future to come into place, to come to fruition. But if you disagree with his vision, then he becomes one of the most threatening. people possible because now his persuasive power is being directed at doing something fundamentally against your values. And so the way that I ultimately figured out that he, you know, has a loose relationship with the truth and does this kind of tailoring of his story is I started asking people, instead of telling me to characterize, you know, do you think he is honest or do you think he's a
Starting point is 00:46:37 liar or whatever. I started asking people, what did Sam say to you at this era of the company in this meeting about what he believed and why the company was doing what it was doing? And because I interviewed a lot of people, I interviewed over 90 opening eye people, I was able to interview groups, you know, like enough people at every era of the company to realize that he was telling different people different things. So one of the dynamics that I talk about in the book is that there's these kind of quasi-religious movements that have developed within Silicon Valley of people who believe that artificial general intelligence is possible, but then one faction, the boomers that believe it'll bring us to utopian, the other faction, the doomers, that AGI will
Starting point is 00:47:23 destroy humanity. And when I asked boomers, do you think Altman's a boomer, they would say yes. And when I asked Dumers, do you think Altman's a doomer? They would say yes. And so that's when I started realizing, wait a minute, people think that he believes what they believe. And that is ultimately how he's able to push everyone forward in whatever direction he wants them to. Yeah, I feel like you could really see that when he was trying to shape the regulatory conversation, when he would be using the doomer arguments and the boomer arguments. And it was like, where does this guy really stand? But he was wielding it really effectively. There are a lot of things I could ask you about Altman. But, you know, one of the key threads, one of the key stories in the book,
Starting point is 00:48:06 is his attempted ouster or, you know, he was ousted, but then able to come back. And it really struck me how in the public, as this was happening, we had a particular narrative that like Sam Altman was done wrongly. He shouldn't have been pushed out. There were key people in his camp who are pushing this, including some journalists like Kara Swisher, most notably. And then we are increasingly getting, you know, the tale of what actually happened, which your book really helps to flash out for us. So what do you make of the difference in the narratives that we were hearing there and what the actual story tells us about Sam Altman himself. Alvin throughout his career, he's been incredibly media savvy and has known how to drip feed
Starting point is 00:48:48 tidbits to reporters and sort of see different narratives in the public discourse, ultimately in his favor. So I think part of the disconnect is that he was at work trying to shape the public discourse towards something that led people to believe that, you know, he was wronged. And I don't necessarily like side with the board and saying that they did they absolutely did the right thing i mean clearly they also made a lot of missteps along the way and and had fundamentally a lack of transparency around what they did and why they did it of course they were also constrained in certain ways that led to that opacity but the thing that i did realize is you know the board crisis happened because of two separate phenomenon one was the clashing between the boomers and the doomers and the other one was
Starting point is 00:49:36 Altman's polarizing nature and the kind of large unease that he leaves many people with of where does this guy actually stand and can we actually trust when he's saying that he's leading us one way, that he's actually leading us that way. And so it was a collision of both of these forces. And, you know, Altman's not unique in being a storyteller that has a loose relationship with the truth. Like there are many of these types in Silicon Valley. But I, I think within the context of an ideological clash that was framed as this is going to make or break humanity, suddenly those Silicon Valley-esque quirks become a lot more high stakes. I realized your mind reporting is that in order to understand what is happening, we cannot just understand
Starting point is 00:50:28 this through the lens of money. We also have to understand this through the lens of ideology. and ultimately, irrespective of how the board crisis could have played out, all of the different variations, the thing that stayed, that would stay invariant through each of these different instantiations, possible paths, is that it was ultimately just a handful of people that were making profoundly consequential decisions. And that in and of itself is something that we should be questioning rather than whether Altman should have stayed or not stayed, whether he was wronged or not wronged. Yeah, no, really well put. And I just have one quick final question for you before we end off. You've talked about how because of all this focus on generative AI, because of all the money that has been pushed into it, that other, you know, research on other forms of AI have been getting much less attention in recent years, you know, how this effort to scale at all costs really isn't delivering in the way that these companies and executives expected or at least told us they expected. And how, you know, because of so much money that has gone in here, there is a lot of expectation from these companies as to,
Starting point is 00:51:32 the returns that they expect. So considering all of that and how, you know, it doesn't seem clear that AGI is on the horizon. Do you think that we're in for another AI winter in the near future? And what might that mean for the industry if so? The amount of money that they've pumped into this means that there are only so many industries that they can go to to try and recoup that investment. And that means they are naturally going to go to the oil and gas industry. They're naturally going to go to the defense industry. They're naturally going to go to other extremely lucrative industries that are not necessarily within the public's best interest to continue perpetuating and fortifying with these technologies.
Starting point is 00:52:09 What I have increasingly advocated for based on my reporting is not everything machines or whatever technology spin out of a quest to try and develop everything machines, but to develop AI systems that are task-specific and well-scoped. And there are benefits both in the sense that we lose all of the, massive scaling harms that come from trying to build everything machines, but we also allow consumers to have a much better understanding of where to apply these technologies. And thirdly, we end up in a place where the companies themselves are able to develop the technologies more responsibly because when you're trying to develop everything machines, you know, opening
Starting point is 00:52:51 our researchers told me themselves, we cannot anticipate how people are going to abuse and ultimately harm themselves with these technologies. And therefore, we just have to release it into the world and see what happens and shore up the challenges retroactively. But when you develop a well-scoped system that's bounded, then you actually can anticipate all the ways that it might fall apart in advance and shore them up before you start unleashing it as an experiment on the broader population. And this is where the conflict with commercialization and profit at all costs, you know, conflicts with that vision that you're laying out. Karen, it's a fantastic book.
Starting point is 00:53:30 Keep up the great work. Thanks so much for coming on the show. Thank you so much, Ferris. Karen Howe is an award-winning journalist and the author of Empire of AI. Tech Won't Save Us is made in partnership with The Nation magazine and is hosted by me, Paris Marks. Production is by Kyla Houston. Tech Won't Save Us relies on the support of listeners like you to keep providing critical perspectives on the tech industry. You can join hundreds of other supporters by going to Patreon.com slash TechWon't Save Us and making a pledge of your own.
Starting point is 00:53:55 Thanks for listening and make sure to come back next week. Thank you.
