Tech Won't Save Us - We All Suffer from OpenAI’s Pursuit of Scale w/ Karen Hao
Episode Date: June 12, 2025. Paris Marx is joined by Karen Hao to discuss how Sam Altman's goal of scale at all costs has spawned a new empire founded on exploitation of people and the environment, resulting not only in the loss of valuable research into more inventive AI systems, but also in exacerbated data privacy issues, intellectual property erosion, and the perpetuation of surveillance capitalism. Karen Hao is an award-winning journalist and the author of Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI. Tech Won't Save Us offers a critical perspective on tech, its worldview, and wider society with the goal of inspiring people to demand better tech and a better world. Support the show on Patreon. The podcast is made in partnership with The Nation. Production is by Kyla Hewson. Also mentioned in this episode: Karen was the first journalist to profile OpenAI. Karen has reported on the environmental impacts and human costs of AI. The New York Times reported on Why We're Unlikely to Get Artificial General Intelligence Anytime Soon.
Transcript
we absolutely need to think of this company as a new form of empire and to look at these colonial
dynamics in order to understand ultimately how to build technology that is more beneficial for humanity.
Hello and welcome to Tech Won't Save Us, made in partnership with The Nation magazine. I'm your host, Paris Marx, and this week my guest is Karen Hao.
Karen is an award-winning journalist who has written for MIT Technology Review, The Wall
Street Journal, and The Atlantic, and is now the author of Empire of AI, Dreams and Nightmares
in Sam Altman's OpenAI.
It is a fantastic book that digs into not just this company
that we have all become so familiar with,
not just this executive, this man in Sam Altman
and how he has shaped not just this tech industry,
but so much of our world over the past few years.
It also asks whether artificial intelligence really had to be developed this way,
or could it look very different
and have far fewer of the harmful impacts that we have come to associate with generative AI over the past few years? I think you will
know the answer to that. I am a huge fan of Karen's new book. I think it is fantastic.
And it echoes a lot of the things that other guests on the show have been saying for the
past number of years about AI, about generative AI, about this quest that OpenAI has been
on along with these other tech companies to build these massive models,
to despoil the environment in the process and not to really care about the consequences of the decisions that they're making.
Karen describes really well how the quest for generative AI has really been shaped by commercialization and ideology over anything else, and how artificial intelligence does not actually have to be the way that these companies have set out to create it,
but could look very different, and as a result would not have the degree of harms or the degree of environmental impact that we are seeing
with the rollout of these technologies in the way that Sam Altman and OpenAI feel that they should be developed. It's no wonder that Sam Altman has even tweeted
about Karen's book to try to discourage people
from taking to heart the criticisms
that she's making about the company.
Because while this is a book that certainly looks
at a company, looks at an executive,
Karen is uncompromising in holding true to her values,
holding true to how she feels and to the criticisms
that result from that about the way
that artificial intelligence has been approached for a number of years.
So I can't recommend it enough. I was so happy to have Karen on the show.
We had such a great conversation digging into some of the key issues in the book.
But honestly, there was not nearly enough time to get to so many of the interesting points and great arguments that she makes.
So, you know, once again, I would highly recommend it.
If you do enjoy our conversation, make sure to leave a five star review on your podcast platform of choice.
You can share the show on social media or with any friends or colleagues who you think would learn from it.
And if you do want to support the work that goes into making Tech Won't Save Us every single week so we can keep having these critical
in-depth conversations about technology and the tech industry, you can join supporters like Meadowhawk from Virginia, Megan in Grass Valley, California, Van Presley in Copenhagen, Eduardo from Portugal,
Maria from Tallahassee, Florida, Martin from Belgium,
Michael in Berlin, and Miles in Cardiff
by going to patreon.com slash tech won't save us,
where you can become a supporter as well.
Thanks so much and enjoy this week's conversation.
Karen, welcome to Tech Won't Save Us.
Thank you so much for having me Paris.
I'm thrilled to have you on the show.
Obviously, I've been following your reporting for years
because it's been giving us such fantastic looks into AI
and the broader industry around it.
And now, of course, you have this new book, Empire of AI,
which has a lot of people talking for the right reasons,
I think, because it is just a stunning book.
Because you have been doing this reporting on OpenAI
and the AI industry more broadly for all of these years, when did you decide it was time to expand that into a deeper investigation that you've done in this book?
I started thinking about it in early 2022, right after I had finished publishing a series at MIT Technology Review called AI Colonialism. And that was based on work that I had been doing just looking
at the impact of the commercialization of AI
on society.
And I had done some traveling to different places,
and I found significant evidence to suggest
that the way that the AI industry was operating
was perpetuating former colonial dynamics.
Then, right as I was in the middle of working on a proposal
for that, ChatGPT came out and my agent at the time asked me, you know, how does ChatGPT change
things? How does OpenAI change things? And I was like, oh, it massively accelerates everything
that I was talking about. And he was like, well, then you have to write about OpenAI
and you have to tell this story through this company and
the choices that they made in order to help readers be grounded in the real scenes and details of how
this technology and this current manifestation of AI came to be. So by early 2023, I had fully
conceptualized the version of the book that you see now. That's awesome. And you can really see that come through because as we were talking about before we started recording,
there are corporate books where there are just hints of kind of issues in the background of a broader hagiographic story of the company.
But your book doesn't pull any punches. Like, you were very clear about the orientation that you take toward this, the issues with OpenAI itself and the particular model of AI development that it has really popularized
and pushed throughout the industry.
Did it make you a bit nervous to take that approach with it, thinking about how people
might respond?
Or were you clear from the very beginning that, you know, this was the way you were
going to approach it, even if it pissed people off?
I was absolutely nervous because I wanted to tell the story of OpenAI with high fidelity
and I wanted to be truthful and honest to my position on the matter, which is that we
absolutely need to think of this company as a new form of empire and to look at these
colonial dynamics in order to understand ultimately how to build technology that is more beneficial for humanity.
And I thought that I might lose people with that argument,
but the thing that has been really amazing
is there have been a category of people that have said,
I don't agree with your politics
and I don't agree with your argument,
but I really loved your reporting
and appreciated both the inside story
and the focus on communities
that are being impacted by the ripple effects of this company's decisions.
And so they're still recommending and widely sharing the book to their friends and coworkers.
It ended up working out, but there was that initial tension that I felt of, I know that
I need to tell the story this way, but like if it's going to lose people, that is a cost I'm going to have to take. Yeah. No. And the fact that you even are getting that
response to it shows the quality of the work that is really in the book because it really is
fantastic. And I know there's awards in the future for this one. There's no question about it.
But you're talking about OpenAI, right? The book is really framed around OpenAI as a way to tell
the story. The company was founded in 2015 with these really high ideals. You visited the company for the first time in
2019, if I have that correctly from the book, when you were reporting on it. What did you
make of the company at that time? And had it already become clear to you by then that
some of these original ideals that were shared at the time that it was founded were not really
holding true in the way the company was actually operating?
Yeah, absolutely.
So the reason why I even decided to profile OpenAI back then, I was an AI reporter at MIT
Technology Review focused on very much fundamental AI research, cutting edge ideas that didn't
necessarily have commercial potential.
And so OpenAI was on my radar because they said that they didn't have any commercial intent
and they were just focused on that basic research.
And in 2018, 2019 was when there were a series
of announcements at the organization that suggested
that there were changes underway.
And it made me think that, because they already had some sway
in the AI world and also sway in the policymaking world, whatever changes
happened there would then have these effects in the way that the public would understand
the technology, policymakers would understand it, and also how AI might ultimately be introduced
to a much broader audience.
And those announcements were, OpenAI had said that they would be fully transparent and open
source all of their research. And in 2019, they started withholding research. Elon Musk left the
organization and the company restructured to put a for-profit arm within the nonprofit organization.
And then Sam Altman became CEO. And right as I confirmed with the company that I wanted to embed within the company to profile them,
they then announced that they had a $1 billion
new deal from Microsoft, new investment from Microsoft.
And so it just seemed like there were quite a lot of things
that were pivoting in the opposite direction
from how the company had originally conceived of itself
when it was just purely a nonprofit. And then I found when I was at the organization, I mean, I came in
genuinely wondering, like, this does seem like a unique premise to have a nonprofit
focused on these fundamental questions and explicitly stating that they want to do things
with the benefit of all humanity.
So I really went in with questions of, okay, let's believe in this premise.
And I want to ask them to articulate to me how they see this playing out.
What are they doing?
What research are they focused on?
Why are they investing so much money in this idea of so-called artificial general intelligence?
And I quickly realized that there was such a fundamental
lack of articulation of what they were doing and why. And also that there was this disconnect
between how they positioned themselves and how they actually operated. So they said they were
transparent, but they were highly secretive; they said they were collaborative, but executives explicitly
underscored to me again and again that they had to be number one in AI
progress in order to fulfill their mission, which is inherently competitive. And they clearly had
some kind of rumblings of commercial intent starting because they needed to ultimately
give Microsoft back that investment. And so that is what I then wrote in my profile for MIT
Technology Review, which came out
in 2020.
And ever since then, I've had a very tenuous relationship with the company because they
were quite disappointed by my portrayal.
Yeah, you can understand why, you know, to a certain degree for a company that likes
to control the narrative like that.
I was really struck in reading the book.
This is not something that I was familiar with before how there are even documents and
conversations that you can see from like early on in the organization
where they're basically acknowledging that it's not going to remain open for very long,
you know, that these ideals are clearly things that they don't even really have the intention
to stick to. And like all of that is there.
Yes. So I was quite lucky in that not only was I able to get a lot of documents from my sources,
but that also in the midst of writing this whole history, Elon Musk sued OpenAI and it
opened up a lot of these early documents.
And the thing is, what I realized over time as I was doing this reporting is OpenAI and
Altman, Sam Altman specifically, has been very strategic throughout
the organization's history in identifying
what the bottleneck is for that particular era
of the organization's goals,
and then figuring out a way to overcome
that bottleneck.
So initially the bottleneck was talent.
When OpenAI started, Google actually had a monopoly on most of the top AI research talent,
and OpenAI didn't have the ability to compete with Google on salaries. So the bottleneck was,
what can we offer to attract talent that's not just money? And basically, the nonprofit mission was a perfect answer to that question, give people a sense of higher
purpose. And Altman ended up using that as a very effective
recruiting tool for some key scientists and people to
affiliate with the organization in the very beginning,
including the chief scientist, Ilya Sutskever, who at the time
was already quite renowned within the AI world.
And Altman said to Sutskever, don't you want to do something that is ultimately more than just building products for a for-profit company? That was part of the reason Sutskever ended up buying
into the premise of leaving Google in the first place and then led to the snowball effect of more
and more AI researchers coming to the organization
for the purpose of receiving mentorship from Sutskever.
But the bottleneck shifted
about one and a half years into the organization,
when they realized, in order to be number one,
we want to scale the existing techniques
within AI research aggressively, pump in more data, and build larger
supercomputers than have ever been seen before to train these technologies. Then the bottleneck
shifted to capital.
And that is when they decided to create some kind of fundraising vehicle, this for-profit
arm for raising that capital.
And then it was easy to shed the other things
that had helped them accrue the talent
because that wasn't the bottleneck anymore
and to then shift to accruing the capital.
And that is kind of the story of OpenAI
throughout its decade-long history,
and why it's still, to this day, so confusing.
What is this company doing and why?
Because it keeps changing every couple of years, depending on what it's going after.
I wanted to talk a bit more about, you know, that process of developing this AI
technology, generative AI as we know it today, and kind of the approach that
OpenAI took to that. In the book, you talk about a difference between
symbolic AI and connectionist AI.
Can you tell us a bit what the distinction is there
and how you would define AI generally
as this term that we hear all the time
but can seem so difficult to actually pin down?
Yeah, so AI originally,
it's a term that has a very long history.
It originally was coined in 1956
by a Dartmouth assistant professor, John McCarthy.
And decades later, he explicitly said,
I invented the term artificial intelligence
because I needed some money for a summer study.
So he said this was a marketing term
and it was actually to draw attention to research
that he was already doing under a different name.
And that name was originally Automata Studies.
But the thing that happened when he decided to reconceive of his research
under this brand of artificial intelligence
was it pegged the discipline to the idea of recreating human intelligence.
And the problem with that is all the way up until present day, we still have no scientific
consensus around where human intelligence comes from.
And so there have been significant debates over the decades over how to build AI rooted
in disagreements over what human intelligence is.
So the original disagreement was between the connectionists and the symbolists.
And the symbolists believed human intelligence comes from the fact that we have knowledge.
So if we want to recreate it in computers, we should be building databases that encode
knowledge.
And if we pump a lot of resources into encoding larger and larger databases of knowledge,
we will eventually have intelligent systems emerge. The connectionists believe human intelligence emerges from our ability to learn.
When you observe babies, they're exploring the world, they're very rapidly accumulating
experience and then they grow up and become more intelligent over time.
So that branch then believed in building so-called machine learning systems, software that can learn from data,
data being the equivalent of human experience,
and that ultimately then narrows into another sub-branch
called deep learning, which is essentially machine learning
but using especially powerful software
called neural networks that are loosely modeled
after the human brain.
And originally, symbolists were the ones
that really dominated people's conception
of how to achieve AI,
but at some point, we got the internet,
which meant that there was a lot more data.
It became a lot cheaper to collect digital data
rather than collecting it in the physical world,
and computers started
advancing quite rapidly.
And companies started becoming much more interested in AI development.
And so it was the convergence of all of these different trends that then led to a shift from
the symbolist vision of AI development towards the connectionist vision that ultimately leads
us all the way to present day with Silicon
Valley dominating our imagination of what AI can be by defining it as massive models
trained on internet loads of data with tens of thousands, hundreds of thousands of computer
chips consuming extraordinary amounts of energy and fresh water.
Yeah.
I feel like when I talk to people who are more open to this notion that AGI is around
the corner and of course, you know, this is going to happen because all these companies
are saying that, they'll often point to someone like Geoffrey Hinton and say, you know, this
is the scientist, he's won these awards, he says this is on the horizon.
So he must be right, right?
Because you know, he's this really talented researcher and whatnot.
This always frustrates me, of course, knowing who Geoffrey Hinton is.
But in the book, you talk a lot about how AI research changed a lot in 2013 with this
push toward commercialization and how neural nets in particular and this more connectionist
AI was in part propelled forward by the fact that
it was much easier to commercialize than the symbolic AI that was there before. So can you
talk about the role that someone like Geoffrey Hinton played in this and how you see connectionist
AI as enabling this commercialization and this process that you were just saying we have been
on more recently? I have a lot of respect for Hinton and the work that he did to create deep learning
is remarkable.
There have been many benefits that we
have derived from deep learning.
But the thing to understand about Hinton
is that he has a very fundamental belief
that human intelligence is computable.
And a lot of people who believe that AGI is around the corner,
it's not based on their belief of
what software can do.
It is inherently based on their belief of what humans and our intelligence is.
So he believes it's fundamentally computable and therefore, inevitably, once you have enough
data and enough computational resources, you will be able to recreate it.
And based on that premise, then you start to wonder,
well, that would be crazy if we had digital intelligences
that were just as good as humans
and then could quickly, rapidly elevate their intelligence
to being beyond humans.
And that's why Hinton then says,
we desperately need to be thinking
about this possible future
because there's never been a species in the history of the universe, an inferior
species, that has been able to control a superior species.
And so he is very much now part of the so-called Doomer ideology that believes that AI can
develop consciousness, go rogue, and ultimately could destroy humanity.
So he had this scientific idea that led him to pursue this particular path.
But his scientific idea was also inherently aligned with kind of the incentive structures
of companies, large companies, in that these large companies in the previous internet era before we
reached the AI era were already accumulating massive troves of data through surveillance
capitalism. They were already significantly advancing their computational hardware to do
parallel processing at scale in order to train their ad targeting machinery.
And those two elements then made it extremely easy
for them to adopt deep learning
and continue to accelerate and supercharge
that idea that originally came
from a particular scientific philosophy
about where human intelligence might come from.
One of the things that I often try to point out is,
it gives companies an automatic competitive advantage
to design the rules of the game
such that there are very few competitors.
Most people and organizations are locked out of that game.
And when you make AI into a big data,
big computational resources game,
then only the wealthiest organizations at the top can actually play.
And so, of course, they would be naturally attracted to pursuing something that gives
them that competitive advantage by default. And that is ultimately why I think there has been this dramatic shift towards these large-scale deep learning systems at the detriment of all of these other rich ideas around AI research, AI progress.
And another element of that is that because these companies are so well-resourced, they have also developed monopolies on AI talent.
And so most of the AI research in the world today is very much driven by what is good for these companies,
because there are very few independent academics or independent researchers that aren't being funded by these organizations anymore.
And that has also driven this fundamental collapsing of the diversity of research within the AI space.
There are a few things that I want to pick up on there because there are some important points.
And I want to start with that point on data because I feel like most people have heard of
this term surveillance capitalism. Most people would recognize that these companies are collecting
a lot of data on us. But can you talk about why it is
that they are like structurally incentivized to actually collect all of this data and what the
consequence of that really is? The thing about AI, I think the best way to actually
understand it is that the current conception of AI is a statistical engine that allows
corporations to extract behaviors from people and from the world in a way that continues
to perpetuate their monopolistic practices.
And the more data that they can accrue and the larger these models, the more patterns
they can extract and the more that they can get that advantage.
And so that's kind of the reason why there is this natural desire to build these colossal
AI models because it enables the fortification of whatever they're doing.
And so that's ultimately the idea of surveillance capitalism is like you're harvesting, surveilling
the broader user base, the broader global population, to get that valuable material,
the raw resource for continuing to fuel your business model, which is ultimately ads.
That still hasn't changed in the AI era.
OpenAI is now talking about monetizing the free tier through ads.
Because of that particular model that now the tech industry has been running on for
a really long time, the end game is to just continue mining, so-called mining for that
raw resource, the behavioral futures that Shoshana Zuboff talks about in her book, The
Age of Surveillance Capitalism.
And because AI is just also incredibly expensive, or these large-scale deep learning
models are incredibly expensive, and there are only so many computer chips in the world
and only so much data in the world and only so much water resources in the world, if these
companies can operate in this, as I call it, an imperial-esque way where they can just dominate and aggregate those resources
and squat on those resources, that in and of itself gives them the competitive advantage
if they can also convince everyone that this is the only way to create AI progress.
Yeah, I think that's really well put, right?
And the book really lays out how OpenAI saw scale in particular as like a key part of its competitive advantage,
right? It was going to stay ahead by embracing scale quicker than other companies and continuing
to scale up even faster. Can you talk to us about how they determine that scale was going to be so
essential here? And do we actually see their goal playing out? You know, the goal being that
they are going to continue to scale up and as you know, the scale expands, the models are going to get better and better and better.
Is that actually what we're seeing with these things?
Yeah, so originally they identified scale because it was sort of a confluence of several
different ideologies among the executives that were at OpenAI at that particular moment
in time.
So one of them was Ilya Sutskever, as I mentioned, the chief scientist, who
is a protégé of Geoffrey Hinton.
And he has a similar belief that ultimately,
human intelligence is fundamentally computable.
And so he actually within the scientific community
at the time, he had a very extreme view
that scaling could work.
Most people within the AI research community
believed that there needed to be new fundamental techniques
that would have to be invented in order for us to achieve more AI progress.
And now we're actually seeing a return to that, which I'll get back to in a bit.
But he thought we already have certain techniques and we just need to blow them up.
We need to maximize them to their limits.
At the same time, Sam Altman,
he's of a Silicon Valley background.
He was the president of Y Combinator,
the most prestigious startup accelerator in the Valley.
And his whole career was about adding zeros
to a startup's user base,
adding zeros to the fundraising round.
It was always about, let's just continue thinking
orders of magnitude more.
How do we get orders of magnitude better?
How do we continue expanding?
And he himself used the language of empire.
Like he said at the end of his YC tenure,
I'm really proud of having built an empire.
And so he also really loved this idea of,
yeah, let's scale, let's just see what happens. And Greg Brockman, who
was the chief technology officer at the
time, also a Silicon Valley guy, was very much gung ho about the
same thing. So it was sort of like a confluence of all these
things that led them to say, let's just grab the largest
supercomputer that we can get, which ultimately was built by Microsoft,
and then see what happens.
And that then led to what they saw as,
they did see a dramatic leap
in certain types of AI capabilities
that could be extremely commercializable,
or at least they thought would help them turn a profit.
Now it's not so clear if it's ever going to turn a profit.
But at the time they thought, well, these large language
models, now that they're able to speak in a way that
seems fluent and coherent and it seems like they
can understand users, I mean, what a compelling product
that we can now put into the world,
start making some money, and eventually give a return
to our investors
and continue to fortify our own business model.
That was the decision that led to the scaling.
But the thing is, now OpenAI is at a point where they've actually run out of their scaling
rope.
And this is one of the reasons why we're seeing a lot of companies, Anthropic, Google, Meta, all reaching a point where they
realize the so-called scaling paradigm is no longer giving them the same gains that
it used to.
And arguably, the AI progress that these companies say that they've been making under the scaling
paradigm is also something that should be scrutinized.
These models have certainly gotten better and better at appearing to speak in more and more fluid sentences,
but it still breaks down significantly
when you speak to it in non-English languages,
when you try to do certain tasks like mathematics,
physics, and other things like that,
even as companies have pretended
that they're making huge gains in that direction.
And so recently there was this New York Times article written by Cade Metz, one of the very
long time AI reporters, where the headline was, why we're unlikely to get artificial
general intelligence anytime soon.
And it cited this stat from a survey of long time AI researchers in the field saying 75% of them believe that we still
do not yet have the techniques for artificial general intelligence. So we've come like full
circle from where we were when OpenAI made that scaling pitch to themselves and decided to go for
this approach. Like now we've run the experiment at colossal social,
environmental and labor costs.
We're seeing, actually, that it still has not gotten us over
the hump that many AI researchers believe needs to be
jumped over in order to actually get more sustainable,
robust progress in these technologies.
Yeah, you know, the goal of scale at all costs is not being achieved.
But as you write about it in the book, scale is key to so many of the harms that have come
of these technologies as well, right?
That you outline so well in presenting open AI as this empire and pursuing this empire
model.
So what have been the consequences of the effort at scale at all costs that we have seen over the past number of years?
There's so many different costs and I highlight two of them in depth in the book, but just to name some of them,
there's a huge cost to data privacy, there's a huge cost to intellectual property erosion, the perpetuation of surveillance.
There is a huge environmental cost, huge labor exploitation costs, and many more costs in terms of then, like ultimately when these technologies are deployed, this scaling paradigm leads to a
lack of understanding among the public about how to actually use these technologies effectively.
So that in and of itself creates a lot of harm. But the two that I focus on in the book in depth are the labor exploitation
and the environmental harms. When OpenAI first decided to go for the scale, the norm within the
research field, actually the trend that was really catching on, was to use curated, clean, small data sets
for training AI models.
There was this realization through some research happening
at the time that you can actually
get away with teeny tiny data sets for quite powerful models
if you go through the curation and cleaning process.
And that actually enables AI to be diffused more widely
through the economy, because most industries are actually data poor.
It's only the internet scale giants that are data rich
to the point that they can actually operate
in this giant deep learning scaling paradigm.
So when OpenAI chose the scaling thing,
they shifted completely away from tiny curated datasets
to massive polluted datasets.
They decided, let's scrape the English language internet.
And once you're working with datasets at that size, you cannot do a good job of cleaning it.
They clean it through automated methods, which means that there's still a whole lot of gunk
that gets pumped into these models. And so I quote this one executive of this platform called Appen,
which is a middleman firm that orchestrates the contracting of workers
for AI companies in the global south or in economically vulnerable communities
to ultimately do the data cleaning and data preparation and content
moderation work for these AI models.
And he said in the previous era, it was all about cleaning the inputs,
and now all of the inputs are fed in and it's about controlling the outputs. And this is where the
labor exploitation comes in. I interviewed workers in Kenya who were contracted by OpenAI
to quote unquote control the outputs by developing a content moderation filter that would wrap around all of OpenAI's
technologies, including what ultimately became ChatGPT, to prevent a model that is designed
to generate text about anything from spewing racist, harmful, and abusive speech to users
once it's placed in the hands of millions of users.
And what that meant was these Kenyan workers had to go through reams of text,
of the worst text on the internet,
as well as AI-generated text,
where OpenAI was prompting its own models
to imagine the worst text on the internet.
And these workers had to then put that text
into a detailed taxonomy of, is this hate speech?
Is this harassment?
Is this violent content?
Is this sexual abuse?
How violent is this content?
Is it graphically violent?
Is the sex content involving the abuse of children?
And ultimately, we see a return to the way
that content moderators of the social media era
experienced this harm, which is that these
workers were deeply traumatized by this work and the relentless exposure to this toxic
content. And it not only unraveled their mental sanity, it also unraveled their families and
their communities. So I talk about this man, Mophat Okinyi, who's one of the Kenyan workers
OpenAI contracted, who, by the way,
did not actually know he was working for OpenAI originally. He only found out because of a
leak from one of his superiors. And when he started doing the work on the sexual content
team, his personality completely changed. He wasn't able to explain to his wife at the time why it
was changing because he didn't know how to say to her, I'm reading sex content
all day. That does not sound like a real job. ChatGPT hadn't come out yet.
There was no conception of what that means. And so one day she texts him and
says, I want fish for dinner. He buys three, one for him, one for her, and one
for her daughter, his stepdaughter,
who he called his baby girl.
And by the time he got home, their bags had been packed.
They were completely out of the apartment.
And she texted him and said,
I don't understand the man you've become,
and I'm not coming back.
It is so key to understand
that this is not a necessary form
of labor.
Silicon Valley will pretend that this work is necessary,
but it is only necessary based on their premise
of scaling these models using polluted data sets.
The second harm that I highlight in the book
is the environmental one.
Now we're talking about an extraordinarily massive expansion of data centers
and supercomputers to train these models at scale. And so there's a recent report out of McKinsey
projecting that based on the current pace of AI computational infrastructure expansion,
we will need to add two to six times the amount of energy consumed annually by the state of California to the
global grid in the next five years. Most of that will be serviced by fossil fuels. We're
already seeing reports of coal plants having their lives extended. Elon Musk constructed
his massive supercomputer called Colossus in Memphis, Tennessee, and is powering it based on around
35 unlicensed methane gas power plants that are pumping thousands of tons of air pollutants
into these communities.
So it's a climate crisis, it's a public health crisis, and it's also a freshwater crisis
because many of these data centers move into communities and need to be cooled with fresh water, not
any other kind of water because it could lead to the corrosion of the equipment and lead
to bacterial growth.
And most often, it's actually serviced by public drinking water because that's the infrastructure
that has already been laid to deliver fresh water to buildings and businesses.
And I talk about this one community in Montevideo, Uruguay,
which was literally facing a historic drought
to the point where the Montevideo government
started mixing toxic water
into the public drinking water supply,
simply to have something come out of people's taps.
And people who were too poor to buy bottled water
had to just drink that toxic water
and women were having miscarriages.
And it was in the middle of that, that Google decided to put a data center into the Montevideo
area and proposed to take the freshwater resources that the public was not receiving. And so we are
just seeing the amplification of so many intersecting crises with the perpetuation
of this scaling at all costs paradigm.
Yeah, they're absolutely horrible stories, right?
And there are more in your book and more that I'm sure people have been reading about what
is going on here.
But, you know, to hear the story of the Kenyan content moderator and just that's one person of so many that have been affected
by this technology in really harmful ways whose stories don't often get told.
As you're talking about how there has been this explicit decision to pursue this form
of development that relies on a lot of data, regardless of whether that is actually necessary,
which has not only these human consequences,
but also requires these massive data centers in order to process
all this stuff. It just makes me think about the decision of so many governments, I think
specifically about the government in the UK that is looking at tearing apart copyright
legislation, allowing huge data centers to be built against community opposition. But that's just one example of so many governments around the world who feel
they need to get their little piece of this AI investment and are just
trampling over rights and concerns. And it feels like based on what you're
talking about, at the end of the day, this isn't even going to deliver, but
there's going to be so many harms that come of it regardless.
Exactly. I mean, these companies talk about how we're going to see, you know,
massive economic gains from this technology, and we have not seen that at
all. In fact, we're seeing entry level jobs right now disappearing. And this
was a highly predictable effect of technologies that are inherently being
designed to be labor automating, you know, OpenAI's definition of
artificial general intelligence
is highly autonomous systems that outperform humans
at most economically valuable work.
It is on the label.
They are out for people's jobs.
And the thing that happened in the first wave of automation
in factories was that companies always say,
some jobs will be lost, but new jobs will be created.
But they never talk about which jobs are lost
and which jobs are created.
What happened in the manufacturing era
was the entry-level jobs were lost.
And then there were lower-skilled jobs created
and higher-skilled jobs created.
But the career ladder breaks.
So anyone who successfully got into the industry
before that happened, they're able to access
the higher-skilled jobs.
But anyone that wasn't, they end up in the lower skilled jobs.
And the chasm between the have and have nots widens.
And we are now seeing this play out again in real time with the digital automation of white collar work,
with law firms, with the finance industry, with journalism.
And the other thing to add is the automation is not happening
simply because these technologies are able to actually fully automate these jobs. Like
ultimately, the people that are laying off workers are executives that believe they can
replace their human workers with these technologies. And recently, Klarna had a really funny oopsie
where they laid off all these workers and
said they would use AI instead and then they realized the AI was crap.
So then they had to rehire all of those workers.
And so it's labor exploitation at its finest in that the technology doesn't even do the
job that well half the time.
But executives are being persuaded into the value proposition of, well, does it do
it well enough that I can sort of continue to lower my costs and continue to make shareholders
happy and continue to brand myself as an innovative firm by destroying a bunch of jobs and using
AI services instead?
Yeah. And I feel like one of the key pieces of that Klarna story as well is the,
I guess it was the CEO kind of said that he wanted to make sure these new customer service jobs were like an Uber style job, right?
So really changing the type of work that it is on the other side of this attempted AI implementation.
Obviously, you know, a lead character in the book and, you know, through this conversation and through these changes in the AI industry has been Sam Altman for understandable reasons.
It quickly becomes clear in your book just how manipulative of a person he is as a leader and how this shows throughout his career at Loopt, at Y Combinator, at OpenAI in particular, and how he is able to shape relationships and events in his favor.
How does he do this?
And when did it become obvious to you
that this was the way this man,
how he was kind of proceeding in the world?
Altman is an incredibly good storyteller.
He's really good at painting these sweeping visions
of the future that people really wanna become a part of.
And the reason why, that second part of the sentence,
people want to become part of it,
is because he also was able to tailor the story
to the individual.
He has a loose relationship with the truth,
and he can just say what people want to hear,
and he's very good at understanding
what people want to hear.
And so that is ultimately what allows him to be,
you know, a once in a generation fundraising
talent. He's an incredible talent recruiter and he's able to amass all of these resources towards
whichever direction he wants and then deploy them. One of the things that I discovered over time,
I mean, he is such a polarizing figure because you ask some people and they say he's the Steve Jobs
of our generation and then you ask other people and they say he's a manipulative liar.
And I realized that it really depends on whether that particular person has a vision that aligns
with what Altman is doing or not.
So if you align with the way that Altman is generally heading, then he's the greatest
asset in the world because of his
persuasive abilities. You know, he is the one that's knocking down obstacles and greasing the wheels
for that future to come into place, to come to fruition. But if you disagree with his vision,
then he becomes one of the most threatening people possible because now his persuasive power is being directed at
doing something fundamentally against your values.
And so the way that I ultimately figured out that he has a loose relationship with the
truth and does this kind of tailoring of his story is I started asking people, instead of asking them to characterize,
you know, do you think he is honest or do you think he's a liar or whatever, I started asking
people, what did Sam say to you at this era of the company in this meeting about what he believed and
why the company was doing what it was doing? And because I interviewed a lot of people, I interviewed over 90 OpenAI people,
I was able to interview groups,
like enough people at every era of the company
to realize that he was telling different people
different things.
So one of the dynamics that I talk about in the book
is that there's these kind of quasi religious movements
that have developed within Silicon Valley of people who believe that artificial general intelligence is possible, but then
one faction, the boomers, that believe it'll bring us to utopia, and the other faction,
the doomers, that believe AGI will destroy humanity.
And when I asked boomers, do you think Altman's a boomer?
They would say yes.
And when I asked doomers, do you think Altman's a doomer? They would say yes. And so that's when I started realizing, wait a minute,
people think that he believes what they believe. And that is ultimately how he's able to push everyone forward in
whatever direction he wants them to.
Yeah, I feel like you could really see that when he was trying to shape the regulatory conversation,
when he would be using the Doomer arguments
and the Boomer arguments, and it was like,
where does this guy really stand?
But he was wielding it really effectively.
There are a lot of things I could ask you about Altman,
but one of the key threads,
one of the key stories in the book is his attempted ouster,
or he was ousted, but then able to come back.
And it really struck me how in the
public as this was happening we had a particular narrative that like Sam Altman was done wrongly,
he shouldn't have been pushed out, there were key people in his camp who were pushing this
including some journalists like Kara Swisher most notably and then we are increasingly getting you
know the tale of what actually happened, which your book really helps to flesh out for us.
So what do you make of the difference in the narratives that we were hearing there and
what the actual story tells us about Sam Altman himself?
Altman, throughout his career, he's been incredibly media savvy and has known how to drip feed
tidbits to reporters and sort of seed different narratives in the public
discourse that are ultimately in his favor. So I think part of the disconnect is that he
was at work trying to shape the public discourse towards something that led
people to believe that, you know, he was wronged. And I don't necessarily, like, side
with the board in saying that they absolutely did the right thing. I
mean, clearly they also made a lot
of missteps along the way and had fundamentally a lack of transparency around what they did and why
they did it. Of course, they were also constrained in certain ways that led to that opacity. But
the thing that I did realize is, you know, the board crisis happened because of two separate
phenomena. One was the clashing between the boomers and the doomers, and the other one was Altman's polarizing nature
and the kind of large unease that he leaves many people with of where does
this guy actually stand and can we actually trust when he's saying that
he's leading us one way that he's actually leading us that way. And so it was a collision of both of these forces.
And Altman's not unique in being a storyteller
that has a loose relationship with the truth.
Like there are many of these types in Silicon Valley,
but I think within the context of an ideological clash
that was framed as, this is going to make or break humanity,
suddenly those Silicon Valley-esque quirks become a lot more high stakes.
What I realized from my reporting is that in order to understand what is happening,
we cannot just understand this through the lens of money.
We also have to understand this through the lens of ideology. And ultimately, irrespective of how the board crisis could have played
out all of the different variations, the thing that would stay invariant through each of
these different instantiations, possible paths, is that it was ultimately just a handful of
people that were making profoundly consequential decisions. And that in and of itself is something that we should be questioning,
rather than whether Altman should have stayed or not stayed, whether he was wronged or not wronged.
Yeah, really well put. And I just have one quick final question for you before we end off.
You've talked about how because of all this focus on generative AI, because of all the money that has been pushed into it,
other research on other forms of AI
has been getting much less attention in recent years.
How this effort to scale at all costs
really isn't delivering in the way
that these companies and executives expected,
or at least told us they expected.
And how, because of so much money that has gone in here,
there is a lot of expectation from these companies
as to the returns that they expect. So considering all of that and how, you know, it doesn't seem clear that
AGI is on the horizon, do you think that we're in for another AI winter in the near future? And what
might that mean for the industry if so? The amount of money that they've pumped into this
means that there are only so many industries that they can go to to try and recoup that investment.
And that means they are naturally going to go to the oil and gas industry. They're naturally going to go to the
defense industry. They're naturally going to go to other extremely lucrative industries that are not
necessarily within the public's best interest to continue perpetuating and fortifying with these
technologies. What I have increasingly advocated for based on my reporting is not everything machines,
or whatever technologies spin out of a quest to try and develop everything machines, but
to develop AI systems that are task-specific and well-scoped.
There are benefits in the sense that we lose all of the massive scaling harms that come from trying to build everything machines,
but we also allow consumers to have a much better understanding of where to apply these
technologies.
And thirdly, we end up in a place where the companies themselves are able to develop the
technologies more responsibly, because when you're trying to develop everything machines,
you know, OpenAI researchers told me themselves,
we cannot anticipate how people are going to abuse and ultimately harm
themselves with these technologies.
And therefore we just have to release it into the world and see what happens and
shore up the challenges retroactively.
But when you develop a well-scoped system that's bounded,
then you actually can anticipate all of the ways that it might fall apart
in advance and shore them up before you start unleashing it as an experiment on the broader
population. And this is where the commercialization, the profit at all costs,
you know, conflicts with that vision that you're laying out. Karen, it's a fantastic book. Keep up
the great work. Thanks so much for coming on the show. Thank you so much, Paris.
Karen Hao is an award-winning journalist and the author of Empire of AI. Tech Won't
Save Us is made in partnership with The Nation magazine and is hosted by me, Paris Marx.
Production is by Kyla Hewson. Tech Won't Save Us relies on the support of listeners
like you to keep providing critical perspectives on the tech industry. You can join hundreds
of other supporters by going to patreon.com slash tech won't
save us and making a pledge of your own.
Thanks for listening and make sure to come back next week.