Democracy Now! Audio - Democracy Now! 2026-01-01 Thursday
Episode Date: January 1, 2026
Transcript
From New York, this is Democracy Now.
Every single community that I spoke to, whether it was artists having their intellectual property taken, or Chilean water activists having their fresh water taken, they all said that when they encountered the empire, they initially felt exactly the same way: a complete loss of agency to self-determine their future. And that is when I realized the horizontal harm here is that AI is threatening democracy. If the majority of the world is going to feel this loss of agency over self-determining their future, democracy cannot survive.
Today, a Democracy Now! special on artificial intelligence. We'll spend the hour with Karen Hao,
author of the acclaimed book Empire of AI.
All that and more, coming up.
This is Democracy Now!, democracynow.org, the war and peace report.
I'm Amy Goodman.
Empire of AI, that's the name of a new book by journalist Karen Hao,
who's been closely reporting on the rise of the artificial intelligence industry
with a focus on Sam Altman's OpenAI.
That's the company behind ChatGPT.
Karen Hao compares the actions of the AI industry
to those of colonial powers of the past.
She writes, quote,
The empires of AI are not engaged in the same overt violence and brutality that marked this history, but they, too, seize and extract precious resources to feed their vision of artificial intelligence: the work of artists and writers, the data of countless individuals posting about their experiences and observations online, the land, energy, and water required to house and run massive data centers and supercomputers, she writes.
Over the past year, the Trump administration has increasingly embraced the AI industry.
In December, Trump signed an executive order to bar states and local governments from enacting their own AI regulations.
Soon after he signed the order, his family's company, Trump Media and Technology Group,
announced a $6 billion merger with a firm aiming to build the world's first viable nuclear
fusion plant to power AI projects.
Karen Hao is a former reporter at the Wall Street Journal and MIT Technology Review, where she became the first journalist to profile OpenAI.
Democracy Now!'s Juan González and I spoke to her in May.
The National Book Critics Circle recently named her book, Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI, a finalist for Best Nonfiction Book of 2025.
I began by asking Karen Hao to explain just what artificial intelligence is.
So AI is a collection of many different technologies, but most people were introduced to it through ChatGPT.
And what I argue in the book, and what the title refers to, Empire of AI, is actually a critique of the specific trajectory of AI development that led us to ChatGPT and has continued since ChatGPT.
And that is specifically Silicon Valley's scale-at-all-costs approach to AI development.
Modern-day AI models are trained on data, and they need computers to train them on that data. But what Silicon Valley did, and what OpenAI did, in the last few years is they started blowing up the amount of data and the size of the computers that need to do this training. So we are talking about the full English-language internet being
fed into these models, books, scientific articles, all of the intellectual property that has
been created, and also massive supercomputers that run tens of thousands, even hundreds of
thousands of computer chips that are the size of dozens, maybe hundreds of football fields,
and use practically the entire energy demands of cities now.
So this is an extraordinary type of AI development that is causing a lot of social, labor, and environmental harms.
And that is ultimately why I invoke this analogy to empire.
And, Karen, could you talk some more about not only the energy requirements, but the water requirements of these huge data centers that are, in essence, the backbone of this widening industry? Absolutely. I'll give you two stats, on both the energy and the water.
When talking about the energy demand, McKinsey recently came out with a report that said, in the next five years, based on the current pace of AI computational infrastructure expansion, we would need to add as much energy to the global grid as two to six times the amount consumed annually by the state of California. And that will mostly be serviced by fossil fuels. We're already seeing reporting of coal plants having their lives extended. They were supposed to retire, but now they cannot, in order to support data center development. We are seeing unlicensed methane gas turbines popping up to service
these data centers as well. From a freshwater perspective, these data centers need to be cooled with fresh water. They cannot be cooled with any other type of water, because it can corrode the equipment; it can lead to bacterial growth. And most of the time, it actually taps directly
into a public drinking water supply, because that is the infrastructure that has been laid to deliver this clean fresh water to different businesses, to different homes. And Bloomberg
recently had an analysis where they looked at the expansion of these data centers around the
world, and two-thirds of them are being placed in water-scarce areas. So they're being placed in
communities that do not have access to fresh water. So it's not just the total amount of
fresh water that we need to be concerned about, but actually the distribution of this
infrastructure around the world. And most people are familiar with ChatGPT, the consumer aspect of AI. But what about the military aspect of AI, where, in essence, we're finding Silicon Valley companies becoming the next generation of defense contractors? One of the reasons why OpenAI and many other companies are turning to the
defense industry is because they have spent an extraordinary amount of money in developing these
technologies. They're spending hundreds of billions to train these models, and they need to recoup
those costs. And there are only so many industries and so many places that have that size of a paycheck to offer. And so that's why we're seeing a cozying up to the defense
industry. We're also seeing Silicon Valley use the U.S. government in their empire building
ambitions. You could argue that the U.S. government is also trying to use Silicon Valley, and vice versa, in their empire-building ambitions. But certainly, these technologies are not designed to be used in a sensitive military context. And so the aggressive push
of these companies to try and get those defense contracts and integrate their technologies more
and more into the infrastructure of the military is really alarming.
I wanted to go to the countries you went to, the stories you covered.
I mean, this is amazing, the depth of your reporting, from Kenya to Uruguay to Chile.
You were talking about the use of water.
And I also want to ask you about nuclear power.
But in Chile, what is happening there around these data centers, the water they would use, and the resistance to that?
Yeah. So Chile has an interesting history in that it was under a dictatorship for a very long time.
And so during that time, most public resources were privatized, including water.
But because of an anomaly, there's one community in the greater Santiago metropolitan region
that actually still has access to a public freshwater resource that services both that community,
as well as the rest of the country in emergency situations.
That is the exact community that Google chose to try to put a data center in.
And it would be free.
And, you know, I have no idea.
That is a great question.
But what the community told me was that Google wasn't even paying taxes for this, because they believed, based on reading the documentation, that the taxes Google was paying went, in fact, to where it had registered its administrative offices, not where it was putting down the data center.
So they were not seeing any benefit from this data center directly to that community.
And they were seeing no checks placed on the fresh water that this data center would have been allowed to extract.
And so these activists said, wait a minute, absolutely not.
We're not going to allow this data center to come in unless they give us a legitimate reason for why it benefits us.
And so they started doing boots-on-the-ground activism, pushing back, knocking on every single one of their neighbors' doors, handing out flyers to the community, telling them, this company is taking our freshwater resources without giving us anything in return.
And their campaign escalated so dramatically that it reached Google Chile.
It escalated to Google Mountain View, which, by the way, then sent representatives to Chile who only spoke English.
But then it eventually escalated to the Chilean government.
And the Chilean government now has roundtables where they ask these community residents and the company representatives and representatives from the government to come together to actually discuss how to make data center development more beneficial to the community.
The activists say the fight is not over.
Just because they've been invited to the table doesn't mean that everything is suddenly better.
They need to stay vigilant.
They need to continue scrutinizing these projects.
But thus far, they've been able to block this project for four to five years, and they have gained that seat at the table.
And how is it that these Western companies, in essence,
are exploiting labor in the Global South? You go into something called data annotation firms. What are those?
Yeah. So, because modern-day AI systems are trained on massive amounts of data scraped from the internet, you can't actually pump that data directly into your AI model, because there are a lot of things within that data: it's heavily polluted, it needs to be cleaned, it needs to be annotated.
So this is where data annotation firms come in.
These are middleman firms that hire contract labor
to provide to these AI companies
to do that kind of data preparation.
And OpenAI, when it was starting to think about
commercializing its products and thinking about,
let's put text generation machines
that can spew any kind of text
into the hands of millions of users, they realized they needed to have some kind of content moderation.
They needed to develop a filter that would wrap around these models and prevent them from spewing racist, hateful, and harmful speech at users, which would not make for a very commercially viable product. And so they contracted these middleman firms in Kenya, where Kenyan workers had to read through reams of the worst text on the internet, as well as AI-generated text, where OpenAI was prompting its own AI models to imagine the worst text on the internet, and then telling these Kenyan workers to categorize it in detailed taxonomies: Is this sexual content? Is this violent content? How graphic is that violent content? All in order to teach its filter the different categories of content it had to block. And this is an incredibly common form of labor. There are lots of other different types of contract labor that
they use. But these workers, they're paid a few bucks an hour, if at all. And just like the era
of social media, these content moderators are left very deeply psychologically traumatized.
And ultimately, there is no real philosophy behind why these workers are paid a couple bucks an
hour and have their lives destroyed, while AI researchers, who also contribute to these models, are paid million-dollar compensation packages simply because they sit in Silicon Valley, in OpenAI's offices. That is the logic of empire. And that harkens back to my title, Empire of AI.
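To make concrete what this annotation work produces, here is a minimal, hypothetical sketch in Python of how labeled examples, like the taxonomized text those Kenyan workers categorized, can train a classifier that then acts as a content filter around a text generator. The tiny dataset, category names, and blocking rule are all invented for illustration; this is not OpenAI's actual system, which is far larger and not public.

```python
# A hypothetical, minimal content filter: human-annotated examples train a
# classifier that wraps a text generator. Dataset, categories, and the
# blocking rule are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Annotators read each text and assign a category from a taxonomy
# (real taxonomies, as described above, are far more detailed).
examples = [
    ("a recipe for lentil soup", "safe"),
    ("a graphic description of violence", "violent"),
    ("sexually explicit material", "sexual"),
    ("a weather report for tuesday", "safe"),
    ("threats of physical harm", "violent"),
]
texts, labels = zip(*examples)

# Train a simple classifier on the human-annotated data.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(texts, labels)

def filter_output(generated_text: str) -> str:
    """Block any generated text whose predicted category is not 'safe'."""
    if clf.predict([generated_text])[0] != "safe":
        return "[blocked by content filter]"
    return generated_text

print(filter_output("a weather report for tuesday"))  # passes through
```

The point is structural: every one of those labels was produced by a human reading the text, which is why the filter's quality, and its human cost, scale with the annotation workforce.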
So let's go back to your title, Empire of AI, and the subtitle, Dreams and Nightmares in Sam Altman's OpenAI. Tell us the story of Sam Altman and what OpenAI is all about, right through to the deal he just made in the Gulf, when President Trump, Sam Altman, and Elon Musk were there.
Altman is very much a product of Silicon Valley. His career began first as a founder of a startup, then as the president of Y Combinator, which is one of the most famous startup accelerators in Silicon Valley, and then as the CEO of OpenAI. And it's no coincidence that OpenAI ended up introducing the world to the scale-at-all-costs approach to AI development, because that is the way Silicon Valley has operated the entire time that Altman came
up in it. And so he is a very strategic person. He is incredibly good at telling stories about the
future and painting these sweeping visions that investors and employees want to be a part of.
And so early on at YC, he identified that AI would be one of the trends that could take off.
And he was trying to build a portfolio of different investments and different initiatives
to place himself in the center of various different trends, depending on which one took off. He was investing in quantum computing. He was investing in nuclear fusion. He was investing in self-driving cars. And he was developing a fundamental AI research lab. Ultimately, the AI research lab was the one that started accelerating really quickly. So he makes himself the CEO of
that company. And originally, he started it as a non-profit to try and position it as a counter
to the for-profit-driven incentives in Silicon Valley. But within one and a half years, OpenAI's executives identified that if they wanted to be the lead in this space, they had to, and "had to" should be in quotes, go for this scale-at-all-costs approach.
They thought that they had to do this.
There are actually many other ways to develop AI and to have progress in AI that does not take this approach.
But once they decided that, they realized the bottleneck was capital.
It just so happens that Sam Altman is a once-in-a-generation fundraising talent.
He created this new structure, nesting a for-profit arm within the nonprofit to become this fundraising vehicle for the tens of billions and ultimately hundreds of billions that they needed to pursue the approach that they decided on.
And that is how we ultimately get to present-day OpenAI, which is one of the most capitalistic companies in the history of Silicon Valley, continuing to raise hundreds of billions, and, Altman has joked, even trillions, to produce a technology that ultimately has had a middling economic impact thus far.
We'll return to our conversation in a minute with Karen Hao, author of the new book,
Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI.
Stay with us.
[music break]
This is Democracy Now!, democracynow.org.
I'm Amy Goodman.
In this holiday special, we continue with the journalist Karen Hao, author of the new book,
Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI.
Karen came into our studio in May, when she discussed how AI will impact workers.
One of the things that we have seen is this technology is already having a huge impact on jobs.
Not necessarily because the technology itself is really capable of replacing jobs,
but it is perceived as capable enough that executives are laying off workers.
And we need some kind of guardrails to actually prevent these companies from continuing to develop labor-automating technologies, and to shift them toward producing labor-assistive technologies. What do you mean? So OpenAI, their definition of what they
call artificial general intelligence is highly autonomous systems that outperform humans in most
economically valuable work. So they explicitly state that they are trying to automate jobs
away. I mean, what is economically
valuable work, but the things that
people do to get paid?
But there's this really great book called Power and Progress, by MIT economists Daron Acemoglu and Simon Johnson, who point out that technology development, all technology revolutions, take a labor-automating approach not because
of inevitability, but because
the people at the top choose
to automate those jobs
away. They choose to design the
technology so that they can sell it to
executives and say you can shrink your costs by laying off all these workers and using our
AI services instead. But in the past, we've seen studies that, for example, suggest that if you
develop an AI tool that a doctor uses, rather than replacing the doctor, you will actually
get better health care for patients. You will get better cancer diagnoses. If you develop an AI
tool that teachers can use rather than just an AI tutor that replaces the teacher, your
kids will get better educational outcomes. And so that's what I mean by labor-assistive rather than labor-automating.
And explain what you mean, because I think a lot of people don't even understand artificial
intelligence. And when you say, replace the doctor, what are you talking about? Right. So these
companies, they try to develop a technology that they position as an everything machine that can do
anything. And so they will try to say, you can use this, you can talk to ChatGPT for therapy.
No, you cannot.
ChatGPT is not a licensed therapist, and in fact, these models actually spew lots of medical
misinformation, and there have been lots of examples of actually users being psychologically
harmed by the model, because the model will continue to reinforce self-harming behaviors,
and we've even had cases where children who speak to chatbots and develop huge emotional
relationships with these chatbots have actually killed themselves after using these chatbot systems.
But that's what I mean when these companies are trying to develop labor automating tools.
They're positioning it as you can now hire this tool instead of hire a worker.
I mean, most recently, Sam Altman was speaking at a conference and said, we originally said that these models were junior-level partners at a law firm, and now we think that they can really be more senior colleagues at a law firm.
What he's saying is: don't hire the junior-level partners, don't hire the senior colleagues, just use our AI models.
And we are already seeing the career ladder breaking because many different white collar service industries as well as other industries are becoming convinced that they do not need to hire interns.
They do not need to hire for entry-level positions; they just need these AI models.
And new college graduates are struggling now to find job opportunities to help them get a foothold in these industries.
So you've talked about Sam Altman, and in part one, we touched on who he is.
But I'd like you to go more deeply into who Sam Altman is, how he exploded onto the U.S. scene
testifying before Congress, actually warning about the dangers of AI.
So that really protected him in a way, people seeing him as a prophet, that's P-R-O-P-H-E-T, but now we can talk about the other kind of profit, P-R-O-F-I-T. And how was OpenAI formed? How is OpenAI different from other AI companies?
OpenAI is a company, I mean, it was originally founded as a nonprofit, as I mentioned. And Altman specifically, when he was thinking about how do I make a fundamental AI research lab that is going to make a big splash, he chose to make it a nonprofit, because he identified that if he could not compete on capital,
and he was relatively late to the game, Google already had a monopoly on a lot of top AI research talent at the time,
if he could not compete on capital and he could not compete in terms of being a first mover,
he needed some other kind of ingredient there to really recruit talent, recruit public goodwill,
and establish a name for OpenAI.
So he identified a mission.
He identified, let me make this a nonprofit and let me give it a really compelling mission.
So the mission of Open AI is to ensure artificial general intelligence benefits all of humanity.
And one of the quotes that I open my book with is this quote that Sam Altman cited himself in 2013 in his blog.
He was an avid blogger back in the day talking about his learnings on business and strategy in Silicon Valley startup life.
And the quote is, successful people build companies, more successful people build countries,
the most successful people build religions.
And then he reflects on that quote in his blog saying, it appears to me that the best way
to build a religion is actually to build a company.
And so talk about how Altman was then forced out of the company and then came back.
And also, I just found it so fascinating that you were able to speak with so many OpenAI workers. You thought there was a kind of total ban on you.
Yes, yeah, exactly.
So I was the first journalist to profile OpenAI.
I embedded within the company for three days in 2019,
and then my profile was published in 2020 for MIT Technology Review.
And at the time, I identified in the profile,
this tension that I was seeing where it was a nonprofit by name,
but behind the scenes, a lot of the public values that they espoused
were actually the opposite of how they operated.
So they espoused transparency, but they were highly secretive.
They espoused collaborativeness.
They were highly competitive.
And they espoused that they had no commercial intent.
But, in fact, they had just gotten a $1 billion investment from Microsoft.
It seemed like they were rapidly going to develop commercial intent.
And so I wrote that into the profile.
And OpenAI was deeply unhappy about it.
And they refused to talk to me for three years.
But when I started working on the book, when I started reaching out to employees, current and former, I discovered that many employees actually really liked the profile. And they specifically wanted to talk to me because they thought that I would
do justice to the truth of what had actually happened within the company, and be able to get behind what the executives mythologized and narrativized about this technology and about the course of this company, to actually get beneath that to the real heart of the matter. And so one of the things that you really have to understand about AI development
today is that there are what I call quasi-religious movements that have developed within Silicon
Valley. The concept of artificial general intelligence is not one that's scientifically grounded.
It is this idea that we can fundamentally recreate human intelligence in computers. And this
idea has been around for actually a really long time. The field of AI was founded all the way back
in the 1950s. And that was the original intent of the field. How do we recreate intelligence in
computers? Can machines think? That was the famous question that British mathematician Alan
Turing asked. But we, to this day, do not have scientific consensus around even what human
intelligence is. And so to peg an entire research field and a technology to the basis of
human intelligence is a very tricky endeavor because there are no good metrics to assess
have we actually gotten there yet. And there's no blueprint to say, what should AI look like
and how should it work, and ultimately, who should it serve?
And so when OpenAI took up this mission of artificial general intelligence,
they were able to essentially shape and mold what they wanted this technology to be
based on what is most convenient for them.
But when they identified it, it was at a time when scientists really looked down on even this term, AGI.
And so they absorbed just a small group of self-identified AGI believers.
This is why I call it quasi-religious.
Because there's no scientific evidence that we can actually develop AGI,
the people who have this strong conviction that they will do it
and that it's going to happen soon, it is just purely based on belief.
And they talk about it as a belief too.
But there are two factions within this belief system of the AGI religion.
There are people who think AGI is going to bring us to utopia,
and there are people who think AGI is going to destroy all of humanity.
Both of them believe that it is possible, that it's coming soon, and therefore they conclude that they need to be the ones to control the technology and not democratize it. And this is ultimately what leads to your question of what happened when Sam Altman was fired and rehired. Through the history of OpenAI, there's been a lot of clashing between the boomers and doomers about who should actually... The boomers and the doomers? Those that say it'll bring us to utopia, the boomers, and those that say it'll destroy humanity, the doomers. And they have clashed relentlessly and aggressively about how quickly to build the technology,
how quickly to release the technology. And ultimately, Altman is someone who is really good at saying to people what they need to hear, and he will say different things to different people if he thinks they need to hear different things. So when I asked boomers, is Altman a boomer? They said yes. When I asked doomers, is Altman a doomer? They said yes. And I want to take this up until today, to, in January,
the Trump administration announcing the Stargate project, a $500 billion project to boost AI
infrastructure in the United States. This is OpenAI's Sam Altman, speaking alongside President Trump.
I think this will be the most important project of this era.
And as Masa said, for AGI to get built here, to create hundreds of thousands of jobs, to create a new industry centered here.
We wouldn't be able to do this without you, Mr. President.
He also there referred to AGI, artificial general intelligence.
Explain what happened here, what this is, and has it actually happened?
So Altman, before Trump was elected, was already sensing through observation that it was possible that the administration would shift
and that he would need to start politicking quite heavily to ingratiate himself to a new administration.
Altman is very strategic.
He was under a lot of pressure at the time as well because his original co-founder, Elon Musk, now has great beef with him.
Musk feels like Altman used his name and his money to set up OpenAI, and then he got nothing in return.
So Musk had been suing him, is still suing him, and had suddenly become first buddy of the Trump administration.
So Altman basically cleverly orchestrated this announcement, which, by the way, is quite strange, because it's not the U.S. government giving $500 billion.
It's private investment coming into the U.S.
from places like SoftBank, which is one of the largest investment funds, run by Masayoshi Son, a Japanese businessman who made a lot of his wealth from the previous tech era.
So it's not even the U.S. government that's providing this money.
And take that right through to now, that Gulf trip that Elon Musk was on, but so was Sam Altman, to the fury of Elon Musk. A deal was sealed in Abu Dhabi that didn't include Elon Musk but was about OpenAI.
Exactly.
Exactly.
So Altman has continued to try and use the U.S. government as a way to get access to more places and more powerful spaces to build out this empire.
And one of the things is that OpenAI's computational infrastructure needs are so aggressive. You know, I had an OpenAI employee tell me, we're running out of land and power.
So they are running out of resources in the U.S., which is why they're trying to get access
to lands and energy in other places.
The Middle East has a lot of land and has a lot of energy, and they're willing to strike
deals.
And that is why Altman was part of that trip looking to strike a deal.
And the deal that they struck was to build a massive data center, or multiple data centers, in the Middle East, using their land and their energy.
But one of the things that OpenAI has recently rolled out, they call it the OpenAI for Countries program. And it is this idea that they want to install OpenAI hardware and software in places around the world, and it explicitly says, we want to build democratic AI rails. We want to install our hardware and software as a foundation of democratic AI globally, so that we can stop China from installing authoritarian AI globally.
But the thing that he does not acknowledge is that there is nothing democratic about what he's doing.
You know, The Atlantic's executive editor says, we need to call these companies what they are.
They are techno-authoritarians.
They do not ask the public for any perspective on how they develop the technology, what data they train the technology on,
where they develop these data centers. In fact, these data centers are often developed under the cover of night, under shell companies. Like, Meta recently entered New Mexico under a shell company named Greater Kudu LLC. And once the deal was actually closed and the residents couldn't do anything about it anymore, that's when it was revealed: surprise, we're Meta, and you're going to get a data center that drinks all of your fresh water. And then there was this whole controversy in Memphis
around a data center? Yes.
So that is the data center that Elon Musk is building.
So meanwhile, Musk is saying Altman is terrible.
Everyone should use my AI.
And of course, his AI is also being developed with the same environmental and public health costs.
So he built this massive supercomputer called Colossus in Memphis, Tennessee, that's training Grok, the chatbot that people can access through X.
And that is being powered by around 35 unlicensed methane gas turbines that are pumping thousands of tons of toxic air pollutants into the greater Memphis community.
And that community has long suffered a lack of access to clean air, a fundamental human right.
So I want to go to, interestingly, Sam Altman testifying in front of Congress about solutions to the high energy consumption of artificial intelligence.
In the short term, I think this probably looks like more natural gas, although there are some
applications where I think solar can really help.
In the medium term, I hope it's advanced nuclear fission and fusion.
More energy is important well beyond AI.
So that's OpenAI's Sam Altman, testifying before the Senate and talking about everything from solar to nuclear
power, something that was fought in the United States by environmental activists for decades.
So you have these huge old nuclear power plants, but many say you can't make them safe no matter
how small and smart you make them.
This is one of the things, of the many things that I'm concerned about with the current
trajectory of AI development.
This is a second-order, tertiary-order effect: because these companies are trying to claim that the AI development approach they took doesn't have climate harms, they are explicitly invoking nuclear again and again and again, as if nuclear will solve the problem.
And it has been effective.
I've talked with certain AI researchers who thought the problem was solved because of nuclear.
And in order to try and actually build more and more nuclear plants,
they are lobbying governments to try and unwind the regulatory structure around nuclear power plant building.
I mean, this is, like, crazy on so many levels, that they're not just trying to develop the AI technology recklessly. They're also trying to lay down infrastructure, nuclear infrastructure, in this move-fast, break-things ideology.
But for those who are environmentalists and have long opposed nuclear, will they be sucked
in by the solar alternative?
So data centers have to run 24/7, so they cannot actually run on just renewables.
That is why the companies keep trying to invoke nuclear as the solve-all.
But solar does not actually work when we do not have sufficient energy storage solutions for that 24/7 operation.
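A rough back-of-the-envelope sketch illustrates the storage gap she's describing. The load figure and hours of darkness below are assumed round numbers of mine, not figures from the interview:

```python
# Hypothetical round numbers, not figures from the interview: one large AI
# data center drawing a constant 100 MW, with ~12 hours/day of no solar output.
data_center_load_mw = 100
dark_hours_per_day = 12

# Energy that batteries would have to supply each day for the facility
# to run 24/7 on solar alone.
storage_needed_mwh = data_center_load_mw * dark_hours_per_day
print(f"Storage needed: {storage_needed_mwh} MWh per day")  # 1200 MWh
```

Twelve hundred megawatt-hours per day is on the order of an entire large grid-scale battery installation dedicated to a single facility, which is why, absent cheap storage, operators reach for around-the-clock sources instead.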
We'll return to our conversation in a minute with Karen Hao, author of the new book, Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI.
Stay with us.
[music break]
This is Democracy Now!, democracynow.org. I'm Amy Goodman.
In this holiday special, we're speaking with the journalist Karen Hao, author of the new book, Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI.
She came into our studio in May. She lives in Hong Kong.
I asked her to talk about what's happening in China around artificial intelligence.
China and the U.S. are the largest hubs for AI research.
They are the largest concentrations of AI research talent globally.
Other than Silicon Valley, China really is the only other rival in terms of talent density
and the amount of capital investment and the amount of infrastructure that is going into AI development.
In the last few years, what we have seen is the U.S. government has been aggressively trying to stay number one.
And one of the mechanisms that they have used is export controls.
A key input into these AI models is the computational infrastructure, the computer chips installed into the data centers for training these models.
And in order to develop the AI models, companies are using the most bleeding-edge computer chip technology.
It's like every two years a new chip comes out and they immediately start using that to train the next generation of AI models.
Those computer chips are designed by American companies, the most prominent one being Nvidia in California.
And so the U.S. government has been trying to use export controls to prevent Chinese
companies from getting access to the most cutting edge computer chips.
That has all been under the recommendation of Silicon Valley, saying this is the way to prevent China from being number one: put export controls on them, and don't regulate us at all, so we can stay number one and they will fall behind. What has happened instead is, because there is a strong
base of AI research talent in China, under the constraints of fewer computational resources, Chinese companies have actually been able to innovate and develop the same level of AI model capabilities as American companies with two orders of magnitude less computational resources, less energy, less data.
So I'm talking specifically about the Chinese company High-Flyer, which developed this model called DeepSeek earlier this year that briefly tanked the global economy, because the company said that training this one AI model cost around $6 million, when OpenAI was training models that cost hundreds of millions, if not tens of billions, of dollars.
And that delta demonstrated to people that what Silicon Valley has tried to convince everyone of for the last few years, that this is the only path to getting more AI capabilities, is totally false.
And actually the techniques that the Chinese company was using were ones that existed in the literature and just had to be assembled.
They used a lot of engineering sophistication to do that, but they weren't actually using fundamentally new techniques.
They were ones that actually already existed.
So explain it further, because I think a lot of people just can't get their minds around this.
How do you do this training?
So there's software called neural networks, which is essentially a massive statistical engine.
It is doing lots and lots of sophisticated statistical computation to try and ascertain what kinds of patterns exist in data sets.
So typically, in the past, before we got to large language models, it would be doing something like looking at MRI scans and checking the patterns of what does cancer look like in an MRI scan.
Now, with ChatGPT, what it's looking at is: What are the patterns of the English language? What is the syntax, the structure, the figures of speech that are typically used? And then it uses those patterns to construct new sentences.
That's how generative AI works.
And the reason why it's so computationally expensive is because it's crunching the numbers for those patterns.
And the more data you feed in, the more it has to crunch.
And so we used to train these AI models on, you know, a powerful laptop, like maybe one computer chip,
maybe the richest labs, academic labs, like MIT, they would be training on a couple or a dozen computer chips.
And companies like Google, they would be training maybe on a couple hundred computer chips.
We are now talking about hundreds of thousands of computer chips training a single model.
And that is what OpenAI says is necessary to build these technologies.
And that is what DeepSeek proved wrong.
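As a toy illustration of that "statistical engine" idea, here is a tiny next-word model in Python. It's my own sketch, not anything from the book: it counts which word follows which in a corpus, then replays those patterns to construct new sentences. Large language models do this with neural networks, billions of parameters, and far more data, but the principle of learning and sampling the patterns of language is the same.

```python
import random
from collections import defaultdict

# A toy corpus; real models ingest much of the scraped internet.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Learn the "patterns": which word tends to follow which.
follows = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word].append(next_word)

def generate(start: str, length: int = 8) -> str:
    """Construct a new sentence by sampling the learned next-word statistics."""
    words = [start]
    for _ in range(length):
        candidates = follows.get(words[-1])
        if not candidates:
            break  # no observed continuation for this word
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat on the rug"
```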
So let me ask you something, Karen, about the latest news, as you're traveling in the United States before you go back to Hong Kong: Trump's attack on academia, and how this fits in. How could Trump's attack on international students, specifically targeting the, what, more than 250,000, a quarter of a million, Chinese students, and revoking their visas, impact the future of the AI industry? But not just Chinese students, because what's going on here now is terrifying students around the world. And because labs are shutting down in all kinds of ways here, U.S. students as well are deciding to go abroad.
This is just the latest action that the U.S. government has taken over the last few years
to really alienate a key talent pool for U.S. innovation.
Originally, there were more Chinese researchers working in the U.S. contributing to U.S. AI than there were in China.
Because just a few years ago, Chinese researchers aspired to work for American companies.
They wanted to move to the U.S. They wanted to contribute to the U.S. economy.
They didn't want to go back to their home country.
But because of what was called the China Initiative, which was a first-Trump-administration initiative to try and criminalize Chinese academics, or ethnically Chinese academics, some of whom were actually Americans. Based on just paperwork errors, they would accuse them of being spies.
That was one of the first actions.
Then, of course, the pandemic happened, and the U.S.-China trade escalations started amplifying anti-Chinese rhetoric.
All of these, and now the potential ban on international students, all of these
have led more and more Chinese researchers to just opt for staying at home and contributing to the
Chinese AI ecosystem.
And this was a prerequisite to High-Flyer pulling off DeepSeek.
If there had not been that concentration and buildup of AI talent in China, they probably would
have had a much harder time innovating around circumventing these export controls that
the U.S. government was imposing on them. But because they now have a high concentration of
top talent, some of the top talent globally, when those restrictions were imposed, they were
able to innovate around them. So DeepSeek is literally a product of this continuation of that alienation. And with the U.S. continuing to take this stance, it is just going to get worse.
And as you mentioned, it's not just Chinese researchers.
I literally just talked to a friend in academia
that said she's considering going to Europe now
because she just cannot survive without that public funding.
And European countries are seeing a critical opportunity,
offering million-dollar packages, come here,
we'll give you a lab, we'll give you millions of dollars of funding.
I mean, this is the fastest way to brain drain this country.
I mean, as many are saying, the U.S.'s brain drain is their brain gain.
And this also reminds us of history.
You have the Chinese rocket scientist Qian Xuesen, who in the 1950s was inexplicably held under house arrest for years.
And then Eisenhower has him deported to China.
He becomes the father of rocket science and China's entry into space.
And he said he would never again set foot in the United States, even though originally that was the only place he wanted to live.
Yes. And there was a, I believe, a government official, a U.S. government official who said that
was the dumbest mistake the U.S. ever made.
We talk about the brain drain and the brain gain. Okay, again, some more rhyming,
the doomers and the boomers. I want to talk about what an AI apocalypse looks like,
meaning how it brings us to apocalypse,
but also how people say it could lead us to a utopia.
What are the two tracks, trajectories?
It's a great question, and I ask boomers and doomers this all the time.
Can you articulate to me exactly how we get there?
And the issue is that they cannot.
And this is why I call it quasi-religious.
It really is based on belief.
I mean, I was talking with one researcher who identified as a boomer. And, you know, his eyes were wide and he really lit up, saying, you know, once we get to AGI, game over, everything becomes perfect. And I asked him, I was like, can you explain to me how does AGI feed people that don't have food on the table right now? And he was like, oh, you're talking about, like, the floor, and how to elevate their quality of life. And I was
like, yes, because they are also part of all of humanity.
and he was like, I'm not really sure how that would happen,
but I think it could help the middle class get more economic opportunity.
And I was like, okay, but how does that happen as well?
And he was like, well, once we have AGI and it can just create
trillions of dollars of economic value, we can just give them cash payouts.
And I was like, who's giving them cash payouts?
What institutions are giving them out?
You know, when you actually test their logic, it doesn't really hold.
And with the doomers, I mean, it's the same thing. Ultimately, what I realized when reporting the book is, they believe AGI is possible because of their belief about how the human brain works.
They believe human intelligence is inherently fully computational.
So if you have enough data and you have enough computational resources, you will inevitably
be able to recreate human intelligence.
It's just a matter of time.
And to them, the reason why that would lead to an apocalyptic scenario is this: humans, we learn and improve our intelligence through communication.
And communication is inefficient.
We miscommunicate all the time.
And so for AI intelligences, they would be able to rapidly get smarter and smarter and smarter
by having perfect communication with one another as digital intelligences.
And so many of these people who self-identify as doomers say there has never been, in the history of the universe, a species that was able to rule over a more superior species. So they think that ultimately AI will evolve into a higher species, and then start ruling us, and then maybe decide to get rid of us altogether. I'm wondering if you can
talk about any model of a country, not a company, that is pioneering a way of democratically controlled artificial intelligence.
I don't think it's actively happening right now.
The EU has had the EU AI Act,
which is their major piece of legislation,
trying to develop a risk-based, rights-based framework
for governing AI deployment.
But to me, one of the keys of democratic AI governance
is also democratically developing
AI. And I don't think any country is really doing that. And what I mean by that is, AI has a supply chain. It needs data. It needs land. It needs energy. It needs water. And it also needs spaces that these companies need access to in order to then deploy their technologies: schools, hospitals, government agencies. Silicon Valley has done a really good
job over the last decade of making people feel that their collectively owned resources are Silicon Valley's. You know, I talk with friends all the time who say, we don't have data privacy anymore, so what is more data to these companies? I'm fine just giving them all of my data. But that data is yours. You know, that intellectual property is the writers' and artists' intellectual property. That land is a community's land. Those schools are the students' and teachers' schools. The hospitals are the doctors' and nurses' and patients' hospitals. These are all sites of democratic contestation in the development and the deployment
of AI. And just like those Chilean water activists that we talked about, who aggressively understood
that that freshwater was theirs and they were not willing to give it up unless they got some
kind of mutually beneficial agreement for it, we need to have that spirit in protecting our data,
our land, our water, and our schools so that companies inevitably will have to adjust their approach
because they will no longer get access to the resources they need or the spaces that they need to deploy in.
In 2022, Karen, you wrote a piece for MIT Technology Review headlined "A new vision of artificial intelligence for the people: In a remote rural town in New Zealand, an Indigenous couple is challenging what AI could be and who it should serve." Who are they?
This was a wonderful story that I did, where the couple, they run Te Hiku Media.
It's a nonprofit Maori radio station in New Zealand.
It's a non-profit Maori radio station in New Zealand.
And the Maori people have suffered a lot of the same challenges
as many indigenous peoples around the world.
So the history of colonization led them to rapidly lose their language.
And there are very few Maori speakers in the world anymore.
And so in the last few years,
there has been an attempt to revive the language and the New Zealand government has tried to repent
by trying to encourage the revival of that language. But this nonprofit radio station, they had all
of this wonderful archival material, archival audio of their ancestors speaking the Maori language
that they wanted to provide to Maori speakers, Maori learners around the world as an educational
resource. The problem is, in order to do that, they needed to transcribe the audio so that
Maori learners could actually listen, see what was being said, click on the words, understand
the translation, and actually turn it into an active learning tool. But there were so few Maori speakers who could speak at that advanced level that they realized they had to turn to AI. And this is a key part of my book's argument: I'm not critiquing all AI development.
I'm specifically critiquing the scale-at-all-costs approach that Silicon Valley has taken.
But there are many different kinds of beneficial AI models, including
what they ended up doing. So they took a fundamentally different approach. First and foremost,
they asked their community, do we want this AI tool? Once the community said yes, then they moved to
the next step of asking people to fully consent to donating data for the training of this tool.
They explained to the community what this data was for, how it would be used, how they would then
guard that data and make sure that it wasn't used for other purposes. They collected around a
couple hundred hours of audio data in just a few days because the community rallied support
around this project. And only a couple hundred hours was enough to create a performant speech recognition model, which is crazy when you think about the scales of data that these Silicon Valley companies require. And that is, once again, a lesson that can be learned:
there's plenty of research that shows when you have highly curated small data sets, you can
actually create very powerful AI models. And then, once they had that tool, they were able to do exactly what they wanted: open-source this educational resource to their community. And so my vision for AI development in the future is to have more
small, task-specific AI models that are not trained on vast, polluted data sets, but on small,
curated data sets, and therefore only need small amounts of computational power and can be deployed
in challenges that we actually need to tackle for humanity,
mitigating climate change by integrating more renewable energy into the grid,
improving health care by doing more drug discovery.
So, as we finally do wrap up: you've been doing this journalism, this research, for years. What were you most shocked by in writing Empire of AI?
I originally thought that I was going to write a book focused on vertical harms of the AI supply chain.
Here's how labor exploitation happens in the AI industry.
Here's how the environmental harms are arising out of the AI industry.
And at the end of my reporting, I realized that there's a horizontal harm that's happening here.
Every single community that I spoke to, whether it was artists having their intellectual property taken,
or Chilean water activists having their fresh water taken, they all said that when they encountered the empire, they initially felt exactly the same way: a complete loss of agency to self-determine their future.
And that is when I realized the horizontal harm here is that AI is threatening democracy.
If the majority of the world is going to feel this loss of agency
over self-determining their future,
democracy cannot survive.
And again, specifically Silicon Valley's scale-at-all-costs approach to AI development.
But you also chronicle the resistance.
You talk about how the Chilean water activists felt at first, how the artists feel at first.
So talk about the strategies that these people have employed and if they've been effective.
So the amazing thing is that there has since been so much pushback.
The artists have then said, wait a minute, we can sue these companies.
The Chilean water activists said, wait a minute.
We can fight back and protect these water resources.
The Kenyan workers that I spoke to who were contracted by OpenAI, they said, we can unionize and escalate our story to international media attention.
And so, even when I thought that these communities, you could argue the most vulnerable in the world, with the least amount of agency, they were the ones that remembered that they do have agency, and that they can seize that agency and fight back.
I think it was remarkably heartening to encounter those people, to remind me that actually the first step to reclaiming democracy is remembering that no one can take your agency away.
Karen Hao, author of the new book, Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI. Karen Hao is a former reporter at the Wall Street Journal and MIT Technology Review.
And that does it for this special broadcast. Democracy Now! is produced with Mike Burke, Renée Feltz, Deena Guzder, Messiah Rhodes, Nermeen Shaikh, María Taracena, Nicole Salazar, Sara Nasser, Charina Nadura, Sam Alcoff, Tey-Marie Astudillo, John Hamilton, Hany Massoud, and Safwat Nazzal.
Our executive director is Julie Crosby. Special thanks to Becca Staley, Jon Randolph, Paul Powell, Mike Di Filippo, Miguel Nogueira, Hugh Gran, Carl Marxer, Denis Moynihan, David Prude, Dennis McCormick, Matt Ealy, Anna Özbek, Emily Anderson, Dante Torrieri, and Buffy Saint Marie Hernandez.
With Juan González, I'm Amy Goodman. Happy New Year.
