Factually! with Adam Conover - A.I. Companies Believe They're Making God with Karen Hao
Episode Date: May 28, 2025
EXCLUSIVE NordVPN Deal ➼ https://nordvpn.com/adamconover Try it risk-free now with a 30-day money-back guarantee!
Silicon Valley has started treating AI like a religion. Literally. This week, Adam sits down with Karen Hao, author of EMPIRE OF AI: Dreams and Nightmares in Sam Altman’s OpenAI, to talk about what it means for all of us when tech bros with infinite money think they’re inventing god. Find Karen's book at factuallypod.com/books
--
SUPPORT THE SHOW ON PATREON: https://www.patreon.com/adamconover
SEE ADAM ON TOUR: https://www.adamconover.net/tourdates/
SUBSCRIBE to and RATE Factually! on:
» Apple Podcasts: https://podcasts.apple.com/us/podcast/factually-with-adam-conover/id1463460577
» Spotify: https://open.spotify.com/show/0fK8WJw4ffMc2NWydBlDyJ
About Headgum: Headgum is an LA & NY-based podcast network creating premium podcasts with the funniest, most engaging voices in comedy to achieve one goal: Making our audience and ourselves laugh. Listen to our shows at https://www.headgum.com.
» SUBSCRIBE to Headgum: https://www.youtube.com/c/HeadGum?sub_confirmation=1
» FOLLOW us on Twitter: http://twitter.com/headgum
» FOLLOW us on Instagram: https://instagram.com/headgum/
» FOLLOW us on TikTok: https://www.tiktok.com/@headgum
» Advertise on Factually! via Gumball.fm
See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
Transcript
This is a HeadGum Podcast. I don't know the truth. I don't know the way. I don't know what to think.
I don't know what to say.
Yeah, but that's all right.
That's okay.
I don't know anything.
I don't know anything.
I don't know anything.
I don't know anything.
I don't know anything.
I don't know anything.
I don't know anything. I'm Adam Conover.
Thank you so much for joining me on the show again.
This week on the show, we are once again talking about a Sam Altman company.
You might've joined me last week when I did a whole episode on my ill-fated decision to
do a video for his company, World.
Very funny fuck up on my part that I very much regret and I discussed it extensively. Check out that video.
I'm sure you'll enjoy it more than I did at any rate.
But this week we're talking about Sam Altman's main gig,
OpenAI, because this company is, simply put,
one of the most important things happening
on earth right now.
And by important, I don't necessarily mean good.
See, Sam Altman is out there telling Congress,
podcast interviewers, the press, anyone who will listen,
that he is in the process of building
an artificial general intelligence
that is going to destroy the world,
and yet he needs as much money as possible
to build it as quickly as possible.
He has raised more money than any tech company
ever has in history, and he's allied himself
with the Trump administration
in an attempt to get even more government support to build his AI even more quickly.
Now, if you're a skeptical person, you might listen to that and think,
this sounds ridiculous.
I mean, can this man actually believe that he's creating a god out of silicon chips?
Well, actually, as my guest today on the show is going to argue,
yes, the AI industry from the inside actually
resembles in many ways a religious movement, a religious
ideology about the future. And whether or not that ideology is true, even more importantly on a material level,
these companies are transforming our world,
and she compares them to the imperial powers of the late 19th century, arguing that these AI
companies are literally empires unto themselves. This interview was absolutely
fascinating and gripping. I know you're gonna love it. Before we get to it
I just want to remind you that if you want to support this show, you can do so on Patreon. Head to patreon.com slash Adam Conover.
Five bucks a month gets you every single one of these episodes ad free,
helps us bring these wonderful interviews to you week in and week out.
And of course, if you want to come see me on the road, I'm doing stand-up comedy.
Head to adamconover.net for all my tickets and tour dates.
Coming up soon, I'm headed to Oklahoma, Washington state.
We're adding new dates all the time, adamconover.net.
And now let's get to this week's interview with Karen Hao.
Karen is, simply put, one of the best reporters working today
covering OpenAI and the entire AI industry.
And she has a blockbuster new book out now
called Empire of AI: Dreams and Nightmares
in Sam Altman's OpenAI.
Please welcome back for her third time on the show,
Karen Hao.
Karen, thank you so much for being on the show again.
Thank you for having me back.
It was wonderful to talk to you.
I think it was a little over a year ago,
maybe a little bit more, we were talking about
the crisis at the top of the leadership of OpenAI.
That's right.
Since then, Sam Altman has retaken the reins.
The company has only gotten bigger and more powerful.
What is its place right now in the tech world
and frankly in America at large?
Yeah, that's a great question.
I mean, it's interesting because within Silicon Valley,
I think its position in terms of being a leader
in research has weakened.
It no longer really retains its dominance
in terms of the cutting-edge nature of
its models.
There are a lot more competitors in the space.
They're catching up quickly.
There's also a lot of open source models that are rapidly catching up.
But in terms of OpenAI's position in the US and the world as both an economic and political power, it has certainly grown because of Sam Altman's ability to very strategically
maneuver himself into positions of power and align himself with other people in power.
So he has very effectively aligned himself
with the Trump administration and President Trump himself.
Most recently, he was in the UAE with President Trump,
by his side, striking deals in the Gulf states to try and continue getting more capital and
building more data centers around the world.
So from that perspective, it has truly elevated itself to a new echelon of power.
And how has Altman done that and what is he able to do that other tech executives are not?
He is a once in a generation storytelling talent.
He is just able to really paint a persuasive vision
of the future and to get people to really want
a piece of that future.
And he also has a loose relationship with the truth.
So when he's meeting with individuals,
what comes out of his mouth is more tightly correlated
with what they wanna hear
than with what is necessarily true.
And I think this is incredibly effective with President Trump.
I mean, it's effective in general with many, many people,
but it is-
Yeah, but Trump loves to be told
what he wants to hear specifically.
Yes, it is effective with President Trump.
And I think he essentially sold Trump on this idea
that the Stargate Initiative,
having huge gobs of investment come into the US
for building out compute infrastructure
and also building out compute
and bringing American AI infrastructure all around the world
could be part of his presidential legacy.
And so I think that is what's enabled him to facilitate
this very tight coordination with the government,
the US government.
But you're saying, I wanna drill down a little bit,
you're saying that he's a really good liar,
basically, that he is really good at convincing people
to do what he says and give him money.
That's a little bit different from being,
say, a once in a generation product talent or engineering talent
or, you know, like a Steve Jobs figure.
I'm sure Steve Jobs was also very persuasive, right?
But he also had a talent for product design
and that sort of thing.
And Steve Jobs also had a talent for storytelling
and for not necessarily engaging in the truth.
And I think Altman very much worships Jobs in that regard,
and a lot of Silicon Valley worships that kind of ability
to craft extremely persuasive visions of the future.
And so I do think Altman is very much a product
and a pinnacle of Silicon Valley.
Yeah, but in some ways it seems to be,
like when I look at Sam Altman,
I see someone who is spinning a vision,
but I'm unsure how much reality is behind it.
And that's just the vibe that I get, you know?
Like, yeah, Steve Jobs was like a great salesman,
but he was holding an iPhone.
You know what I mean?
He was holding an iPod.
And a lot of my criticism of OpenAI has been,
hey, you know, ChatGPT is like pretty useful.
What is the case for all of this massive investment though?
How much improvement has there actually been, et cetera?
It seems to me like the storytelling
is more divorced from the reality.
But again, I don't dive into it nearly as closely as you do.
What does it look like to you?
I think you're hitting on a very,
very important observation here,
which is that unlike a physical product like a smartphone,
AGI or AI, artificial general intelligence,
this is so poorly defined as a term.
And so if your objective
is to race towards this unknowable goal, yeah, there's going to be a huge divorce
between narrative and reality because no one can really articulate what this goal is and
what it's going to look like and who it's going to serve. And I think this is very much a product of the fact that AI as a field,
even back when it first was founded in the 1950s,
it pegged its own objective on this idea that they wanted to recreate human intelligence.
And to this day, we scientifically have no consensus on what human intelligence is.
Right.
And so if you're trying to recreate something
that we still don't really understand,
yeah, you're gonna get a lot of hand waving.
You're gonna get a lot of future vision painting
without actually any grounding in concrete examples,
concrete details or concrete facts.
Yeah.
And so you get this effect where Altman goes on these
podcasts, goes before Congress,
and sort of tells this story of AGI is coming
and it's going to do XYZ and, well,
what is that story that he is telling?
I'm sure you've heard him tell it many times.
What is the sort of core of it?
It has gotten more dramatic over time.
The more money that OpenAI needs to raise and the more they need to ward off regulation,
the more the stakes rise. So, you know, there are some core, like, tenets in the AGI mythology.
One of them is AGI is going to cure cancer. It's going to bring us super affordable, amazing
healthcare to everyone. It's going to solve climate change,
it's going to wave the wand and wipe away poverty.
But you know, he said in a blog either at the end of last year or the start of this
year that we are now entering the intelligence age and the things that will happen in this age are so profoundly utopic that we can't even imagine them.
So he was upping the ante saying, you know,
even curing cancer and solving climate change
is not sufficient to contain or describe
the sheer orders of magnitude of abundance
and prosperity and goodness that is going to come.
I mean, how is this not a religious cult?
You know, like that sort of, yeah, in the future,
I can't even describe to you all the good things
that you're gonna get.
Like it's beyond the bounds of human language
to even begin to list all of the wonders
that AI will bring you.
I mean, this is, it's by definition sort of nonsensical
and yet people are doing what he says as a result of it.
Yeah, I think it is exactly right to think of this
as a quasi-religious movement.
And one of the biggest surprises, I think,
when I was reporting on the book was how
much of a quasi-religious atmosphere surrounds AI development and sort of has gripped the
minds of people within Silicon Valley who are working on this thing. And there are two sides
of this religious movement. They're all within the religion of AGI, but there's one side that's saying AGI will bring
us to Utopia and the other one that's saying AGI will kill us all.
But ultimately, it is all kind of rooted in a belief.
It's rooted in belief.
There's not really evidence that they're pointing to.
It is their own imagination that is sort of projecting their fears, their hopes, their
dreams of what could happen.
And they also very much have this narrative when they paint this religion that because
the stakes are so high and really this is a make or break it moment for humanity, that
they alone are the ones that have the scientific and moral clarity to control our progression
into that future.
It's such a strange pitch though.
Like the pitch is there's a meteorite,
there's an asteroid coming for the earth
and it's gonna wipe out humanity.
And also I'm the one creating the asteroid.
Like I'm in charge of the asteroid.
And so I'm in control of how it's going to hit.
And so you want me to, you want to be on my good side
so that it doesn't hit your city?
Is that basically the idea?
Like it's so strange.
Yeah, it is.
I mean, it takes religious rhetoric to a different level
in that you don't believe in a God that is higher than you.
You believe you are creating the God.
And, you know, the thing that was surprising was I thought this was just rhetoric originally, and it's not, for many people.
For many people, it is a genuine belief that this is what they are doing.
This is their purpose.
Especially for people in the doomer category, the people who believe AGI will kill humanity, I interviewed people who had very sincere emotional
reactions when talking to me about the possibility that this could happen.
Their voices quivering, them having just a lot of anxiety and a lot of stress about
really viscerally feeling that this is a possibility.
And I think, I mean, that anxiety is a really core part of sort of understanding
how AI development is happening today, and the thrash, and all of the headlines and the drama and the board crisis.
Because when you put yourself in the shoes of people who genuinely think
that they are creating God or the devil,
that is an enormous burden to bear.
And I think people really do kind of cave under that pressure.
Yeah.
I mean, if I met anybody who said that their job was creating God or the devil
and trying to choose which was which, I would say you need psychiatric help. Like, I would be concerned for the person, you know, because in everyday
human life, I don't really think it's possible to do so.
I know that these folks have intellectually
convinced themselves that this is the case,
but when you're saying this, it's part of what makes me go,
okay, is this entire industry not insane?
You know, that people believe this,
or am I really meant to take their side
and take their word for it, that this is what they are doing?
Or is this like a mass delusion
that's happening within the organization?
I mean, if you talk to people in Scientology, right?
They'll say, oh no, I really, we really have to, you know,
free the thetans or, you know, Xenu is gonna come from,
like they believe it, right?
And they have a whole system of thought
and you can't really talk them out of it
and they can be very convincing when they talk about it,
but you have to take a step back and go,
hold on a second,
you're in this sort of mass delusional organization.
Is that what OpenAI seems like or what?
You know, not just OpenAI,
I think Silicon Valley has gone on a progression
in the last 20 to 30 years where it originally started as a
group of renegades thinking, we can change the world, but without actual
evidence to substantiate that, just big bold ideas.
And then there was the era in which Silicon Valley companies did change the world.
And for a while people thought it was good and then people realized that it was not so good.
And now we're sort of entering another era
where all of the people within Silicon Valley
have already seen the profound impact
that their own creations can have.
So I think that's sort of what's happening is
you already have evidence that the actions
you take can have global impact.
And the stories they tell themselves about the morality they have to uphold or the responsibility
that they have to uphold in that kind of environment where there is a lot of evidence pointing
to how important their decisions are,
creates that kind of quasi religious fervor
around the whole thing.
Because in the past, Silicon Valley
has made massive disruptions to a way of life.
New communications technology, they've wiped out,
I don't know, taxi cabs, whatever,
we can go down the list of things.
But a lot of times when I'm looking at the promise
of AI slash AGI, it seems like they are trying to postulate:
well, this is how we got all the money in the past,
by creating all this disruption.
So we've got to like postulate
the biggest disruption possible
and then we'll get the most money
because that's sort of like our fundamental sales pitch
to the US government, to Wall Street, to humanity.
It doesn't mean it's true.
There's lots of companies that, you know, I don't know,
Theranos or whatever, right, have said,
we're gonna change everything.
And it was just, they were just saying it
and it wasn't true.
And OpenAI and the AI industry, more than other companies, do sort of look like
they're playing out this sort of thought experiment of, okay, we're building some cool technology
now, but we're, you know, because of that, then B will happen, then C will happen, then
D will happen, then E will happen, then we'll have created God.
And it's, I mean, do you find it credible?
No.
We're seeing Silicon Valley evolve into the most extreme
version of itself, but no, we should not be buying
their word.
We have plenty of evidence from the past to know that we need
to be extremely skeptical of what they're
selling us because ultimately, as you said, they create these narratives because they
realize in the past that these are the narratives that help them make money. And we are now
in an era where it's not just money, there's also ideology, quasi-religious ideology that
is driving the whole thing.
But yeah, we still need to be deeply, deeply skeptical.
It's the same people that gave us social media and smartphones, and now we've pretty conclusively
determined that these are not actually the most profoundly beneficial tools in our
society, or to individuals, or to kids.
And those are the same people that are creating AI. So we need to take a step back and recognize that.
Folks, let me share a secret with you.
I'm a very private person, and that's the only secret I'm going to share with you.
Because again, I'm a very private person.
When I'm browsing the internet or working online, I don't want anyone hanging over my shoulder,
breathing their hot swampy breath right into my ear
as they watch what I'm doing.
If you wanna keep your ears free
from that hot and sticky swamp breath,
you need to get yourself a virtual private network.
And that is why I recommend NordVPN,
a VPN to help mask your IP, your location,
and stop digital swamp breath in its tracks.
If you've never used a VPN before,
it does not get simpler than NordVPN.
Whether you use a Mac or a PC, an iPhone or an Android,
you can connect to NordVPN with one click
or enable Auto Connect for zero click protection.
Once you're connected, you'll find that you have
amazing speed and the ability to connect
to over 7,400 servers in 118 countries.
Traveling abroad?
Well, you can stay connected to your home country
to make sure you don't lose access to region locked content
on streaming services.
And all of this with the joy of knowing
that no one is leering over your shoulder.
So to get the best discount off your NordVPN plan,
go to nordvpn.com slash Adam Conover.
Our link will also give you four extra months
on the two year plan.
There is no risk with Nord's 30-day money-back guarantee.
The link is in the podcast episode description box.
Check it out.
Folks, you know, we've talked a lot on this show
about political polarization in America,
how we're stuck in media bubbles,
and how it's so hard to know
whether the information that you're getting
is accurate and unbiased.
Well, you know what I use to help me wade my way
through the thicket of American political media?
Ground news. Ground news is this awesome news aggregator.
They gather up all the news for you and give every single source a bias and a factuality rating.
So you know if the source you're reading is from the center right, the far left.
That doesn't mean that what's in it is false.
It just means you should know the perspective that they write from.
The same goes for the factuality rating, where ground news gives you an actual rating of how factual each source is,
so you can avoid misinformation
and know that you're getting the real deal.
We use Ground News on this show in our research process,
and I think you are gonna love it as well.
So if you wanna break out of your bubble
and make sure you're getting the real story,
you can get 40% off a membership
if you go to groundnews.com slash factually.
Once again, that's 40% off
if you go to groundnews.com slash factually.
This episode of Factually is brought to you by Alma.
Do you get the feeling that life is just the brief moments
that happen between social media doom scrolling sessions?
You know, personally, I've had the feeling on occasion
that my life is just some kind of cruel,
perpetual motion machine that takes in a human experience
and outputs weapons-grade anxiety.
It's in times like this that I've realized that nothing,
nothing is more important than meaningful human connections.
That's why, if you're seeking some help in dark times,
I recommend looking at Alma.
They make it easy to connect with an experienced therapist,
a real person who can listen, understand,
and support you through whatever challenges you're facing.
I can tell you firsthand how much finding my therapist
who understands me actually helped me on my journey of mental health,
and you can find your person too.
With Alma, you can browse their online directory
and filter by what matters most to you.
And then you can book free 15 minute consultations
with therapists you're interested in.
Unlike other online therapy platforms
that just match you with whoever's available,
Alma lets you choose someone you truly connect with
because the right fit makes all the difference.
With their help, you can start seeing real improvements
in your mental health.
Better with people, better with Alma.
Visit helloalma.com slash factually to get started
and schedule a free consultation today.
That's helloalma.com slash factually.
I mean, like there's good things
about social media and smartphones, but social media in
particular, it's just been a way to sell ads and centralize eyeballs.
It's not like it's had some giant purpose at the end of the day.
It's just, it's just businesses connecting the world.
Right.
Exactly.
But at the end of the day, it's just some dumb asses with a lot of money, you know,
just trying to gobble up eyeballs and money, just like business people always have
and doing it in a profoundly disruptive way.
So, you know, what would be different about this?
But let's just stay on the quasi-religious piece of it for a second more.
Like, if these folks genuinely believe that they are helping to usher in,
like, a new form of intelligence, a new god or devil,
then why are they doing it if they're afraid of it?
And how do they convince themselves
that that's what they're doing?
I think there is a very critical part of the narrative
where if we don't do it, somebody else will,
and that somebody else could be a very, very, very bad actor.
So the only way to ensure that we are going to get to some kind of positive outcome
is by doing it ourselves.
That's such a hubristic thing to think.
Everybody thinks they're a good actor.
Who's a worse actor than Silicon Valley?
I guess they say China sometimes.
It's like, yeah, China's bad in some ways, right?
In many ways, whatever.
Are you talking about Chinese corporations?
Are you talking about the government?
Yeah, plenty of criticisms of China.
But I also have plenty of criticisms of Silicon Valley.
Why should I accept that they're the good guys?
Why do they think they're the good guys?
Because they, I mean,
this is what Silicon Valley runs on, right?
Is self-belief.
And, you know, one of the interesting things
that I kind of reported on in my book is,
there are lots of enemies.
The bad guy evolves.
China is definitely one that recurs,
but within OpenAI,
the origin story of the company was they were trying to be
the antithesis to Google.
So Google at the time was the evil corporation
that's going to be developing AI purely with for-profit capitalistic motives,
and we need to be the nonprofit that's going to be a bastion of transparency
and do AI development in service of the public good.
And Google and DeepMind have continued to be very much a competitor,
upheld as a "we do not want to be this, and this is why we are continuing
to relentlessly pursue this race to win,
because we need to get there before Google."
And there've been others. All of the AI companies now
have sort of different angles
where they imagine themselves as the best of the crop.
So Anthropic also, Anthropic was founded
by a group of ex-OpenAI people.
It was a fissure in the original group
of OpenAI leadership where the Anthropic group then decided,
we think we can do this better and we need to be the
ones that create a different vision of what AI is to outmaneuver open AI.
We're the good guys, they're the bad guys.
And ultimately, what's interesting is that even as all of these companies have their
own self-defined narratives of their self-worth and value being higher than others,
they're all pursuing the same thing,
which is large language models, scale, scale, scale,
and growth at all costs.
I know that OpenAI initially started as a nonprofit of some kind.
You've written that OpenAI has become everything
that it said it would not be.
What do you mean by that?
So OpenAI's original founding story was
Sam Altman had this idea for an AI research lab.
He wanted to recruit Elon Musk to join forces with him.
And Elon Musk at the time had a particular thing
against Google and DeepMind,
Demis Hassabis.
And so Sam pitched him this idea,
why don't we create the anti-Google,
the anti-DeepMind, and we'll counter the way
that Hassabis is conducting himself
with a completely different approach.
And I touched on this earlier.
They then commit to being totally transparent, open
sourcing their research, not having any commercial objectives, and serving this higher purpose.
What I ultimately in the book call a civilizing mission, because I really think we need to
start understanding these companies as empires: we are working to ensure
that artificial general intelligence
will benefit all of humanity.
And essentially, if you look at what OpenAI is today,
I mean, it's a complete 180.
It's a for-profit corporation.
I mean, it is still a nonprofit
with a for-profit nested inside,
but it is the most capitalistic organization that you could point to in Silicon Valley today.
It just raised $40 billion, which is the largest fundraising round of private investment ever in the history of Silicon Valley,
and put the company at a $300 billion valuation, which makes it one of the most valuable startups ever.
That's something nonprofits normally do.
Yeah, right.
And it doesn't release research anymore.
And in fact, a lot of what it did
through the course of its history
was essentially reestablish new norms
within the entire industry,
the entire AI field, to stop releasing meaningful technical details
about AI systems at all.
So not only are they not transparent themselves,
they have turned the rest of the field and the industry
towards totally opaque norms.
And, you know, they are pursuing commercial objectives.
They are relentlessly releasing new products,
trying to growth hack to get more and more users
as an OpenAI source very recently told me.
And they are basically the most Silicon Valley
of Silicon Valley companies now,
even though they originally portrayed themselves
as the opposite.
Wait, so you say they're growth hacking
to increase their users.
I remember when ChatGPT came out,
it was supposedly one of the biggest product launches
in tech industry history in terms of how many people
used it.
And of course the story has to be one of incredible growth
in, you know, uptake of AI usage if they want to keep getting
the investment. So why would they have to growth hack
in order to show growth if the product
is so transformative?
It's a great question.
You know, Facebook also did the same thing.
They also said that their product
was incredibly transformative, but they also,
I mean, they practically invented growth hacking as a company by creating a growth
team and turning it into the core of the company.
And that became a model in all Silicon Valley companies where all startups now have
growth teams. And that is a really important part of showing investors hockey stick
numbers. They want to keep showing this rapid rise in the number of users that are signing on
to the platform.
Altman gave testimony in the Senate just a couple of weeks ago, and I think he
said that there were 300 million active users on OpenAI's ChatGPT today.
That is still, compared to other internet giants, low.
Yeah.
In absolute numbers and also in Altman's mind.
And so, you know, the Miyazaki stunt that they pulled.
Sure.
That, afterwards, OpenAI was super pleased that they were able to get a million new users from,
or it might've been more than that,
but they were able to get a ton of new users
from that particular feature that they added.
And that is basically-
The feature that let you create a selfie
that looked like a Studio Ghibli movie.
Exactly, yeah.
This was their big accomplishment.
I saw those selfies, but I'm like, who gives a shit?
Like that's not a transformative product, right?
It's just like, there have been little fads like that
for the past couple of years, even before ChatGPT.
Oh, I did the watercolor AI of my face.
Like it.
Well, you know, it's so interesting.
I was in Europe and I was just, whenever I'm traveling,
I always will randomly ask people,
oh, have you heard of OpenAI?
Have you heard of ChatGPT?
Have you heard of AI?
Like, what are your thoughts on it?
And I spoke to a woman who was like,
I hadn't heard of it until recently
where I realized I could make a cartoon of myself
and that's super cool.
And, you know, it's a really effective tactic
for getting more users and getting them engaged
and reaching people that they haven't reached yet.
Okay, fair.
It's like a big wide funnel.
And then maybe those people say,
okay, now help me cheat on my math test
or whatever after they do the Miyazaki art.
But it also highlights a major criticism of this technology
that a lot of people, especially artists, have: they feel
that it's institutionalized theft
and the fact that their biggest sort of news moment
of the past couple months was, you know,
lifting the style of one of the most famous artists
in the world in an unauthorized fashion, I assume.
Hayao Miyazaki is not receiving a couple pennies
every time someone makes themselves look like Ponyo,
despite the fact that it is trained on his work.
I thought it was an odd stunt for that reason
because it highlights one of the main moral objections
that people have to this technology,
which is that it's built on the back of all of humanity
in a way that we are not being compensated for.
Absolutely, it is weird to say.
And I do think it kind of signals a phase shift
in how OpenAI is now engaging.
There was a period in which I think they were more cautious
about trying to portray themselves as listening,
attentive, democratic in the
way that they were receiving feedback and adjusting themselves.
And I think they have now moved to a different era where they are just running and racing
and they're not as concerned anymore about the ripple, the negative ripple effects it can cause.
If it also allows them to do what they need, which is they need to monetize.
They are losing massive amounts of money. They are raising massive amounts of capital.
They need to figure out how those trains are not going to crash. Yeah.
And so I think, yeah, the Miyazaki thing definitely exemplifies this pressure, the priorities that they have now as an organization.
How do they plan to monetize?
Well, it's interesting that they recently hired a new CEO of applications, Fiji Simo,
and she has a career where she has been, she has a lot of experience with advertising.
Altman has indicated publicly that they are, they need to figure out a plan for monetizing
the free tier of users.
So I think they're going to go the way of all Silicon Valley companies when they start
looking for some kind of cash cow is advertising, advertising off of the data that they're collecting.
And I was speaking to another OpenAI source at one point who mentioned that one of the best
business models that still has not been superseded within the Valley is search.
What he meant was advertising, like being able to get users information in exchange
for getting their information to then package out to
the people with the money. And so that is absolutely one thing that they're
exploring. They're also exploring subscriptions, but you know the price
tags that they're putting on these subscriptions now, hundreds of dollars
a month, they're considering thousands of dollars a month, is not going to be
appealing to the average user. So they have to balance it with also the majority of users,
how they're gonna monetize them for free.
But a business model where they're imagining
people are gonna go to ChatGPT to ask questions
and ChatGPT is gonna give answers
and then also serve ads that are based
on the previous questions, that is Google, right?
It's Google with a different style of delivering the answer
and with a different sort of database
because it's based on a large language model
rather than like searching the internet.
But that's, so okay, they might supplant Google.
Google's a great big company,
one of the largest in the country.
It's not transforming the entire global economy
and replacing humanity.
It's just like, okay, so their aspiration is to be Google?
That doesn't sound as big as what they are describing to me.
Yeah, absolutely.
I think there has always been a divergence
between what they say and what they're doing.
Yeah.
And it has reached a new level now that money
is a much more pressing topic, the issue that they need to address
urgently.
What do you mean when you say we want to understand these companies as empires?
So what I write about in the book is when you think about the very long history of European
colonialism and the way that empires of old operated, there were several different features of empires of old. One was they laid claim to resources
that were not their own and they designed rules
that made it seem like they were their resources.
You know, the Spanish conquistadors showed up in the Americas
and were like, actually based on our laws,
we own this land and these minerals and these resources.
They would exploit a lot of labor around the world, meaning they didn't pay workers, or
they paid them very little to continue building up and fortifying the empire.
They competed with one another.
There was this aggressive race of, we, the French Empire, are better than the British Empire. We, the British Empire, are better than the Dutch Empire.
And we need to continue to relentlessly race
and be number one, because we're the ones
that have the right civilizing mission
to bring modernity and progress to all of humanity.
Ah.
That is-
We are morally superior.
Exactly. And that is literally what is happening now with AI companies where they extract a lot of resources, they lay claim to a lot of resources that are not their own, but they're trying to position it such that it seems like it's their own.
For example, they're trying to make it sound like copyright
laws allow them to have fair use of artists and writers and creators' work to train their
models on. But ultimately, those models are creating very effective substitutes insofar
as they're taking economic opportunity away from those same artists, writers, and creators. Now, they are exploiting a lot of labor, both in terms of the labor that they're contracting to do all of
the labeling and cleaning of the data before it goes into the models, and also in the fact that
they are ultimately building labor-replacing technologies. OpenAI's definition of AGI is highly autonomous systems that outperform humans at most economically valuable work.
So they are building systems that will ultimately make it much harder for
workers to bargain for better rights when they're at the bargaining table.
And they're doing this in a race where they position themselves as morally superior to the other bad actors that they need to beat.
And they have this civilizing mission. If you join us and allow us to do this, if you give us all of the resources, all of the capital,
and just close your eyes to the enormous environmental, social, and labor impacts all around the world,
we will eventually bring modernity and progress to all of humanity.
And one of the things that I mentioned in the book is there is, you know, empires of old were deeply, deeply violent.
And we don't see this kind of overt violence with empires of AI today. But we also have to remember that modern day empires
are going to look different because we've had 150 years
of human rights progress and social norms have shifted.
And so what we need to recognize is that the template has evolved into the present day.
And all of the features of empire building are there.
And one of the analogies that I've started increasingly using
that I didn't originally put in the book,
but if you think about the British East India Company,
it originally started as a company
that was doing mutually beneficial economic agreements with India.
And at some point, an inflection point happened
where the company realized that they could start acting
completely in their self-interest with no consequences.
And that is when it dramatically evolved
into an imperial power and then eventually became a state asset, and the British Empire, the crown, then turned India into an official colony.
And we are seeing that play out in real time where OpenAI and all of these empires of AI,
they are gaining so much economic and political leverage in the US and around the world. And they
are so aligned and backed by the Trump administration now that they have reached a point,
I think they have reached a point where they basically can act in their self interest
with no material consequence to themselves anymore.
And this is just,
if we allow this to continue,
I think it can be profoundly devastating.
I mean, what an incredible comparison
between OpenAI and the East India companies.
And one of the things that strikes me
is how it leverages the public hatred
for Silicon Valley companies.
15 years ago, we all loved these companies.
They were like bright shining beacons
in the American economy.
They were so warm and fuzzy.
And then gradually we start to go,
ah, Google's kinda fucking me.
Ah, Apple, I'm kinda pissed off at them. And oh, these are just, they're the new Wall Street, right?
The public discontent is growing.
And so these companies have sort of adopted
some of that language and sentiment,
say, yeah, yeah, they're all corrupt,
except for us, we're the good one.
We're the one who's gonna save you from the bad ones.
And they're all doing it.
Like Anthropic says it's about open AI, et cetera, et cetera.
Yeah.
But it's a tactic to gain power for themselves.
Yeah, exactly.
And the public discontent that has been rising
over the last decade really is based on the fact
that people feel like they're losing control
and agency over their lives.
And there's a reason for that: these companies are gaining more control
and agency over your life.
Yeah.
They are taking your data and most people feel like there's nothing they can do about it.
You know, they just enter this nihilism where they're like, well, we don't have any privacy anyways, so whatever.
But they're left with this feeling of a lack of control,
a lack of self-determination.
And that is ultimately what I really hope
that readers can take away from the book,
is that this is a continuation, an evolution, and the most extreme version of what we've ever seen before in the way that Silicon Valley has eroded away our individual and institutional foundations for self-determination.
Yeah.
When you talk about these companies as empires
that are extracting resources,
you know, I was just in Amsterdam on tour
and I went to a couple of museums
and it was just very apparent to me Amsterdam
as this like physical manifestation of Dutch empire, right?
That like, I went to the Rijksmuseum
and they just had one or two paintings
about like Dutch colonies.
They're like, oh, here are the indigenous people of Java
like planting sugar cane or whatever it was.
I went there in October last year.
Oh, amazing.
And there's like literally like one or two paintings, right?
And the whole museum, the rest of it is like,
here's a beautiful, you know, Dutch,
this is worth $10 million.
And here's the super sophisticated mapping technology
that we developed in the compass
and navigation technology we developed.
And yeah.
But then there's just a little acknowledgement
because they know, but they can't really acknowledge fully,
this was all extractive, right?
And so as in the city going, man,
they extracted wealth and labor and blood from countries,
from other civilizations around the world.
They turned it into this physically gorgeous city.
It's wonderful. Everyone who goes to Amsterdam says, what a beautiful place.
But it was taken from other places, right?
And it was created there.
And now it's just, it's been there for, you know,
a couple hundred years at this point.
So we're familiar with that kind of extraction.
When, with this type of empire,
who are they extracting from and what are they extracting?
They're extracting from everyone.
They're extracting data from everyone,
but also they're extracting actual physical minerals
from the earth as well,
because in order to train these colossal AI models,
which is not an inevitable manifestation of AI,
it is very much a choice that Silicon Valley made
to build models that manifest the growth at all costs
mentality that they have.
They need an enormous amount of computational infrastructure, which data centers and supercomputers,
and that is built on minerals that come from somewhere. And so part of the book, I ended up
going to Chile, to the Atacama Desert, where it has long dealt with all kinds of extraction,
but that extraction has really accelerated
because of two things,
because of the electric car revolution,
the Atacama Desert has a lot of lithium,
and because of AI, they have a lot of copper,
and lithium is also needed in data centers.
And there are indigenous peoples there that are literally being displaced and literally
experiencing colonialism right now.
It is not a thing of the past for them.
They are having their lands taken.
They're having their economic opportunity taken.
They're having their spiritual grounds taken.
The place where they engage in their connection with the Earth.
And they said to me when I was interviewing
the indigenous communities there,
we have always, always, always been told these ideas
about this will bring everyone into the future.
This extraction, this hollowing out of our lands is going to bring everyone into the future. And they're like, are you sure it's everyone? Like, who is this bringing into the future? Because this is hurtling us backwards in time, where we have less rights, less resources, less economic opportunity than ever before.
And tell me about the piece where OpenAI is becoming allied with the U.S. government,
because that's another really strong comparison to these colonial empire companies of the
past. When did that start happening? Was it specifically with the Trump administration
and how has Sam Altman made that happen?
I think the most symbolic moment happened
on day two of the Trump administration
when President Trump stood in front of an audience
at a podium next to Sam Altman
and announced the $500 billion Stargate initiative.
So this is an initiative that's going to invest,
it's private investment, 500 billion, into
building compute infrastructure.
And OpenAI has said that this is for it alone, itself alone.
And that was a very, very strategic and clever move by Altman because at the time what was
happening was OpenAI was in a bit of a fragile position where it was being sued left and right by lots of different
groups and most importantly by Elon Musk, original co-founder that then got
snubbed and has given a lot of grief to OpenAI in the recent year. And Elon Musk
had also bet on the right horse
and had gotten himself elevated
into an extremely prominent position in the administration.
The head of the department of whatever, Doge.
I don't remember any of this.
What was he, Doge, what is that?
Elon Musk is in the government.
I just, I guess I was asleep.
I somehow didn't.
Yeah, we all blacked out.
It's not like I've made a dozen videos about that so far this year.
Uh, so go on please.
Yes.
Um, and so, OpenAI was in this position of, oh, the man that wants us to not do what we do is now extremely powerful.
And so what Altman did was he started negotiating behind closed doors to
get himself into basically the same position. The one person at the time that could protect him from Musk was Trump.
So if he allies himself with the president by striking up this thing of you take credit for your administration bringing in $500 billion of investment
for computational infrastructure that is going to keep America first in the AI race. You take credit for that and then in exchange Altman got a shield.
And so I think that is one of the most symbolic moments in how OpenAI has allied itself with, you could argue, maybe the only power that
was higher than Silicon Valley in that moment, the US government, because Silicon Valley has more power than
basically every other government in the world now.
And Trump, the Trump administration has been all in since then in declaring, we don't want
anyone to talk about regulation.
You know, literally just this past week,
Republicans tried to slide in a specific line
within a tax bill that they're trying to pass
that says, that proposes to block all state regulation
on AI for 10 years.
Yeah.
So the Trump administration is doing,
is pulling out all the stops to try and make it
as frictionless as possible for these AI companies
to relentlessly drive forward.
And in fact, they're really putting AI
into the government itself,
a big part of Elon's Doge Initiative,
but also you see it echoed all different parts
and Republican administrations of the country
and the state and the various states is,
we're gonna fire all the government workers,
we're gonna replace them all with AI.
It's interesting to see the government be the first place
that is really affirmatively trying to do this,
whether or not it works, what do you make of that?
What a great way to turn what was public infrastructure into private infrastructure.
What were public workers, who earned public money and operationalized what elected officials determined needed to be done, turned into just automated AI systems
that are taking all of the public data,
government data, private citizens data,
and funneling it through company servers
to do supposedly the same thing,
but not really because these systems break down a lot.
Right, well, and their output is unpredictable
and they have weird hallucinations and everything else.
And, you know, maybe you fire a bunch of IRS workers
and run everyone's tax returns through AI
and suddenly it starts putting white genocide
onto everyone's tax forms, you know?
Like, well, that's, you know what?
That's its own story.
And that might be more of an Elon story than an AI story.
No, but I think it is a very effective, you know,
I think that moment was a great way to highlight
the fact that we don't have any checks on these companies
and how they are going to design their AI models
and what kinds of values they use this vehicle
to ferry out into the world.
Right.
And usually it's not so overt.
In this case it was, and it really showed what's actually under foot.
But that's a problem.
It usually is much more subtle, but OpenAI has said, you know, when President Trump came into power, they said, we are going to start relaxing.
Like, we don't want to be so heavy-handed in content moderation.
You know, that, that's a political choice.
They are trying to, in more ways than one, align themselves with Trump by making sure
that their technologies are not going to spark the ire of the president and are
shifting with the political winds
of who's in power.
Even though they've really dedicated themselves to Trump, aligned themselves with Trump, so much of American society and business and punditry has aligned itself with AI, has swallowed it.
I'm thinking about, you know,
I interviewed a couple of weeks back,
Ezra Klein and Derek Thompson on the show
about their book, Abundance.
A lot of good things about the,
they make a lot of arguments in the book,
some of which I agree with.
There is a page or two in the book
where they're talking about the importance
of government investment in science generally.
Certainly agree with that.
Government is like, there's so many amazing innovations we never would have had if the government hadn't invested in the basic research.
And then in the course of that argument,
they say, well, AI is the next big thing.
And the government could invest billions
and billions of dollars into AI data centers
and make sure that America has a lead in AI,
because that's where everything is going.
And I got to that part and I was like,
this is just, have you been listening
to a lot of Sam Altman
podcast interviews?
You know what I mean?
Like this is, basically that to me,
sounds like a handout to these imperial companies,
as you say.
Do you view it that way?
And why have they been so successful,
even as they're aligning themselves with Trump,
who has very little in common with Klein and Thompson
in terms of his objectives,
you know, liberals have also started espousing
the same argument.
Have they fallen for a bill of goods?
So I agree with Ezra Klein and Derek Thompson
on the first part that AI will be the next big thing.
Where I disagree is what kind of AI are we talking about?
And the kind of AI that I'm talking about doesn't actually need massive amounts of data
centers and computing infrastructure.
AI has been around for a long time.
There are many different types of technologies that are actually named AI.
And the things that I think can be transformative are smaller task-specific deep learning models
or maybe other non-deep-learning AI systems that attack specific problems that we need solved, problems that also lend themselves to the strengths of AI.
So an example is AlphaFold, like DeepMind created AlphaFold to, a little bit in quotations, "solve" the protein folding problem.
That has nothing to do with large language models.
It has nothing to do with growth at all costs mentality.
It was a very specific problem.
Let's try and do this extremely computationally intensive task
that we previously didn't have the computational software for and unlock lots of different
types of potential new resources for scientists to do drug discovery and other kinds of really
interesting work.
I'm also talking about AI, like AI that can help integrate more renewables into the grid.
This is something that we really desperately need to do.
We need to continue transitioning our economy to a clean energy economy.
And one of the challenges of doing that is renewable energy is a very difficult to predict
source. Sometimes the sun shines, sometimes the wind blows, and sometimes they don't.
And in order to more effectively have more of that capacity in the grid,
there need to be better predictive AI systems that are figuring out what the generation capacity
will be in the short-term future and then optimizing who gets what energy.
And AI systems are incredibly effective at solving optimization problems.
And so there's all of these interesting problems in society that AI does
naturally lend itself to, but I think the way that we can get broad based benefit from AI technologies is by unwinding this scale and
growth at all costs mentality back towards, let's figure out
what are the specific problems that we need solved, that are sort of the linchpin issues that we need to crack, that also AI
is good at cracking, and then develop well-scoped AI systems
to tackle that very specific problem.
And that can be, I think, hugely transformative,
but that is absolutely not what we're doing right now.
Yeah, I mean, you described a few problems there.
Protein folding is an existing problem in biology
that I remember reading about at least over a decade ago.
There are various like distributed computing projects
you could join and like devote some of your CPU cycles
to like folding protein and like help out science, right?
And so if, sure, an algorithm that we might call AI
is good at solving that, that's great.
That's a great advancement.
Why then are these companies,
perhaps you've already answered this question,
but I'd love to hear you just talk about it again.
Why did these companies not take that strategy, right?
Why is it massive growth at all costs?
We need more compute.
We're going for AGI.
It's sort of this giant blob approach.
It's going to transform everything,
it's going to do everything, therefore we need everything
and nothing must stand in our way.
I always say that it's a result of three things,
money, power, and ideology.
If you take this approach,
you get to accumulate enormous amounts of money
and enormous amounts of power and enormous amounts of political and economic leverage.
And there is this deeper driving force, as we talked about, this quasi-religious force behind the whole thing, where there are people who genuinely believe that they are building God or the devil.
And that constellation of things leads to basically really poor decision making.
Yeah.
Where it really is all consuming this kind of effort to advance, advance, advance and grow and grow and grow and consume and consume and consume without
recognition of what's happening in the present, with all of the externalities that that causes.
Yeah, it seems to be optimized for growth rather than any kind of understanding of human society or humanity.
I'd actually love your take on this.
I don't know if you saw, over the last couple of weeks,
I was in a little internet firestorm of my own
because I did a promoted video for one of Sam Altman's other companies called World.
I eventually canceled the gig and turned down the cash,
and I have a video about it coming out.
It'll probably be out by the time this interview airs.
Right now I'm working on it as we're speaking.
But you know, this is Sam Altman's company
where there's an orb that you gaze into
and it proves that you're a human supposedly.
And then you can use that to log into stuff.
It's also a crypto wallet
and it's also like an everything app, right?
Where you can chat and you can do like everything else
you might wanna do on the internet with the app.
I went to their keynote and I felt, I don't even know how to explain this to a person, right?
I don't know what the pledge is to a user.
I don't know why someone would sign up for this.
It's like, the entire thing seems to be
created on this level where it's just meant to
get Marc Andreessen to give them
a couple more billion dollars every year, right?
Like, the pitch is to investors or to some sort of hazy notion of the future, rather than to the public itself. Like, it's people...
It's been made by people who have not, like, talked to another human being
in a couple of years.
I'm curious if you share that view.
Like, are these people completely detached from human society?
Yes.
And also, to your question of who would sign up for this, I was just in Indonesia.
Indonesia gave Sam Altman the very first gold visa, which is an investment visa that they give,
but they also give it based on other criteria, so it's not clear if Altman actually invested.
I was talking with a bunch of civil society folks and journalists in Indonesia about this, and their number one concern was World. And they said, this company is coming
in and it doesn't matter what the premise is, people are lining up out the door because
all they have to do is give up their biometric data for $50, $50 US dollars.
And in Indonesia, that is a huge deal.
And that happened in Kenya.
That happened in many, many other countries where that US dollar cash,
they don't need to know what it's for.
And I think this is what's so dangerous.
And also what I try to highlight in the book is like, there's so many conversations that
sometimes we have in the US where we just think about these technologies in the context
of the US, which is ultimately one of the wealthiest countries in the world.
And even the poorest people in our country, I mean, they're not well off, but
compared to the poorest people in the poorest countries,
there is still a certain level of a floor there.
And to really understand how these technologies, how these visions that Altman or anyone else
has developed, you cannot just understand it within the US context and certainly not
within just the Silicon Valley context, you have to go to these most vulnerable populations in the world
to see what happens. And with World, what happened is all of these extremely poor people were willing to just give away their rights for a tiny morsel of
cash. And we see that with the impact that AI is having all around the world as well.
With the labor exploitation piece.
I mean, these companies, when they contract workers to work on these
technologies, to clean the data and do content moderation, in the same vein as content moderation in the social media era, they are willing to do psychologically traumatizing work for pennies because that is the thing that will allow them to, for
just a day, put food on the table for their kids.
And so, when we talk about it, I think OpenAI's mission, as much as I criticize it, is a noble one that could be taken seriously.
The idea that you could develop technology
for the benefit of all humanity should be taken seriously.
We should be doing that.
That is what I would define as genuine progress in society
if we can lift all boats and not just continue
to only lift the ceiling and the floor continues to bottom out.
Um, and the only way to truly understand how we might be able to get there is to
go to these places where the floor is bottoming out right now and to understand
why and correct for that.
Yeah.
And your point is well taken that that is where those companies are going.
That is where, uh where they are thinking globally.
And we very rarely do in the United States.
We rarely think about the existence of those countries
and the people who live in them
and what their lives are like.
And the fact that they're the vast majority
of lives on earth.
But people like Sam Altman are thinking about those places
and how they can extract from them
and how they can exploit them in order to create
an empire for themselves.
And that's what makes it a colonial empire.
You're really painting that picture really vividly.
Yeah, absolutely.
I don't think you can really start to understand
the full scope of the empire and the colonial nature of it
until you travel to places that are the farthest flung
from Silicon Valley.
Well, I think the problem facing us then is,
look, I think critics of AI have a problem,
which is that this industry is so massive.
It is so massive,
it has created so much power unto itself.
It is so driving the conversation every moment of the day
that sometimes when you write about it or talk about it,
like I do, you feel like you're still
just a passenger on the train.
You feel like you're still almost contributing to it
because you are having the conversation that they are determining.
You said earlier, if we don't stop it, if we don't think about what they're doing, if
we let them do this, right?
That stuck with me because they have so much power. How can we stop them?
When it feels like even the very terms of our conversation
about what they're doing are dependent on their actions.
So how do we think about that
and how do we begin to make progress in the face of that?
First of all, I think you're articulating something
that is also central to empire building
is empires make you feel like they're inevitable.
Right.
But throughout history, every empire has fallen.
And it comes down to the fact that every empire, as much as they feel inevitable, also does have weak foundations in the sense that they need to consume so much in order to continue, that when there starts
to be resistance on all of the things they need to feed on to fortify the empire and perpetuate the
empire, it starts to crumble. And so the way that I think about it is there's a supply chain for AI
development. These companies need a lot of data, they need a lot of computational
resources.
And if you are to chip away at each of these, they will eventually need, they will be forced
to go a different direction and not continue this all-consuming path of AI development.
And so with data, you know, we're already seeing lots of movements of artists and writers starting
to sue these companies saying we need to figure out a much better way to either get compensation
and credit or to not have this in your training data sets at all.
We've also seen the way that artists have used tools like Glaze and Nightshade, which are tools that you can use to add a bit of a filter that the human eye can't see on your
artwork when you put it online in a portfolio.
But when the AI model tries to train on it, it starts to break down the AI model.
So there's all of these forms of protests that are bubbling up. And with labor, we're seeing Kenyan workers rising up and protesting their working conditions
and creating an international conversation around labor norms and trying to actually
guarantee them better wages, better working conditions.
We're seeing Hollywood writers rise up and demand certain stipulations around how AI can
be used and whether or not AI can be trained on their work.
We're seeing lots and lots of communities also rise up to demand more transparency around
data centers that enter their communities and have ground rules around what kind of
resources they can take, whether it's energy or water,
or whether the data center should be there at all.
And so if we can all just remember
that we actually do have agency in this situation,
like if you are a parent,
you can go to your kid's school and ask them,
what is their AI policy?
And can you actually create a coalition of parents to talk about what the AI policy
should be, and contest whether or not AI tools should be in the classroom, or what the
guardrails should be around when they are deployed?
You can go to your doctor's office, ask them the same questions about whether or not you
want AI to be used in your
healthcare journey.
And if we just remember that we have agency in all of these things and we
continue to assert what we want out of this technology and what the ground
rules are for how it impacts us and our lives, I think we will get to a much,
much better future.
I love that vision.
I think you also highlight, though,
how big of a battle it is.
Absolutely.
Because you have convinced me
that it is empire that we are up against.
And you know, the battles against the Empires of the past
took a couple hundred years, right?
Empires take a while to fall.
And these are just getting going.
And, you know, we don't live in Star Wars, right?
Where it opens with the Rebel Alliance winning, right?
It, we live in a world where, you know,
it could be a more grinding battle than that,
but we have no choice but to fight.
And I think I love your emphasis on our agency that so often we have this tendency
to roll over for these people
and just accept the premises of what they say
and what we have to do and oh well it's coming
so might as well get with it and start using this,
might as well build a data center
because we got no choice and just the process
of questioning these people
is really so important.
And by the way, I think it's brave for you to do so
when you're a reporter who speaks to so many of them,
for you to take this tack,
because so many reporters in your position
end up acceding to their framework, right?
Because they want the access
and they wanna be able to continue writing about it
and they sort of go native as it were.
And so the fact that you've remained a critical voice
while doing the incredible high level reporting you do
is really wonderful.
I thank you for doing it.
Thank you, thank you.
I mean, I've had a lot of mentors along the way
that have reminded me that ultimately your purpose
is to serve the public and to speak truth to power.
And so that is what I've tried to do consistently through my career.
And to your point about empires taking hundreds of years to fall, I mean, they
also originally took hundreds of years to create, but we are in a different time
when I think the rise and fall of empires is going to accelerate.
And also, in the past, there was no democracy.
There was no taste of what the alternative to empire could be.
We are now at a point in our progression as a human race where we
understand that there are other forms of governance and that we do not need to
capitulate to people who paint themselves as superior.
Well, I can't thank you enough for coming on
to spread the message with us and just tell us
about your incredible reporting.
The name of the book is Empire of AI.
Folks can pick up a copy at our special bookshop,
factuallypod.com slash books.
Where else can they find it and where can they find
your writing and work on the internet, Karen?
I am a freelancer now, so the best way to find me
is on my LinkedIn
or my other social media, Blue Sky, Twitter,
and through my website, karendhao.com.
Karen, thank you so much for coming on the show,
and I can't wait to have you back.
Thank you so much, Adam.
My God, thank you once again to Karen for coming on the show.
She's such an incredible guest.
If you want to pick up a copy of her book,
once again, that URL, factuallypod.com slash books.
Every book you buy there supports not just this show, but your local bookstore as well.
If you'd like to support the show directly, patreon.com slash Adam Conover.
Five bucks a month gets you every interview ad free.
For 15 bucks a month, I will put your name in the credits of the show and read it right now.
This week I want to thank Erin Harmody, Joseph Mode, Rodney Pattenham, Greg 0692, Marcella Johnson,
Matthew Bertelsen aka The Bunkmeister, Kelly Nowak, Anthony and Janet Barclay, David Sears,
VG Tank Guy, Damien Frank, Matthew, Robert Miller, Griffin Myers, and oh no not again.
If you'd like me to read your name or silly username on the show, once again,
patreon.com slash Adam Conover. Of course, you can find all my tour dates at adamconover.net.
I want to thank my producers, Sam Radman and Tony Wilson.
Everybody here at HeadGum for making the show possible.
Thank you so much for listening, and I'll see you next time on Factually.
That was a HeadGum Podcast.
Hey, I'm Jake Johnson, and I host the HeadGum podcast We're Here to Help. So do me a favor and come check out an episode and then bounce around our catalog.
We're over 150 episodes so far,
so there's plenty of stories for you to discover.
Subscribe to We're Here to Help on Spotify,
Apple Podcasts, Pocket Casts,
or wherever you get your podcasts.
New episodes drop every Monday,
and bonus episodes drop on Wednesdays.
Hi, I'm Jessi Klein.
And I'm Liz Feldman, and we're the hosts of a new
Headgum podcast called Here to Make Friends.
Liz and I met in the writer's room on a little
hit TV show called Dead to Me, which is a show about murder.
But more importantly, it's also about two women
becoming very good friends in their 40s.
Which can really happen, and it has happened to us.
It's true.
Because art has imitated our lives.
And then life imitated art.
Time is a flat circle.
And now.
We're making a podcast that's about making friends.
And we're inviting incredible guests like Vanessa Barron.
Wow, I have so much to say.
Lisa Kudrow.
Good feelings.
They're a nuisance.
Nick Kroll.
I just wanted to say hi.
Matt Rogers.
I'm like on the verge of tears.
So good.
So good to join us and hopefully become
our friends in real life.
Take it out of the podcast studio and into real life.
Along the way, we are also going to talk about dating.
Yep.
Spousing.
True.
Parenting.
Career-ing.
Yeah.
And why we love film.
And Louisa is the greatest movie of all time.
Shouldn't need to be said.
No.
We said it.
It's just a true thing.
So please subscribe to Here to Make Friends on Spotify, Apple
Podcasts, Pocket Casts, or wherever you get your podcasts.
And watch video episodes on YouTube.
New episodes every Friday.
Hi, I'm Rachel Bilson.
And I'm Olivia Allen.
And we host the podcast.
Broad Ideas.
Yes, that's now on HeadGum.
On our show, we chat with people like Brittany Snow,
Lucy Hale, Kristen Bell, Margaret Cho,
Jake Johnson, and so much more.
And we talk about all the things you would talk about with your best friend.
Like your periods.
And mental illness.
And the food you ate for lunch.
Most importantly.
Listen to Broad Ideas on Spotify, Apple Podcasts, YouTube, or wherever you listen to your podcasts.