Bulwark Takes - Will Sam Altman and His AI Kill Us All?
Episode Date: May 24, 2025. Tim Miller talks with Karen Hao, author of 'Empire of AI', about the unchecked rise of Sam Altman, the hidden costs of OpenAI's rapid expansion, and the unsettling consequences of a future increasingly shaped by powerful, unregulated artificial intelligence. Read Karen Hao's book 'Empire of AI'.
Transcript
Rural communities are being squeezed from every side.
From rising health care costs to crumbling hospitals, from attacks on public schools
to the fight for paid family and medical leave, farmers and small businesses are reeling from
the trade war.
And now, Project 2025 is back with a plan to finish what Elon Musk started. Trump and the Republicans won rural votes,
then turned their backs on us.
Join the One Country Project
for the Rural Progress Summit, July 8th through the 10th.
This free virtual event brings together leaders
like Senator Heidi Heitkamp, Secretary Pete Buttigieg,
Governor Andy Beshear, and others
for real talk and real solutions.
Together we'll tackle the most urgent issues
facing rural America.
Register today or learn more at ruralprogress.com.
Hey y'all, it's Tim Miller and I'm pumped to be here
with Karen Hao.
She's got a new book out called Empire of AI, about Sam Altman and OpenAI, and there's just so much news on this front. And it gave me nightmares last night to read the half of it that I've read so far. And so I'm sure this conversation is going to make me feel better. Calm me, I don't know. Karen, how you doing?
Great. How are you doing, Tim?
Maybe not. I'm doing all right. You know, I started it, I don't know, kind of during the second half of the
basketball game last night, so when would that have been, like 10 o'clock?
And by midnight, I was like, oh my God, my concerns about AI apocalypse
increased about 30% the more I learned about Sam Altman.
So, that's not good.
We never overlapped, despite both being gay and in the Bay Area at the same time.
I was never invited to his dinner salons.
So your insight is the closest I've gotten to him.
But I want to start here.
So for folks who are watching this, who have, I think it's fair to say, varying degrees of interest in Sam and in AI.
Let's just start with why he is such an important figure and why OpenAI is so important when it comes to this brave new world that we are currently stepping into with regards to artificial intelligence.
Oh, I think he's such an important figure because OpenAI is a manifestation of him, and OpenAI led the entire AI industry into approaching a particular type of AI development that is now kind of eating the world. And Altman is also very much a product of Silicon Valley, so I don't want to give him too much credit. But certainly the way that OpenAI introduced the world to AI, through the ChatGPT moment, really pegged all of AI development, most of AI development today, to a particular conception of AI that is really built on this idea of scale at all costs.
And, you know, it's not a coincidence that Silicon Valley, which has been trying to scale at all costs for a while, would end up developing AI in the same exact way. But Altman is the conduit of that entire enterprise, where he took all the ideas that he grew up with as he was rising as an entrepreneur and then investor in Silicon Valley and channeled them into this specific organization that became the firing shot, the opening shot, of the global AI race that we're in now.
Yeah. So I want to get into all of this, but you said something interesting there that also piqued my interest when I was reading the book, which is you have this notion that the type of AI we've gotten to, which is what we're trying to maximize with as much compute as possible, making these apps, you know, as all-knowing as possible, et cetera, was the result of a series of choices, and it didn't have to go this way. And you kind of alluded to that in that answer. Like, what do you mean by that? What are some other routes that people who were early in the, you know, AI research world, because you've been covering this for a long time, proposed that this could have gone?
Yeah, so the AI research field has been around for a really long time. It was founded in the 1950s at Dartmouth College. And a side note to this that I think
is important to understand just generally about why there are so many different types of AI
technologies is
the term artificial intelligence was coined as a marketing phrase.
It was coined by this assistant professor at Dartmouth called John McCarthy.
And decades later, he said, I invented the term artificial intelligence because I wanted
more money for a summer study.
So essentially, he picked the name to rebrand research that he was actually already doing
under a different name.
And one of the reasons why AI has just become so confusing, but is also, as you mentioned, very much driven by human choice, I argue, is because this conception that these scientists are trying to recreate human intelligence is inherently flawed, because there's no scientific consensus about what human intelligence is. And so throughout
all the decades of AI development, there have been lots of different debates about what AI should
look like, what it should do, and who it should serve, ultimately actually rooted in the fact
that different scientists have different answers for what human intelligence is. And so when I started covering AI in 2018, I mean,
there was just such interesting research happening across the board. Like there were people that were
exploring, how do we build AI systems without any data? Like, how do we build extremely powerful AI models that can run on device like on a smartphone?
How do we try and capture knowledge, what researchers often call common sense knowledge about the world, without just trying to extract it from human experience, but instead hiring experts to construct expert databases that contain this knowledge? So there were so many different variations, and all of that kind of died on the vine when OpenAI started working on what ultimately became ChatGPT.
In the AI world, the ChatGPT moment was the GPT-3 moment,
when for the first time they unveiled this model that was trained on 10,000 chips.
And up until then, the largest models
were trained on maybe a couple hundred chips.
And those were already gargantuan models.
And when they did that and said, no, we
are going to go for colossal, we are
going to pump extraordinary amounts of data and
extraordinary amounts of computational resource into this. That's when all of
the other companies were like, oh, we're gonna play this game too.
The other options that you just described, the other routes, using experts, paying people, that's all hard. Stealing, I mean, not that what OpenAI is doing isn't hard, but, you know, there's something to be said for stealing all the existing data out there rather than paying for expertise.
Absolutely.
I mean, they took the path that was most easily accessible to them. Silicon Valley companies have already been sitting on massive piles of data because they've been accumulating it for a really, really long time. So this was already their competitive advantage in how they could develop AI. Like you said, why hire the experts? Why pay people when you've already got these troves that you can just tap into? And really the only bottleneck at the time when OpenAI started taking this approach was capital. They needed a boatload of money to do this, gobs and gobs. And it just so happens that OpenAI has one of the greatest fundraisers of our time as its CEO. And so, yeah, it very much was explicit, intentional choices that led them to take this path.
So this takes us to Sam, who is head of this. And for people, you know, just, I guess the
just really quick short backstory is Sam and Elon actually were kind of involved in the
origins of this and it was going to be a nonprofit. And we'll kind of get into that a little bit.
But just Sam himself. Being an outsider before I started reading the book, I kind of assumed that he was a really skilled engineer.
And he comes off a little bit non-social to me, so I find it weird that he's such a good fundraiser.
I watch his interviews and I'm like, I don't know, does this person make eye contact?
But I assumed that he was a really good engineer, or that he had founded some really successful tech thing that I hadn't heard of because it's a niche tech thing.
And it's like, none of that.
He founded a Foursquare competitor that failed. I liked Foursquare, I was a Foursquare user. Something called Loopt, that didn't do any good.
And then he went into VC.
So why is he the point person for, like, the future of humanity?
I really, I still can't understand.
Yeah, so he was at one point technical. He did study computer science, but then he dropped out of Stanford before he finished his computer science degree to found Loopt. But it really comes back to, you know, the core skill set that he has that everyone says
is what makes him unique is he's really, really, really good at telling stories about the future. And he also has a loose relationship with the truth.
So when he's in a room with someone, the things that come out of his mouth to paint
this vision of the future are more correlated with what he thinks that person needs to hear to then want a piece
of that future than what he believes or the ground truth of the matter. That's what makes
him a really good fundraiser. And that's also what makes him really, really good at rallying
a lot of top talent towards a particular goal. And so those were the two main assets that he brought to OpenAI, was the capital
and the top talent. He was able to recruit first Elon Musk to the venture, then he was able to
recruit Ilya Sutskever, who became the chief scientist, who was at the time one of the most
famous AI researchers in the world because of a contribution he made as a grad student
that essentially set off the deep learning revolution and started turning AI into a
commercially viable technology. And he recruited other people all along the way, as I documented in the book. He recruited people from Google to then help with developing ChatGPT, and he recruited Microsoft to be the investor and main backer and to develop the supercomputers that OpenAI wouldn't be able to develop on its own.
So I think his skill is consolidating resources and channeling them in a specific direction.
But there's also a lot of frailty and weakness in the ventures that he creates
because he's really good at telling personalized stories to people when he's sitting with them
one on one. But once you have a company and you need everyone to be on the same page,
moving in the same direction, that's where Altman really falls short because he will tell different people,
different things based on what he thinks will motivate them.
And the shared picture, um, that would be the foundation of the
company starts to fall apart.
Right.
So the prime example of what you're talking about here, this thing where Altman is good at mirroring what he thinks other people want to hear, is in his recruitment of Elon Musk initially.
And so Elon, for all of my concerns about him and a lot of other ventures around AI,
if you go back, he kind of still does this now, but less, which I want to get into.
But if you go back a half decade, Elon was kind of apocalyptic about AI, like very concerned about worst-case scenarios. His view was, we're concerned about worst-case scenarios, we need this to be in good hands, you know, we need to be considerate about this. That's why OpenAI, when it started, was a nonprofit, you know, that we need this to be in the public interest and the public good. It's like an Oppenheimer-type situation. And so, as you tell it, Altman kind of reflects that same opinion back
to Elon as part of the recruitment process. But now, as things have gone along, it's no longer a nonprofit. Altman, when I listen to interviews of him, sounds downright optimistic. Like, what's the word? Almost like there's a paradise waiting for us with AI.
So what happened there, do you think?
Was Altman lying to Elon to get him on board?
Has he changed his view?
Has something happened that made him change his view?
Yeah, I think there's two ways to answer this question.
One is just how Altman operates in the world, to which through the reporting of my book,
what I realized is he will bring in people and resources and create certain structures
based on what he needs, the objective that he needs to achieve at that time. And then
once those people, those resources lose their value because the objective changes, he then
sheds them and shifts. So the nonprofit, which
originally was a mutual idea that Musk and Altman sort of had, was something that was particularly
helpful for recruiting Musk to the venture and also for recruiting other talent. Because Altman
is very strategic. He plays the long game.
He likely understood at the time that he didn't have the capital
to compete with Google, which was the main monopoly on AI
talent.
So he couldn't wave around millions and millions of dollars
in salary to poach Google researchers.
So the thing that he could compete on
was a sense of mission and purpose.
And by creating a nonprofit,
that was a really great way to highlight
why OpenAI was different from Google
and had that mission and purpose.
And that was what then hooked Musk in
and then hooked Ilya Sutskever in
and many other researchers who moved from Google to OpenAI.
Once he had that talent and once Musk had already lent his brand name,
it became less necessary for Musk to be there.
And it also became less necessary for the nonprofit to stay a nonprofit.
The next objective at that point was how do you win?
How do you build a lab that is going to be number one and beat Google?
At that point, now you need capital.
And in order to raise capital, you can't stay a nonprofit.
You need to create some kind of for-profit fundraising vehicle.
And that's when he nests the for-profit and the nonprofit.
And Musk doesn't want to be part of it anymore.
And it's fine because he's already taken the value, the utility, from Musk being part of the project and doesn't need it anymore.
Musk is the prime example of this in the story.
The other thing during this period that has changed is there's an email from Altman to Musk that you have in the book that says, obviously we'd comply with and aggressively support all regulation.
This also speaks to their concerns about the downside risks of AI, right?
It should be a nonprofit.
It also should be regulated.
Like, now the whole tech bro, Thiel, Andreessen kind of orbit that was pushing Trump, not regulating AI is like one of their core precepts.
Yeah.
Yeah.
So what changed with that?
So that sort of gets to the second way that I could answer your question, which is that the rhetoric AI companies often engage in takes one of two forms.
Either it is this technology is really dangerous,
or it is this technology is going to bring us to utopia.
But ultimately, they are two sides of the same coin because the conclusion
from both versions of the rhetoric is AI is extremely powerful,
and therefore we, the people who are saying this,
should be the ones to control it.
So, you know, when Musk was really deeply concerned about existential risk, I don't think it was rhetoric in the sense that he did believe this was a problem, but he also leaned into it because ultimately there's this deep-seated desire to want control over the technology.
And Altman tapped into that, both the fear and the desire, when he proposed to Musk,
hey, I know you don't like the way that AI development is going right now.
And it seems to me that the best way to counteract that is by just building our own organization.
You know, like he taps into the fear, and then he taps into that desire of
wouldn't you want to run an organization that then gives you more control over this technology?
As the public discourse has sort of changed and shifted based on
what is on the minds of people and policymakers,
those different narratives get wheeled out in turn.
When people are very fearful, they wheel out the, yes, and you should be fearful and this is why
we're being so cautious and careful and this is why we don't want this technology to go into the
hands of China. When people are feeling more optimistic, they wheel out that and you cannot
imagine the prosperity that we will soon see.
We don't even have the words to articulate the profound positive transformation that's
going to happen.
And that's why you can't regulate us.
We need to put pedal to the metal to accelerate it.
It always goes back to the same goal, which is that they just need to continue moving
forward with no obstacles in their way.
Like one tangible example about this is the climate side of it.
Um, you had a funny exchange with a couple of the guys about how the
same paradox is working, right?
Where they're like, AI is going to fix the climate problem eventually, right?
Like the super intelligence, the AGI will figure something out that'll fix it.
Yeah.
Uh, but in the meantime, like we need to extract as much resources as possible that's going to
exacerbate the climate change problem in order to get there.
And there's this race between how much we exacerbate it versus when the brilliant supercomputers
are going to fix it.
What is your sense for, you talked to a lot of folks who are not spinning you, like the real level of concern there, like as far as the climate element to this,
and just, I guess, that whole kind of conversation.
There's no concern at the top at all. You know, multiple people told me-
About their impact on the climate?
Of their impact on the climate, yeah. Yeah.
Multiple people told me, OpenAI sources told me,
this has never once been mentioned in an all hands company meeting.
There are actually people who mentioned to me that on the policy side of OpenAI, this came up as: eventually the environmental shoe is going to drop, people are going to realize that these systems have environmental consequences, and we're going to have to deal with it as a PR crisis.
But it was never mentioned as: maybe we should actually think about how we are developing these things so as not to accelerate climate change and other kinds of environmental issues, like people's access to fresh water resources. That never once happened.
To the point where-
Do you feel an ethical issue with the climate side of it? It's hard for me to wrap my head around how real the climate impact is. Like, should I not be asking ChatGPT to make me funny pictures because I'm killing a tree?
It is becoming a huge, huge issue. There are projections
that say that at the current pace of data center development to support these AI ambitions, in five
years, at the end of the decade, we will need to slap the equivalent of two to six Californias of energy demand onto the global grid. And all of that energy demand,
the majority of it will be serviced by fossil fuels
because these data centers have to run 24 seven.
They cannot run on renewable energy.
The sun doesn't shine and the wind doesn't blow all the time.
And Altman even said in his Senate testimony most recently that in the short term, this
will most likely be serviced by natural gas. And then of course he says, but you know,
in the long term, we'll figure out fusion. But the problem here is the AI industry always
does the same dance of there are real things happening now with concrete evidence
that you can point to of harms. And then they wave a wand of but in the future they will be fixed by
something that we don't actually have evidence will ever happen. So they're trying to use a figment of imagination to make okay a current reality.
So yeah, it's a huge problem and it's not just climate.
We are also facing a huge freshwater crisis globally now.
There are many communities around the world
that now have trouble accessing drinking water. Like,
when I was reporting this book, I traveled through Latin America. In the book, I read about Chile and
Uruguay, and I also went to Colombia. All of those countries are currently facing a megadrought,
a historic megadrought. So is the US: the Southwestern US, which has become a huge hub of these data centers, is facing the worst drought in a thousand years, according to research published in Nature. In Colombia, I was visiting and tried to go to the National Museum in Bogotá, and the National Museum was literally shut down because they could not supply water to the bathrooms. And in Uruguay, the government is mixing toxic chemical water
into the drinking water supply because they just need something to feed the tap so that
when people open their taps, something comes out. And there was this-
Not great.
Yeah, no. And the last thing I'll say is there's this amazing Bloomberg story that just came out that
looked at all of the water consumption that's happening for these data centers all around the
world. And what they concluded is it's not just the total volume of fresh water that we need to
be concerned about, it's the distribution of these data centers. I think they said like two-thirds of these data centers are now in water-scarce areas.
So it is a huge environmental and climate problem.
We could do so much more on this, and there's a lot there in the book.
People should go check it out: Empire of AI. Just two really quick things that caught my attention.
The sister. Sam Altman, who is going to be the richest man in the world, probably maybe the first trillionaire, has a sister who is in insecure housing, apparently.
Yeah. So Altman's the oldest of four siblings: he has two brothers and a sister. The two brothers followed him in his career. So when he struck out to Silicon Valley and started doing really well for himself, they joined in; he became an investor, they became investors, and they've all become very, very rich from this career path. The sister was the black sheep of the family. She was the artsy
fartsy sibling. She really didn't want to go into tech. She wanted to be a writer. She wanted to be
a comedian. She wanted to do other types of art. And she
had always intended, when I interviewed her, what she told me was, you know, she had always
intended to support herself. She had never expected, you know, her rich brothers to support
her in any way. The problem is she then started having a series of really intense health problems, physical health, and she
struggled with mental health problems her entire life as well. And as she started having
more and more exacerbated health problems, which worsened after the death of their father,
who she was the closest to in the family, it made it extremely difficult for her to continue working.
And so she alleges, and she provided me a lot of documentation showing email exchanges and text messages, that when she appealed to the family for support, they withheld money from her that should have been hers. She learned that her dad had left her a pool of money, and she tried to access it to support herself while unemployed and to heal her physical and mental health.
And her mother and brothers stepped in. They frame it completely differently. They dispute the
way that she characterized it. They said that
she already had some money and they were really worried about exacerbating her mental health
challenges by giving her access to more money. But essentially the effect was that she ended
up in a place where she had no money and she turned to sex work to start paying her bills. And she then ended up in a period of years where she just faced a lot
of housing insecurity, food insecurity, economic insecurity. And ultimately, one of the reasons why
I highlight her experience in the book is I think Annie is very much more representative of the way that the majority of the world lives than Sam is.
And her life is an interesting case study of the impact that AI has on people who live like Annie,
which is most people. And that when these companies talk about AI, they talk about it solving poverty. Like Altman has said,
AGI should solve poverty. And like AGI will cure cancer and bring accessible, affordable health care to everyone. And AGI is going to make all your economic problems go away. And the problem
is she was facing all these intersecting challenges,
health challenges, economic challenges, mental health challenges. And she wasn't getting any
benefit out of AI. And in fact, actually she was having trouble accessing economic opportunity
online. She was trying to monetize a podcast. She was trying to monetize a YouTube channel. And she consistently felt
like she was being shadow banned. And when I talked with researchers and experts about
this particular issue, they mentioned, you know, because she was involved in sex work, that that's how the internet works. Platforms use AI systems to track sex workers, even on platforms that are completely unrelated to their sex work, to limit their distribution.
In some ways you're making this a policy thing, but I mean, there's a human element to this.
I mean, he's unbelievably rich.
I understand that these things are complicated, but like, I think it reflects a little bit
about how much he's going to think about the impacts, what he's doing on people that are
going through issues, how he thinks about this when it's his sister going through the issues. I'm sorry,
I'm running out of time, but just really quick: he partnered up with the guy that designed the iPhone, Jony Ive, and they have new products that they're teasing. Looks like it's going to be a smart brooch, so we'll call it a super smart brooch. Can you give us just one minute on what you think they're planning with the brooch?
Altman recently said in an interview, like his strategy is to try and do as many things
as possible to create as many products, as many surfaces through which people can interact
with their technologies.
So hardware is a very logical step under this strategy.
They've thus far been using the hardware
that people already have.
They're creating apps that install on your phone
and apps that are on your computer,
but they wanna move into wearables, maybe smart speakers.
I don't know, they wanna add more hardware to your life
for you to essentially create more surface area
for them to collect data on you.
Like that is ultimately what they want to do.
Yeah, they want to listen to you all day so that you can then ask the smart brooch, like, hey, I had a meeting earlier, remind me what they said, and it will have been recording everything.
Yeah, exactly.
And you know, the way that Altman frames it is he has long really loved the movie Her
and people that I spoke to within the company said that the
reason he loves it or the reason he says he loves it is because it's this seamless AI experience that
just exists in your life. It's constantly there gathering, you know, like intimately understanding
you. But the cynical take is that it's just more ways to collect more data on you because ultimately
that is one of the key ingredients to training their larger and larger models.
Boy, I got to tell you, I'm like, some of the viewers won't believe this because there's
so much bad stuff happening.
I'm always negative.
But like I'm an optimist by nature.
Actually I was like a very excited tech person.
Like I was going to South by Southwest in the mid 2000s
and like so excited about all these new things
that were coming.
And I have like, my will towards techno optimism
has been stripped from me page by page in your book
and also in some other things that are happening
out there in the world.
But anyway, people should read it anyway. It is called Empire of AI. It's Karen Hao. Thank you so much for
spending time. Good luck in the book tour and let's stay in touch as I'm sure this stuff is
going to be in the news. Thank you so much for having me, Tim.