On with Kara Swisher - Sam Altman, OpenAI and the Future of Artificial (General) Intelligence
Episode Date: May 22, 2025
Few technological advances have made the kind of splash, and had the potential long-term impact, that ChatGPT did in November 2022. It made a nonprofit called OpenAI and its CEO, Sam Altman, household names around the world. Today, ChatGPT is still the world’s most popular AI chatbot; OpenAI recently closed a $40 billion funding deal, the largest private tech deal on record. But who is Sam Altman? And was it inevitable that OpenAI would become such a huge player in the AI space? Kara speaks to two fellow tech reporters who have tackled these questions in their latest books: Keach Hagey is a reporter at The Wall Street Journal. Her book is called “The Optimist: Sam Altman, OpenAI and the Race to Reinvent the Future.” Karen Hao writes for publications including The Atlantic and leads the Pulitzer Center’s AI Spotlight Series. Her book is called “Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI.” They speak to Kara about Altman’s background, his short firing/rehiring in 2023 known as “The Blip,” how Altman used OpenAI’s nonprofit status to recruit AI researchers and get Elon Musk on board, and whether OpenAI’s mission is still to reach AGI, artificial general intelligence. Questions? Comments? Email us at on@voxmedia.com or find us on Instagram, TikTok, and Bluesky @onwithkaraswisher. Learn more about your ad choices. Visit podcastchoices.com/adchoices
Transcript
I'm coming live.
I'm a rave.
In a bit of a rave situation.
Hi everyone from New York Magazine and the Vox Media Podcast Network.
This is On with Kara Swisher and I'm Kara Swisher.
I've been reporting on tech for decades and few advances have made the kind of splash
and had the potential long-term impact that ChatGPT did back in November 2022. It made
a nonprofit called OpenAI and its CEO Sam Altman known around the world. I met Sam actually
when he was a teenager when he had a company called Loopt, which didn't last.
And I've watched him over the years grow as he went from company to company, including
at OpenAI, where he and Elon told me about the need to have an organization like this
to protect us against the tech giants themselves, Google and others, when AI came of age.
And it turned out these might have been the monsters we were scared of meeting in the
first place. I do like Sam. He's very charming, and I can see why people think of him as manipulative,
but he really is an interesting character, more like Steve Jobs than anyone else I've ever
interviewed. And I wrote and spoke a lot about his sudden ouster and just as sudden reinstatement as
the CEO of OpenAI in the fall of 2023. And I should note that Vox Media,
like a lot of media companies,
has a licensing deal with OpenAI
that gives OpenAI access to its IP.
My guests today are two tech journalists
who have each come out with their own
very well-reported books about Sam and OpenAI,
including what was happening behind the scenes
during that crucial firing and rehiring,
the wide impact of generative AI,
and the potential for artificial general intelligence in the future.
Keach Hagey is a reporter at the Wall Street Journal. Her book is called The Optimist:
Sam Altman, OpenAI, and the Race to Reinvent the Future. Karen Hao writes for publications
including The Atlantic, and leads the Pulitzer Center's AI Spotlight Series. Her book is called Empire of AI:
Dreams and Nightmares in Sam Altman's OpenAI.
Whether you're a fan of AI or fearful of it
or somewhere in between, this is an important episode.
So stay with us. Hey, it's Scott Galloway.
In today's marketing landscape, if you're not evolving, you're getting left behind.
In some ways, it's easier than ever to reach your customers, but cutting through the noise
has never been harder.
So we're going to talk about it on a special PropG Office Hour series.
We'll be answering questions from C-suite execs and business leaders about how to market
efficiently and effectively in today's chaotic world.
So tune into PropG Office Hour special series brought to you by Adobe Express.
You can find it on the PropG feed wherever you get your podcasts.
Karen and Keach, welcome.
Thanks for coming on on.
Thank you for having us, Kara.
Thanks for having me.
So you both have hefty books out this week about Sam Altman and OpenAI.
Congratulations.
There's some overlap, of course, but the two books actually complement each other
very well, I thought.
Let me read the titles.
Keach, your book is The Optimist: Sam Altman, OpenAI and the Race to Reinvent the
Future. Karen, yours is Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI. Each of you,
I'd like you to talk about your titles and reflect on the perspective of what you are trying to do.
Keach, are you an optimist, a boomer, as they say, and Karen, are you a doomer?
And I guess there's the zoomers who are in the middle, but let's start with you, Keach.
I am an optimist as a general person.
I don't know if I'm totally optimistic about AI, but I've maybe fallen more on that side
than the other side.
But the title really is about how Sam presents himself to the world.
So that's his brand.
That is what he goes into every room with, you know, with investors, acting the part. So that is sort of what he's projecting.
And what about you, Karen?
So I wouldn't say that I'm a doomer,
in that I think boomers and doomers are actually
ultimately two sides of the same coin.
And I exist sort of in a third space,
which is the AI accountability space,
that really recognizes that there is an extraordinary amount
of power that's being concentrated within
companies that are developing AI, and we need to hold those
companies accountable. And so Empire of AI, that title is
really a nod to what I see as a global system of power that
these companies are entrenching, where we really need to
understand them as new forms of empire in terms of really
grappling with the sheer scope and scale of
what they are now doing and what they say they will continue to do over time.
So the launch of ChatGPT in November of 2022 is what most people consider to be the starting
gun for the AI race we're in now, very similar to Netscape releasing its browser that really
got people using the internet more than anything.
It also made Sam Altman a household name.
And neither of your books starts with that launch.
Instead, you write in your prologues about the few days when Altman was fired from
his role as CEO.
I wrote quite a bit about that.
And then reinstated after pushback from his executive team, the OpenAI staff and
a lot of lobbying by other tech leaders.
It became known inside OpenAI as the blip.
Talk about why you thought it was significant.
Karen, you start with that because it was a moment.
A lot of these tech companies have their moments, but this was it for them.
For me, it really exemplified this specific question that I try to explore throughout
the book, which is how do we govern AI?
Because it is one of the most consequential technologies of our century. And this moment in which a
small handful of people get to decide the future of a company that has had
such profound implications on the trajectory of AI development really
highlights the kind of controlling hands that are playing this game of shaping the most
consequential technology. And so one of the things that I advocate for in the book is
we really need to be shifting towards more democratic AI governance. But starting with
that anecdote, I really wanted to highlight where we are now, which is completely undemocratic
and everyone is sort of living in the whims and the thrash of people at the top that are deciding things behind closed doors.
Keach?
Yeah, so I started with the blip. Certainly it's the most dramatic, and that in some ways made Sam Altman truly a household name.
I mean, he was after ChatGPT, but I think the concentric circles of understanding really reached their maximum point that wild, wild day when he was fired. And I think it tells you something about Sam's mind and Sam's strange love
of eccentric structures that this happened to him.
He had been spinning up these strange structures throughout his whole career.
He did it at Y Combinator.
You see a little bit of it at Loopt where he tries to invent new forms of
things that haven't
existed before and it really turned out to be his downfall.
Yeah, I mean, Loopt, for people who don't know, is where I met him actually. It was
a company that was kind of a failure.
He was very young at the time.
But when you say unusual structures, a lot of it had to do with the board losing members,
right?
I mean, it was a relatively balanced board, and then people dropped off who were sort
of the more central figures versus the left and the right, if you want to do it in a political
term.
Absolutely.
It was a board-powered struggle at some basic level, as well as concerns over safety, as
I think both of us have documented a combination of these things.
And Sam has always sort of figured, like, I got this, right?
I'm the ultimate networker, everyone owes me favors.
There's no situation that I cannot talk my way out of.
And it was extraordinary to watch one that he couldn't, at least for five days.
Except he did, right?
And so to talk about how it changed the trajectory of the company,
how do you think it impacted OpenAI and Sam going forward?
Keach, why don't you start and then Karen.
Massively. We are still feeling the reverberations
from the blip today.
Explain that.
So I think, let's start with Microsoft, right?
It kind of looks like Microsoft rode in
and was supporting OpenAI during the blip.
And that's true, but they also lost confidence
in the solidity of OpenAI in the midst of that
and began to make backup plans.
For sure.
Right?
So we saw they hired Mustafa Suleyman, they started developing their own models.
And even now, as they are negotiating with OpenAI for a new structure between OpenAI
and Microsoft, the blip hangs over all of that.
The concern that Microsoft has that OpenAI might not be there in a few years.
They don't want to spin up a bunch of data centers and cloud that they then can't monetize down the road.
So it increased the risk profile of OpenAI greatly
and we're still feeling it today.
Karen?
It definitely, I would agree with Keach
that it definitely showed kind of weakness
within OpenAI's foundations.
But at the same time,
with the way that Sam has sort of always orchestrated himself,
he used this crisis moment as a way
to make himself even more powerful.
And so he has been able to, even as he's losing
the relationship with Microsoft, establish relationships
with President Trump and start getting even bigger deals,
like the $500 billion Stargate project.
And so it sort of accelerated the trajectory
that OpenAI was already on.
And Altman sort of effectively used this moment
to kind of weed out some of the primary dissenters
and the primary stronghold of resistance
against the general push for the company
to go faster and faster and develop these larger and larger
models and now we're kind of seeing an even more extraordinary pace in that.
Yeah, he certainly removed the brakes, the people who were allegedly the brakes, right?
And I have to say, I've never met a tech company that didn't have one of these hair on fire
moments, whether it was Google or Netscape or even Microsoft.
There was a lot of Sturm und Drang at the beginning of that company.
There's not a tech company I know that doesn't have something like this.
It just was so valuable and so famous at this point.
Obviously, Sam plays a big role in both your books and he
is probably the central figure of the AI movement right now.
Keach, your book goes deep into Sam's biography,
his upbringing in St. Louis, college at Stanford,
his time at Y Combinator.
What did you learn about Sam and speaking about family and friends and what surprised
you and how did he become this power broker at such a young age?
I saw some of it.
He's a charming person, obviously, and he's a good networker.
Keach, why don't you start with that?
Well, one of the things I was surprised to learn was that his childhood wasn't all that
happy.
I certainly went into the reporting figuring, oh, you know, upper middle class family, good
schools, everything's happy.
And as I did more reporting, I realized that his parents had had strains in their marriage
pretty much his whole life.
They were separated by the time his father passed away in 2018.
So I think that there's sort of like a drumbeat of anxiety
beneath all that optimism
that is an important thing to understand about him.
And then, you know, he was a gay teenager in the Midwest,
and that was a difficult thing.
And I think he, in his early years, stepped up and became kind of a, a voice
for, uh, the gay people at his school.
And that taught him a little bit about leadership and about risk taking.
So you see, um, I think that one of the things Sam really wants to do is be a
leader even beyond, you know, companies, right?
Like he has political ambitions and he sees himself
as sort of a great figure of history.
And I think both of those things were forged
in St. Louis.
Is there anything that you were like, huh, interesting?
There's always a moment, right?
Of whoever it happens to be.
Meeting Sergey Brin's parents was very insightful
for me, for example.
Yeah, I think getting to understand his mother
and father's relationship, I think, was the
thing that really helped.
You know, his mother is very ambitious and his father is kind of a do-gooder.
And there are these quotes in the book where his mother really didn't respect what his
father was doing sometimes.
And I think that that tension kind of lies at the heart of him a little bit.
So Karen, you interviewed around 260 people for your book, but Sam and OpenAI refused
to talk to you.
However, you did talk to his sister Annie, who's launched a lot of accusations against
Sam, including that he allegedly sexually assaulted her when she was a child.
He and his family have denied that.
You write that the truth of the allegations is unknowable, but, quote, her story became a
microcosm to me of the many themes that define the broader OpenAI story and how much OpenAI is a reflection and extension of the man
who runs it.
I'd like you to explain that because I did run down some of this stuff and it seems like
the sister has some struggles, I would say.
Yeah, totally.
I think in terms of why I think Annie sort of encapsulates so many of the themes in the book, one of the defining things is regardless of whether
you side with Sam or side with Annie
in terms of the clear struggle
that these siblings have with one another,
it really highlights that any individual,
especially ones that grow so powerful,
are going to have some deep personal and professional baggage that they carry with them.
And one of the things that I try to highlight in the book is we need better governance structures
because we should not be resting so much power in the hands of individuals.
This particular conflict that Sam had with his sister clearly weighed on him because it was the one thing that led
OpenAI's communications team to have a face-to-face meeting with me was when they caught wind
of the fact that I was speaking with Annie.
And so I think it really shows the fragility of resting a hugely powerful system and all
of these hugely consequential decisions of developing technologies that will shape humanity's future in the hands of people that have these whims and pressures and
capricious relationships and tensions.
It could be just a very troubled sibling. Like a lot of interestingly a lot of people who are like
this do have trouble. You know, I just did an interview with Barry Diller. His brother was a
drug addict. I think it's the most troubling part of his life from what I can get, and it's a shame. Keach, you write in your prologue about one moment in the interview with Sam where
his altruistic mask slipped to reveal the fierce competitor beneath. Karen, you have a version of
that when writing about the 2018 rift between Sam and Elon. You say the rift was the first major
sign that OpenAI was not, in fact, an altruistic project but rather one of ego.
I've always thought it was ego.
I don't know.
The first line of my book was it was capitalism after all.
So both of you, do you think these are two sides of the same coin?
Did you think that Sam is duplicitous, as critics have accused him of being, telling people what they
want to hear and then bad-mouthing them behind their backs?
I never believed the altruism to start with of anybody, not just Sam.
So talk a little bit about this altruism versus ego.
Keach first.
So yeah, that moment was when he was bragging about having beaten Google to the punch, right?
Which is very satisfying.
But go ahead, go ahead.
Right, sure.
I mean, Sam is a product of this Silicon Valley Y Combinator world where being first and having things go up and to the right is everything.
So, and he's admitted that too, right? He can't like take away his desire to make that happen.
I think you can see him doing a dance for Elon Musk in those early years.
There was a moment where everyone was reading Nick Bostrom's book, Superintelligence.
Yes, I know. Trust me.
In 2014.
And that was just kind of like what the cool kids were talking about a little bit.
Such a blowhard.
And it was kind of fascinating that the way that you showed that you were serious about
AGI even being a thing at all was saying that you were scared of it, right?
That was kind of like a code and a way to recruit researchers,
show that you took it seriously,
because that was still sort of an unusual thing
to even say out loud that you believed in.
So in one way, when he was kind of seducing Elon,
it was at that moment, right?
Who's most scared of AI?
Let's have a contest between us
about who can think of the most apocalyptic situation,
and we need to create something to counterbalance Google.
Which works on Elon because he's steeped in science fiction.
He's a science fiction buff, like an extraordinarily deeply read one.
Yeah, this entire story is deeply shaped by science fiction.
That was one of the things that was so fascinating to learn about in the research.
So I think that that's the way to sort of understand
the initial
nonprofit altruism thing a little bit, right? It was a vestige of a very specific cultural
moment. I don't know if it was a plot, you know, that they were then going to like toss
it off as soon as they had a chance to. But you know, Karen's criticism is absolutely
right that they did then toss it off and
become a regular company for all intents and purposes.
Right. Karen?
Yeah, I would completely agree with Keach that it was sort of this moment in time where
Sam I think identified that he could use this as a way to recruit people.
Sam is, you know, he's a once in a generation talent when it comes to
storytelling. He's really good at telling the right story. And he also has a very loose
relationship with the truth. So I think...
Who does that remind you of? Steve Jobs?
Yeah. I mean, you know, it reminds you of a lot of people in Silicon Valley, because that is
what Silicon Valley...
Well, Steve Jobs was the original...
Values. But absolutely. I think Sam worshipped Jobs and I think he'd really tried to emulate
him a lot of ways.
And so when he comes into a room with someone and meets with someone, I really think the
best way to understand what comes out of Sam's mouth is that it's more tightly correlated
with what he believes the other person needs to hear rather than necessarily something
that he needs to tell
from himself.
And I think that is ultimately what OpenAI became a bit of a manifestation of, was
he knew he needed a really gripping story.
And he knew, to go back to like beating Google to the punch, he knew he didn't have enough
capital at that time
to compete on salaries.
So he needed that special element in addition to salaries.
I opened the book with a quote from Sam
in his early days of blogging where he says,
the best way to mobilize people is to build a religion
and ultimately people realize the best way to do that is to build a company.
And I really do think that he understood that very well from the very beginning and he tried to create a sort of quasi-religious movement around a mission for OpenAI. We'll be back in a minute.
In every company, there's a whole system of decision makers, challenges, and strategies
shaping the future of business at every level.
That's why we're running a special three-part Decoder Thursday series,
looking at how some of the biggest companies in the world are adapting,
innovating, and rethinking their playbooks.
We're asking enterprise leaders about some of the toughest questions they're
facing today, revealing the tensions, risks, and breakthroughs happening behind
closed doors. Check out Decoder wherever you get your podcasts.
This special series from The Verge is presented by Adobe Express.
I want to talk a little bit about OpenAI's role in the
AI race, because they are dominant.
Um, when Altman started OpenAI with Elon, its goal
was to be more than a startup launching commercial
AI; as we said, it was actually focused on
research, especially researching artificial
general intelligence.
That's been up and down over the years.
Um, and Altman, I think, probably always
wanted to turn it into a for-profit company. Elon probably was the truest believer, if I recall
his attitude at the time. He really did think AI was going to kill us. Others, I don't know,
maybe Sergey Brin did and others. But Elon sued after he flounced out. To start with,
he flounced out. He sold his house and then he was mad he sold it,
essentially. Earlier this month, OpenAI announced it was restructuring again as a public benefit
corporation instead, with a nonprofit parent overseeing the for-profit arm and being a major
stakeholder. It's not really a big difference, because the for-profit and nonprofit are going to
have the same board, and benefit corporations are nonsensical to me. Elon says he's still suing.
Keach and Karen, talk about the weird structure
and why it's important for the OpenAI story.
First you, Keach.
So this weird structure kind of seems like a black hole
that OpenAI might never be able to climb its way out of.
The idea of a nonprofit controlling a for-profit.
They tried pretty hard and they hit a wall,
and they had to retreat in the last couple of weeks.
So this is gonna be challenging
because this is going to make it harder
for them to raise money and they need a whole lot of money.
And it is important to note that when they were doing
this last round of fundraising,
they were telling all the investors,
oh yeah, yeah, it's gonna be cool.
We're gonna, the for-profit is gonna be in charge now.
If that doesn't happen, you can have your money back.
And then now that this has happened, it seems like, okay, the investors are going to give
it anyway, but in the long term, this is going to make fundraising a lot harder for them.
Right.
They raised $40 billion, a lot of it from SoftBank, I think $30 billion from SoftBank,
but it's harder for them in that people don't quite know what to make of this thing.
Well, if you're an investor and you don't have control
of where your investment is going,
you don't have a voice on the board,
that's going to be pretty challenging.
If the nonprofit can come along and veto
whatever the for-profit is doing,
that's essentially what happened when Sam got fired,
it's going to make it a very wobbly investment,
a very high-risk investment.
Karen, why don't you talk a little bit about this?
Yeah, I definitely think that there is a certain level of
compromise that OpenAI had to make in this regard. But I also
think it sort of highlights the strategic mind of Altman in
that he is still able to make certain gains in the direction
that he wants to go within the space that he has. So originally,
the for profit was capped
and investors could only receive
a certain amount of returns.
And now as a PBC, it's not capped.
And that is actually a step in the direction
that OpenAI does wanna go.
And one of the things that I sort of hit upon,
which Keach was also saying earlier,
is that Altman is fond of these strange structures
and has been throughout his career.
Altman plays legal defense,
whereas Musk plays legal offense.
And so he kind of creates these really nested
complicated structures that just make it really hard
for us as journalists to even scrutinize and report.
What does it mean that it went from a capped profit
to a PBC?
And I think that's an important part of the story as well because it is a
tactic that he uses.
Well, I think he wanted it to be a profit company. I think he got a lot of
roadblocks probably from state regulators, from federal regulators.
Elon, the case wasn't going away. Why isn't the case going away for Elon if
this is what he says he wanted? What do you think is happening here?
Yeah.
I mean, you mean, like, the realpolitik? I mean, you know, Elon said, like, this changes
nothing.
Elon's lawyer said, this changes nothing, right?
And they're still going to go ahead and they still object to the PBC.
Oh, please.
He just wants his money.
If you step back a little bit, I mean, it is kind of extraordinary that Elon Musk lent
his name and promised a billion dollars,
gave a very small percentage of that,
but basically his name, his credibility, his money
to get this thing off the ground,
and then got nothing in exchange for that.
We also have to point out that he is a competitor
and that part of what's going on here
is a fight between companies.
And it's not just between XAI and OpenAI.
There's Anthropic and other folks there.
And a lot of these fights that maybe look like they are philosophical are also commercial.
Absolutely.
So, Altman would say he definitely needs the money to get computing power.
Karen, you write in your book that this race isn't inherent to AI, or even generative AI; OpenAI started
the race.
You write, not even in Silicon Valley did other companies and investors move, until after
ChatGPT, to funnel unqualified sums into scaling. That included Google and DeepMind,
OpenAI's original rival.
It was specifically OpenAI, with its billionaire origins, unique ideological bent and Altman's
singular drive, network and fundraising talent, that created the right combination for this vision
to emerge and take over. Which means they started this.
And I've gotten lots of emails and texts from people who are like,
it wasn't supposed to be a race and now it is.
So explain the importance of winning right now and
how the model of cost and resources play into that, Karen.
Yeah, I kind of want to start maybe a little bit earlier in time. So I
started covering AI in 2018, and at that time, pre-OpenAI sort of seizing the imagination
of what AI could be,
there were so many different, diverse ideas about what AI was in the research world.
There was so much fascinating research that was going into,
could we build AI systems without data or less data?
Could we build AI systems that were super compute efficient?
Could we build AI systems that reason,
but without needing to train it on the entire internet?
All of that went away once OpenAI released GPT-3,
and especially once OpenAI released ChatGPT.
Because it appeared, I mean now it's not clear anymore, but it appeared at the time
that this was going to be an incredibly commercially lucrative path to producing
more commercially viable and just really great products.
And so everyone kind of glommed onto this approach, and because of this race, all of
the AI researchers that were working in the field started getting these million dollar
compensation packages from companies and they shifted from independent academia into these
corporate labs and started being financially conflicted in sort of the work that they were doing to
develop this field and this technology. And so the reason
why I say that this kind of compute-intensive scaling
laws paradigm of AI development is not inevitable is
because there were so many other paths being explored
that then got deleted. Essentially, they got forgotten.
Yeah, because they were deciding to spend the money, correct?
Exactly. And now people are stuck, or the industry is sort of stuck in this mind frame of the only
thing that they can do to win this race is to just keep spending more money and just keep acquiring
larger supercomputers than the other guy. And that has sort of created this crazy downward spiral
in terms of a race to the bottom for things
like environmental impacts and social impacts
and ultimately labor impacts.
So, Keach, obviously size matters.
What a surprising thing for a bunch of men.
We've recently seen counter examples to scaling, though,
the success of DeepSeek from China.
We don't have to quibble about the exact number,
but it's clear it cost much less.
But they use fewer GPUs.
They're working within their scarcity.
They've found some efficiencies.
Keach, what impact do you think the launch has had?
Because despite DeepSeek, the push for bigger supercomputer data centers is obvious: already, the day after the inauguration, standing next to President Trump, Altman announced the $500
billion Stargate data center project. Earlier this month,
it announced OpenAI for Countries, to help other countries build AI infrastructure.
Talk about this.
Does it have to be this way?
Did DeepSeek have an effect?
You just saw this UAE deal that looks like they're really competing for something like
that.
So talk a little bit about what it could mean.
Sometimes it does feel like data colonialism, other ways it feels like an opportunity for
some of these countries.
Well, the reason that Sam Altman is like the man for the AI moment is because the version
of AI, this scaling laws version of AI, is something that requires so much money and
Sam is really great at raising money.
So it's like he's a hammer and that's the nail, you know.
And if he had different skills, you know, maybe that wouldn't be so important, but
he is a master fundraiser.
And so the version of AI that he is driving the world toward is one that requires giant
piles of money.
And the DeepSeek moment, yes, there was a freak-out, and the fundraising process became complicated
in the weeks after the DeepSeek announcement suggested that, hey, there might be a much
more efficient way to do this. But it passed, I would say.
Meaning how so?
Well, just that it's been a full speed ahead, pedal to the metal, let's build data centers
all over the world.
It will really, we'll really see when people start sending checks for this fundraising,
right, not just promising the money, but like when the actual money arrives,
about how much confidence investors have in...
Is that something you're worried about?
I think it's something we're keeping an eye on, yeah, definitely.
So another area where we're seeing Sam shift his position is regulation,
because that's part of this whole package. It looks like Trump is
pedal to the metal, like, whatever they want, and obviously that's why they were all standing there at the inauguration, because this is the moment.
When he testified to the Senate Judiciary Committee two years ago, about six months after ChatGPT came out,
he said that regulatory intervention by governments would be critical to mitigate the risks of AI models.
He was back on Capitol Hill earlier this month. I want you to listen to his answer
to a question from Senator Ted Cruz about standards or regulations.
I am nervous about standards being set too early. I'm totally fine, you know, with the position some of my colleagues took
that standards, once the industry figures out what they should be, it's fine for
them to be adopted by a government body and sort of made more official.
But I believe the industry is moving quickly
towards figuring out the right protocols and standards here.
And we need the space to innovate and to move quickly.
Hmm. What a surprise. Interesting.
So what does it say about the... Karen, why don't you,
and then Keach, talk about this?
I'd like you both to talk about it. Go ahead, Karen.
Yeah, so I think when Altman testified two years ago, he called for regulation, but a very particular type of regulation.
Don't throw me in that briar patch.
Yeah, there were a lot of senators that were asking about current day harms like copyright, impact on jobs, environmental issues, things like that. And Altman very cleverly sort of orchestrated
a complete shift of their attention towards future harms
in talking about the ability of these models
to potentially extricate themselves
and go rogue on the web and that kind of danger.
And so he was really calling for regulation in that bucket
by saying, the current models, we don't need to worry about those
now. Let's really, like, nail the regulation for models that don't exist yet. And so I
actually think the most recent time that he testified, it's a little bit more, he's being
a bit more clear now with his stances, but it hasn't been a change in a stance. It's
just I think he's realizing the original talking points that he wheeled out are no longer as effective.
And so he's testing out something new ultimately
towards the same objective.
Well, at the time, it was also Biden administration
who was putting out that executive order.
When he testified that time, I think I texted him,
you're lying.
And he goes, how do you know?
I go, your mouth is moving.
But go ahead, Keach.
So it's been fascinating to see how the cultural moment has changed with Trump
returning to office, right? Suddenly now everything is about China. China, China,
China.
So, yeah, we got that. Right.
So the entire justification for all this infrastructure investment for us not
regulating as much as it seemed like folks
maybe wanted us to a couple years ago is about competition with China.
That fits into the Trump administration worldview and it just happens to fit with what the economic
incentives are of open AI.
It's been quite an interesting pivot.
What do you think about the pivot? I think it is troubling to pump up an enemy
and use that enemy as justification for doing things.
We'll be back in a minute.
In need of a laugh? Good news!
The Tribeca Festival returns to NYC June 4th through 15th for a program overflowing
with humor and heart.
Tribeca Festival has everything, including the world premiere of comedy documentary We
Are Pat, a documentary examining SNL's It's Pat sketch, a Tribeca Talks conversation
with comedian Jim Gaffigan, and a cast reunion and screening of beloved film, Best in Show,
featuring some of the biggest names in film and TV. Get your tickets now at tribecafilm.com.
So every episode we get a question from an outside expert. Here is yours.
This is Casey Newton.
I am the writer of the platformer newsletter, the co-host of the Hard Fork podcast, and
of course, Kara's former tenant and forever houseboy.
Keach and Karen, I am a big fan of both of your books.
So when you consider the broader societal and ethical implications of
OpenAI's work to build artificial intelligence, what do you think is the
most significant unintended consequence of OpenAI's rapid development that you
believe deserves more public and policy attention right now?
Great question from Casey, my houseboy.
Keach, you go first, then Karen.
I think it's about economic concentration.
Sam has had a long history of being interested in things
like UBI and being concerned about the impacts
that AI is going to have on the average person.
But if you actually look at what they're doing,
they are concentrating
economic power to a degree that we have never seen before. And I don't see any evidence
of brakes on that, really, at all. Yeah.
I would agree with you. Go ahead, Karen.
I think I would add one more layer, which is we are also seeing an enormous amount of
political power concentration as well now. The way that Silicon Valley has aligned completely
with the Trump administration, and the Trump administration
is now putting the full firepower of their authority
behind things like allowing them to build data centers
on federal lands using emergency presidential powers,
and helping them strike these deals in the UAE,
that is a degree that we've also never seen before.
And I do not think that what we've seen with Musk
being able to go into the government and create Doge
and kind of start decimating the government
is actually inherently different
from what these other AI companies are doing,
which is they're really positioning themselves
to gain as much economic and political leverage as possible
within the US and around the world such that they will reach a point, and I
personally think that they've already reached this point, which is why I call them empires, where they are able to act in their self-interest
without any material consequence.
Such as ecological problems or things like what you write a lot about.
Yeah, exactly.
Meaning they'll get to do whatever they want, locally and globally, because this administration has taken off every brake, essentially,
and in fact is probably benefiting in some fashion that we're not aware of.
So on May 15th, Sam tweeted, soon we'll have another low-key research preview to share with you all,
which turned out to be OpenAI's new AI coding agent called Codex.
To be clear, low-key research preview is a way they described ChatGPT when it came out,
and Altman is clearly trying to draw parallels.
It was the same day that OpenAI announced that its flagship GPT-4.1 model would now
be integrated into ChatGPT, but they've also had some issues with a recent update, as the
chatbot was apparently too sycophantic and they had to roll back some of those changes.
How important are these new releases, Keach and then Karen?
Well, I think it was interesting that we saw Fiji Simo, the Instacart CEO, recently be put in charge of
applications.
She's like the Sheryl Sandberg of this thing.
That's how people are characterizing it.
Right.
She has history with advertising and products.
And so you can kind of see the kind of company that OpenAI is trying to become.
It was pointed at becoming a big consumer tech company, which I don't think anyone
could possibly have guessed, even when ChatGPT was launched. I certainly didn't see that coming. I thought it was
gonna live inside Microsoft, you know. So yeah, I think these most recent
products are just steps on the path toward that. I mean, we'll see more with
these devices. They've been talking to Jony Ive, we know. I think there's a
very good chance they're gonna become sort of an Apple-like consumer
company in the future.
Karen?
Yeah, I think there's sort of two things happening here.
One is that OpenAI is not really retaining its research edge anymore.
You know, I talk with developers, application developers, who are using the APIs for all
of these companies, and they're starting to create their applications to be model agnostic
so that when Anthropic is ahead
of OpenAI, they switch over.
They plug into the other API.
When Google's ahead of Anthropic, they switch over again.
So that really is starting to tell you
that the research side is becoming commoditized,
and so the competition is now who
can capture the most users with the most compelling user
interfaces.
So they're really trying to drum up
the releases in that regard.
But I think the second thing that it shows is also Altman's management
style. He actually had an interview with Sequoia recently where he touched on it
himself where he was like, I don't want a lot of people to be in one room
working on the same thing because then we get mired in bureaucracy where people
are just debating each other and there's lots of infighting
and he's a very conflict-averse person.
And so he was like, we just needed to be doing
a lot of things all the time
so that there's a few people each working
on all of these different things
and they're not gonna be in conflict.
And then we like ship, ship, ship, ship, ship.
I think it's very telling that now Fiji is coming to, I think, maybe create some kind
of strategy behind it instead of just having chaos.
Right.
She's a very operational person.
Exactly.
I would say he is not.
And obviously, there's been a lot of dramatic departures, although each one of them then
goes and starts their own company.
They're like, this is safety, and then they get billions of dollars, which is almost laughable
in some way.
In that regard, research does get left behind, because OpenAI was supposed to do the research and
achieve artificial general intelligence going forward.
Is that the main goal?
Again, the definition of it changes depending on who you're talking to.
I had a different definition from Dr. Fei-Fei Li the other day when I saw her versus someone
else versus the people who debate on Twitter who I try not to pay attention to.
Is that still the main goal?
Yes.
And I think part of what the Fiji Simo hiring is about is so that Sam can delegate and he can sort of focus on his
big goal, which is of course raising money and research, but being the guy who brought
the world AGI, right?
That's what he wants to be.
He wants to be a man of history.
So he wants to keep his eye on that ball.
I think AGI is still narratively the goal because it is the most narratively persuasive
and continues to have a lot of power in shaping public discourse and also continuing to rally people within OpenAI
towards a common goal. Right, because it's not like you're just making a search engine
then you can sell advertising against, which is kind of boring. Yes, exactly. But has OpenAI been
focused on research breakthroughs that would enable a so-called AGI in a while?
I don't think so.
I think they have largely shifted to a consumer product company, as Keach mentioned, and they
are really starting to maximize their models on user engagement, which, you know, I don't
think models being maximized on user engagement is going to lead us to an AGI, like, no matter
how they try
to define it.
And so I think there has long been a divergence at the company between what they publicly
espouse and even what they espouse to employees and what the priorities of the company are,
but it is starting to diverge even more dramatically.
Even more, between the researchers.
No, it's a consumer company.
Each of you, who would you imagine they think their real competition is?
Google?
Yeah.
I mean, I've been covering Google for a really long time, and this is the first time that someone's
really given Google a run for their money.
Which isn't such a bad thing.
Not just OpenAI, you know, all the chatbots, with them at the front.
But yeah.
I want to finish up talking about the future then.
One thing Sam has done is he's definitely shape-shifted himself in order to get along
with the Trump administration.
He's not become like Elon; some of them are definitely in the tank for those people.
I would not say he is, but obviously he went to the Middle East.
I never get a sense of some of the more ridiculous,
what's the opposite of Trump derangement syndrome,
Trump crush syndrome or something like that.
I don't sense that from him in any way.
I just feel like he feels like he has to work with him.
There's no other choice.
How do you look at that?
Yeah, he was in no way MAGA,
but I thought it was really interesting
that he did not support either candidate verbally during the campaign.
So that was like months, right? Months even before the election. He just kind of sat tight.
So he has advisors around him and they're reading the polling and all of that, right?
That he saw that he was going to have to work with this person.
So they find their areas of overlap and that's about building AI and infrastructure.
What do you think, Karen?
Yeah, he's a strategic guy and he's an opportunist. He is willing to align himself with the people
that are going to get him where he needs to go. And he's willing to suspend maybe his
own inner values for that. So I think that is ultimately what we saw.
And was I a little bit surprised?
I was, but then it made so much sense that I was like, oh, of course that he would have
done that.
I think Altman is trying to basically reach escape velocity with the backing of the Trump
administration.
He's trying to get to a point where all of the
infrastructure is already laid.
You know, the first bricks are already placed on the
ground and you just can't do anything about it anymore.
Even in the next administration, if it shifts
back to the Democrats, I think he's just trying to
move as quickly as possible such that it becomes very,
very difficult to unwind.
And I think that's, you know, it's not just OpenAI's strategy or Altman's strategy. That's been the story of Silicon Valley for
a long, long time. Just move fast until you've superseded the law and you've superseded other
mechanisms to rein you in.
Keach, you just mentioned something I want to come back to. You wrote about Sam's political
ambitions to become governor of California,
not this cycle, but maybe even president. He has denied these ambitions. How likely
do you think they are?
Well, that was a moment back in 2017 when he definitely did explore running for governor
of California. And he talked to people about, you know, the president thing. I don't think
he was super serious about it, but this was something that was in the air.
I think that he wants to be in the room where it happened.
And AI turned out to be the way for him to get there, kind of more than anyone could have guessed.
I thought it was really interesting that in the early years of OpenAI, he had said publicly that they had gone and tried to get the government to invest.
And they had been turned down.
They did, yeah.
And I think he's always believed that AI is something
the government should have been doing anyway, right?
That that's the ideal model and I guess
if they're not gonna do it, I'm gonna go do it myself.
So I think we're kind of seeing that moment
come to pass in some ways.
It's a little bit what Karen was saying,
that there is this sort of mirror image of China,
state-backed capitalism thing that I could very much see like emerging in the future.
It's already sort of here.
And I think he just wants to be there sort of at the center of organizing our society.
Do you ever see a rapprochement with Musk?
Yeah, he's a pretty flexible guy. I think he would be willing to do it.
I don't know, it's more Elon that I would question.
Who do each of you think
his biggest enemy is, then? Himself?
Or something else?
I mean, at this moment it is Elon, without question.
There's both like real emotional anger there on both sides, as well as competition.
So that fight is still very real.
I think that absolutely is true for this moment.
And I think what's interesting about Altman's whole history is he gets a lot of detractors over time because what I realized was if you agree with
Altman's vision for the future, he's the best asset ever because he's so persuasive at being
able to get you the things that you need to enable the vision that you share. But if you disagree with
him, he's the greatest threat ever because now you're up against someone who can just weave these fantastical dreams
that rope other people in, rope the capital in, rope the political power in.
And so he has encountered many, many people along his career that have become, you know,
enemy number one in that particular era.
So it's Musk now, but certainly there will be more in the future.
Last question. You've both dedicated years now to understanding the people, and the person in particular,
behind this technology, which is incredible and incredibly frightening to a lot of people.
I want to ask you both, what's your P-Doom?
What is the percent chance you think AI goes wrong, that someone creates an existential catastrophe for humans?
And let me give you the positive side. What is your greatest hope for it?
So why don't we start with Karen and then Keech?
So I kind of hate P-Dooms.
P-Dooms is the worst case scenario.
Yeah.
But I really think that we risk undermining democracy
because we are allowing so few people to accumulate so much
economic and political power, as I mentioned, and we are building these new forms of empire,
which are the antithesis of democracy. Empire is based on hierarchy. It is based on the
belief that there are superior people that can have the right, whether it's God-given
or nature-given, to rule above inferior people. And democracy is based on a beautiful philosophical notion
that people are created equal, and that that's why we all
have agency to actually shape our collective future.
And we are rapidly moving towards a world
where most people do not feel that agency anymore
and therefore are not actively participating in democracy.
And that is part of what we're seeing with the erosion of
democratic norms in our society right now. And I think the most optimistic version of the future is I hope there have been so
many people that are now activated in wanting to understand AI, wanting to grapple with AI,
wanting to be part of that conversation, whether that's artists,
whether that's educators, whether that's kids,
that I really hope it starts to shift the trajectory that we're currently headed on
and people will mobilize and start reasserting themselves
and re-enlivening democracy.
So, yeah, I don't think there's a very large chance
of existential risk from AI, but not zero, right?
But the possible upside, if you want to ask,
like the one possibly good scenario,
is that when they hit the wall
with this nonprofit conversion plan,
it means that the nonprofit is gonna remain in control.
And Sam has many times said that he envisions a future
where all the people in the world are gonna be able
to vote on what AI looks like.
And it was never clear to me like how the nonprofit
allows that and it's one of those things,
oh, we haven't quite figured it out yet,
but we're gonna figure it out.
But there is a desire to have some kind of democratic mechanism. And if they are not able
to wiggle out of the nonprofit, maybe that is something that can emerge over time.
You have no dooms?
I mean, you know, single digit percentage chance of total annihilation from the robots.
Thank you both. They're both terrific books. Usually there are all these books that come out
about certain companies. This is a company everyone should be paying attention to, early, later, in the future and so on.
So it's really important to understand where they came from, and especially this particular figure, Sam Altman.
Thank you so much.
Thank you.
Thank you, Kara. On with Kara Swisher is produced by Christian Castro Roussel, Kateri Okum, Dave Shaw, Megan
Burney, Megan Cunane, and Kaylin Lynch.
Nishat Kurwa is Vox Media's executive producer of podcasts.
Special thanks to Maura Fox and Eamon Whalen.
Our engineers are Rick Kwan and Fernando Arruda and our theme music is by Trackademics.
If you're already following the show, you don't have Trump crush syndrome.
If not, you have a P-Doom score of zero.
Go wherever you listen to podcasts, search for On with Kara Swisher and hit follow.
Thanks for listening to On with Kara Swisher from New York Magazine, the Vox Media Podcast
Network and us.
We'll be back on Monday with more.