The Bulwark Podcast - Ronan Farrow and Andrew Marantz: The Dangers Posed by Sam Altman
Episode Date: April 7, 2026
AI poses real existential threats. The global economy is dependent on it, it's being deployed in war zones and used for domestic surveillance, and it's increasingly integrated into our medical and financial sectors. But the guy sitting atop the world's biggest AI company, Sam Altman, is regarded by some colleagues as a liar, driven by a quest for power, and someone with sociopathic tendencies. When Biden was in the White House, Altman was worried about the limited regulation of AI; under Trump, he's loving that the shackles have come off. Plus, Tim on how the Dems need to get the politics of the Iran war right: Welcome converts into the fold, and prioritize American interests. Ronan Farrow and Andrew Marantz join Tim Miller to discuss their New Yorker piece on OpenAI's Sam Altman.

Show notes:
TNL is LIVE tonight at 7:45 ET on Substack and YouTube
Ronan's and Andrew's story in The New Yorker
Tim's interview with Karen Hao on the unchecked rise of Altman
For their buy 1 get 1 50% off deal, head to 3DayBlinds.com/THEBULWARK
Transcript
Welcome to the Bulwark Podcast. I'm your host, Tim Miller. On today's show, we are going to take a little swerve into AI politics with the authors of a New Yorker profile on OpenAI's Sam Altman. But first, a programming note and a little bit of what's on my mind. It relates to the Iran war. Tonight, the Next Level podcast will be live on Substack and YouTube at 7:45 p.m. Eastern to cover the latest war crime red line from the madman that the American people elected president again. So make sure to tune in for that. Here's the latest.
It was a bleat from his social media account this morning, Tuesday morning.
A whole civilization will die tonight, never to be brought back again.
I don't want that to happen, but it probably will.
However, now that we have complete and total regime change,
where a different, smarter, and less radicalized minds prevail,
maybe something revolutionarily wonderful can happen.
Who knows? We will find out tonight. One of the most important moments
in the long and complex history of the world.
47 years of extortion, corruption, and death will find
the end. God bless the great people of Iran. You know, treating the death of a civilization as some, like,
Apprentice reality show thing where there's a cliffhanger. Can't wait to see what happens on next
week's episode. It's truly sick. It's truly deranged. It's obviously deranged. And even people who
otherwise supported Trump can see that. We know this morning that Iran recognizes it as
completely deranged and they recognize that they have a counterparty that's not rational or worth
dealing with. They announced that they're cutting off all negotiations following the threat.
So pretty ominous stuff ahead. And we don't exactly know what will happen tonight, but we do know
is that the consequences of his chaos and lunacy are going to impact people's lives, not just
in Iran, but here at home and around the world.
We saw last night a Saudi petrochemical plant get hit.
It's a random attack, but it's going to create a major supply chain disruption.
It's going to impact the cost and availability of everyday products throughout the world.
And if Trump does not turn back on his threat now, this is only just a small sample of what's to come.
And so as people start to feel the damage that Trump's War of Choice has wrought, they're going to be pissed.
Right. Some of those that supported him are going to feel betrayed. And as a result, this is the best opportunity for the Democrats to regain credibility with voters that have turned away from them during the Trump era. The human stakes are obviously more important than the political stakes, but it's because the human stakes are so high that it's critical that we get the politics right now. This relates to something we've been talking about around here all week. I feel like I've said my piece a little bit in the discourse
about whether Democrats should engage with anti-war streamers on the left, even if they
have problematic or bigoted views. So I'm going to leave that be on this show. But as I've been
thinking through my arguments about that and why I was making the case that I was making and watching
at the same time some prominent manosphere figures break up with Trump and some prominent
MAGA figures break up with Trump, I started to noodle on, like, what was underneath all that,
what was underneath the argument that I was trying to make.
And there are two topics I want to get to before we get to our guests.
One is the importance of taking yes for an answer.
And the other is how to think about America first and how to engage with people that see
themselves as America first if you're part of the pro-democracy coalition.
On the first topic of taking yes for an answer, I know, I've heard it.
I think some people might see my argument
here as, like, strategic positioning for the future or a performative bit or engagement bait.
I promise you it really isn't.
I am genuinely striving to be a person that takes yes for an answer, like in politics and in
life.
When people come around, I just think it's important to accept it for what it is.
This doesn't mean that we should allow ourselves to be made into a sucker or to be fooled
by people that are pretending to be converts, but it does mean that we should be open-minded
about folks that are changing their point of view. It's important to recognize that a person can
be genuinely fooled or they can be blinded by their own motivated reasoning or tribal prejudices.
And as a result, they end up participating in something whose true nature they didn't really see.
I know that might be hard to believe, given how obvious and awful Trump has been for so long now.
but compartmentalization is a hell of a drug.
Tribal mindset is a hell of a drug.
We all have seen it.
And so I think that in order for us to ever move forward
out of this awful place that we're in,
we have to accept that there are people
who are genuinely trying to change themselves,
their engagement with politics,
their engagement with the country,
their engagement with
fellow Americans, and try to do the best we can to foster an environment where they can come on over.
I want to play a bit from Tim Dillon. He is, I think it's fair to call him, an America First comedian.
I don't know that he's MAGA, but if you listen to him, from a comic standpoint, he sounds a
lot like Marjorie Taylor Greene, and he's had Marjorie Taylor Greene on his show and talked her
up about how she should run for president.
So that's kind of his politics.
I want to play a little bit from his show over the past weekend for you.
It's the greatest con in history.
By the way, it's the greatest con in history.
I mean, truly, it's the greatest.
It's the, and I don't say great in like a good way.
It is truly the most successful con in history.
It makes Enron look like a guy doing three-card monte on the street.
Anything that you can remember and identify as a complete and utter scam,
this is the greatest con in history.
To run as America First, and you're going to take care of America, and then turn around and go,
you know, all of these things.
Daycare, Medicare, Medicaid.
We want nothing to do with that.
We're fighting wars.
That's what we're here to do.
We're here to have a defense budget of $1.5 trillion and we're here to fight wars.
It is the greatest scam in history.
You've got to hand it to him.
And I mean, truly, and not, you know, again, not in a moral way, but like you got to hand it to him.
this is the greatest about face in political history that I have really ever seen.
A lot there.
I'm not sure we have to hand it to Trump ever or to ISIS.
And I think that we can be honest,
we're here among friends.
This was not the greatest scam in history.
And 70-plus million people,
everyone listening to this podcast, easily avoided being scammed by this.
Many, many, many, many, many, many people told the scammees that they were being scammed and
why it was happening. So for the rest of us, it is a little frustrating. It would be nice,
at least if you're going to call it the greatest scam in history to give us our flowers on this
one. You know, I mean, we must be all geniuses if we avoided getting scammed by this two-bit
huckster. And these bro rants, the MAGA-bro rants, where they're talking about how mad they
are about this, they never do seem to, like, kind of say, hey, I do have to hand it to the
resist wine moms watching MS NOW. It seems like they had this one. They don't ever do that.
So I get it. I get it. Many of us listening to this want to shout at Tim Dillon and be like,
you're a fucking moron. All right. I want to do that. I'd love for this entire monologue to have
just been about how people who are fooled by Trump have baby brains and they don't have the ability
to abstract. And I worry that I'm being a little ableist
when I discuss the manosphere, since they haven't achieved object permanence and they're too stupid
to understand that one of the stupidest Americans pulled the rug over their eyes. I'd love to do that.
I'd find a monologue about that very satisfying. But, you know, here we are. We're here. And I think
that as satisfying as that is on the surface, I also fundamentally believe that humans are redeemable,
that people are redeemable, that they have good and bad impulses, that they have good and bad
judgments that we all do.
That people like Tim Dillon or Marjorie Taylor Greene can move forward now and use their skills
of podcast ranting while bimboifying themselves for good.
Maybe Tim Dillon can use his skills for good now.
It seems like he is.
And so let's accept it.
Let's take yes.
If the Bulwark is about anything, it should be about welcoming
converts. We started as a bunch of converts. So even if you refuse to give me that, you're like, Tim,
you're just being a softy, you know, we need to be more hard-nosed about all this. Okay. Just looking at
this from a Machiavellian perspective. The more people who bail on Trump, the better right now.
And if the people who bail on Trump can come into an uneasy alliance with the Democrats,
even temporarily, for this midterm, but even better for the next presidential election, better still.
like Trump's power in domestic affairs is fading.
This is why he's doing the insane wheels off threats about Iran.
He feels like he has control in Venezuela and Iran.
He's losing his ability to bully people here.
Just look at the difference between April of 2025 and 26
in the way that all of these institutions in the country were cowering to him then.
They're stopping that now.
People are standing up to him.
People were in the streets.
And so the more people that jump off the Trump train
and the more people who oppose him
and the more commentators who oppose him
and the more content creators who oppose him,
the weaker he is.
The lower his numbers get,
the harder it's going to be for him to fuck with the midterms
or the 2028 elections.
You can only do so much
when it comes to
chicanery around
an election system that,
sorry, buddy, isn't nationalized.
So this is good.
want this. This weakens him, the more people who come along. And so, as far as I'm concerned,
for now, I don't really care how fucking Looney Tunes these people are. I don't care what gross
comments they've made in the past. If they want to turn over a new leaf and oppose Trump
in this moment, as far as I'm concerned, the water's warm. And that takes us to the, I think,
bigger question of how to then build a viable coalition for defeating MAGA,
which might have to include some of these people,
which will have to include some of these people.
I mean, let's be honest, the more I think about this,
the more convinced I am that for the Democratic Party
to have an actual majority,
a real majority where they're winning all over the country,
not just in blue enclaves,
they're going to have to speak to America First voters.
This is going to be a tough pill for some people to swallow.
But there are areas of common ground
between Democrats and America first voters.
So I want to just talk about who I'm not talking about
and what I'm not talking about first.
I don't want the Democrats to become America First in the branding sense, in the sense of Lindbergh and Trump.
I don't want anti-Semitism.
I don't want them pretending like the U.S. can just act as if the rest of the world doesn't exist.
I don't mean we should zero out aid to the poorest people in the world. Or, for some reason, America First always seems to coincide with siding with the fascists in other countries.
I'm not for any of that.
And I also don't think that Democrats are going to get the America First,
capital A-F, the America-First-as-fuck people, the ones with the red hats. Like, they're not
going to be gettable, most of them.
Okay?
We can just acknowledge that.
But there's another category of people out there, a couple different categories of people
really overlapping, that are gettable.
And they just don't believe that politicians cared about people like them.
And so they were attracted to these anti-establishment figures, whether it be in the
Manosphere or on the far left, and some cases on the far right.
They thought that, like, the political
class in both parties screwed them over.
Their leaders cared more about their donor friends and foreign entanglements and corporate
interests and Georgetown cocktail parties and getting medals and going to conferences and Davos.
They cared about all that shit more than they cared about the average Joe or Jaden out there
in the country.
A person that has that worldview is ripe for the taking right now because they have legitimate
grievances and Trump sold them out.
You heard the case from Tim Dillon.
They don't like this war.
They're unhappy about the economy.
They don't understand why they're spending money in Iran.
And let me tell you, things are about to get a lot worse on one or both counts
when it comes to this war or their economic standings, probably both.
And once the economic struggles really start hitting home, not like what we're seeing
right now, which is gas prices up a little bit, which sucks.
But once we get into recession territory, or once the supply chains get so fucked up
we get into a real 2020-style inflation, once we have stagflation, like the late 70s,
these people are going to be more pissed than ever.
And guess what?
When they learn about the corruption, because they're not hearing about it from their outlets,
when they learn about how rich the Trump kids and the other insiders are getting,
they're not going to like that too much either.
So the Democratic Party right now has to credibly make the case that the complaints of these voters
are being heard, that their grievances are being heard. They should not be dismissed. They should not be attacked in this moment. Democrats
should be telling Americans who are upset about Trump's focus on his ballroom and the stupid war
and all the stuff that they don't care about. Democrats need to tell those voters that they will
prioritize American interests first. Once they take back the majority, they will be the ones that
care about the forgotten man. They need to fashion a lowercase America First of their own, one that's
in line with liberal values, that talks about economic opportunity, that talks about the common good,
and make a pitch to America First voters that's about something other than demonizing brown people.
But to do that in a way that's credible, they need to be doing it loudly right now, maniacally,
really, right now. They need to be opposing Trump and
talking to the disaffected Trump voter, the disaffected America First voter, about how they are going to respond to their priorities.
I just have to say it.
The left flank of this party seems to get it right now.
The people that are doing this best mostly come from the left flank of the Democratic Party.
Some from the middle are still a little slow on the uptake.
I've liked what I've seen from Ruben Gallego as one prime example.
So if you're a regular old Democrat in the main body of the Democratic Party and you're looking at what you should
be doing, look at what Ruben's been doing lately. People are pissed. They're pissed for good
reason. Democratic politicians should be channeling their pain, Bill Clinton style, channeling their
rage, and trying to repurpose it for good. Now, right now, this moment during this war, during this
economic crisis is a time for people who want to save the country from MAGA to jump into battle.
They have to be unapologetically anti-stupid war of choice, anti-corruption, anti-Trump too,
and committed to putting Americans first and making sure that the American voter realizes that they care about them and their concerns.
That's the job.
And I think that right now, trying to come up with, trying to fashion a credible message that takes
from America First, that's simple for people to understand, that breaks through the bubble,
that goes out and reaches people who are in different media environments, like, this is
the job right now for all of us. So that's what I'm going to be doing. That's what Democratic politicians
should be doing too. All right. So if you have a new house, you're doing a remodel, or you're trying
to prevent your neighbors from peeping in your windows, maybe one of the most annoying things
to have to shop for is blinds.
I was taken aback when we moved to New Orleans
about the cost and investment and time
that went into blind shopping.
And luckily, our sponsor, 3 Day Blinds,
has solved this problem for you
and is making it a lot easier and cheaper
to get some new blinds to fancy up the house.
There's a better way to buy blinds, shades,
shutters, and drapery.
It's called 3 Day Blinds.
They're the leading manufacturer
of high-quality,
custom window treatments in the U.S.
and right now, if you use my URL,
3DayBlinds.com/thebulwark,
they're running a buy one,
get one 50% off deal.
We can shop for almost anything at home.
Why not shop for blinds at home too?
3 Day Blinds has local, professionally trained design consultants
who have an average of over 10 years of experience
and provide expert guidance on the right blinds for you
in the comfort of your home.
Just sign up for an appointment and you'll get a free,
no-obligation quote the same day.
If you're like me and you're not very handy,
the DIY project of installing blinds?
That's intimidating.
That's something you don't want to deal with.
Well, the good news is the expert team at 3 Day Blinds handles all the heavy lifting.
They design, measure, and install, so you can sit back, relax, play Minecraft, stream, binge on Bulwark YouTube videos,
and leave the blinds hanging to the pros.
Right now, get quality window treatments that fit your budget with 3 Day Blinds.
Head to 3DayBlinds.com/thebulwark for their buy one, get one 50% off
deal on custom blinds, shades, shutters, and drapery, and for a free, no-charge, no-obligation consultation.
Just head to 3DayBlinds.com/thebulwark.
One last time, that's buy one, get one 50% off, and you head to the number 3,
D-A-Y, blinds.com/thebulwark.
Worked out great for me.
You'll love it too.
Get on it.
All right.
Now, while we're in catastrophe talk, that's a good time to chat about artificial intelligence
and our Silicon Valley betters, who are among the people that
we should probably be railing against in the moment.
And I want to bring in two guys who have an amazing article out yesterday in The New Yorker.
The first guy I think you might have heard of.
He's an investigative reporter and contributing writer to the New Yorker.
He won the Pulitzer Prize for his reporting on Harvey Weinstein.
It might be right behind him there on his me wall.
It's Ronan Farrow.
And alongside him, it's staff writer at the New Yorker.
He writes about technology and politics.
He's the author of Antisocial: Online Extremists, Techno-Utopians,
and the Hijacking of the American Conversation.
It's Andrew Marantz.
Hey guys, you have a piece out yesterday titled Sam Altman may control our future.
Can he be trusted?
Question mark.
That seems like Betteridge's law of headlines kind of covers that one.
You know, any headline with a question mark?
The answer is no, no.
But I'm wondering what your top takeaways were.
You sat with him, I think he said 10 times.
And we'll get into the backstory and all that.
But just at the top level, like, why does it matter whether he can be trusted?
What was your sense of him?
I assume a lot of CEOs can't be trusted.
More than a dozen times.
Andrew, you want to tackle that?
Well, I mean, I think this raises actually an important point, which is, you know,
yes, a lot of CEOs in corporate America are not, you know,
beacons of moral responsibility.
But the thing about Sam Altman is not that, you know, we went out looking for the person
who, you know, inspired the most, you know, divisive opinions or who had the most critics.
The thing about Sam Altman is that the standard
by which he asked to be judged from the beginning was,
I'm not going to act like a normal corporate CEO.
And in fact, open AI will not be a normal kind of company.
In fact, it won't be a company at all.
So, you know, this feels like ancient history now,
but we go back to the founding of the company,
which is just about a decade ago.
And he emails Elon Musk out of the blue and says,
AI is going to be so existentially dangerous,
like literally existentially,
like it will kill everyone on earth unless it's handled properly.
And the way to do that is to not leave it to the evil megacorporation
Google or to leave it to China or some American competitor, but to have us, the good guys,
start an AI safety nonprofit research lab. And the key component, among others, is we're going
to try to ask for all regulation that we can from the U.S. government. We're going to try to
share information openly, and it's going to be run by people of the utmost highest integrity.
And if you sense any slipperiness or any untrustworthiness or any power-seeking behavior,
we need a nonprofit board who's empowered to fire that person. So those were the standards that he laid out
at the beginning. I think that was the most interesting thing. I also talked to Karen Hao, who wrote a
book about Sam, I don't know, a couple months ago now. People can go check that out as well if they want.
And, you know, as an outsider, it didn't occur to me the degree to which he doesn't, like,
really have technical skills with regards to AI, and that he was, he's kind of the front man and
the salesman for this and the executor, really, of the program. And, you know, it also didn't
occur to me like kind of what you laid out, like the degree to which
The central pitch of Open AI was that this is dangerous and you need to have this in the hands of responsible people.
And I think that, to me, you know, sort of shines a light.
I'm like, why this is, why this is so relevant.
So talk to us about that, like a little bit more about that origin story at the start, the beginning of the company.
Well, we should.
And Ronan, I want you to jump in in a sec, but just to finish that thought, like there basically is this kind of basic inductive logic problem here, right?
where it's like, were you telling the truth then or are you telling the truth now?
Because if you fast forward to now, all this safety talk, all this, you know, doomer hysteria is very derided in Silicon Valley and in Washington, including by Sam Altman.
But what is sort of lost in that is that we quote Sam Altman himself extensively in this piece being the doomeriest of the doomers and saying if we don't solve the alignment problem, which is this basic unsolved problem in AI research,
that, you know, what if the machine's interests are not aligned with our interests?
It's a great book by Brian Christian called The Alignment Problem.
Sam Altman, among others, was one of the loudest proponents of how dangerous this problem was.
And we have people in the piece who say that Sam Altman went out of his way to cold call them and say,
you are a researcher in this tiny field of AI alignment.
I need you to come on board so I can endow a billion dollar prize to solve this problem.
you know, you have to read the piece to get the full details, but basically that doesn't happen, right?
The prize slips away. The problem goes unsolved. And today you have Sam Altman saying,
the alignment problem remains unsolved, but like, don't worry about that whole it might kill us thing.
Like that was then, this is now without saying that the problem is solved. So either it was not true then or it's not true now.
And I'd point out that there are immediate concerns that even people who are kind of pragmatists
have about Altman. It's not just the doomers. We explore a range of opinions in this
piece and a range of anecdotes that support those opinions. On the one end of the spectrum,
you have people who say, you know, one board member said he has two traits, a strong desire
to please people to be liked in any given situation, and a sociopathic, almost, is the
quote, lack of concern for the consequences that may come from deceiving someone. So you have
people like that who are saying there is a pattern of serial deception here that is untenable
even for just the chief executive of any major business. Then you have people who say, look,
as Andrew is alluding to, this is different. AI has real existential stakes. And it doesn't
have to be the, it's going to kill us all, Skynet Terminator scenario. There's the way in which
our entire economy has tilted into dependency on AI. And economists are warning of a recession.
If OpenAI and other major companies go under or underperform, a lot is at stake here,
millions of jobs exposed to disruption. We see how it's being used to very rapidly and
effectively devise bio-weapons, how it's being deployed on battlefields, how it's now increasingly
integrated in our medical and our financial infrastructures.
So the scenarios that the so-called doomers warned about are less airy and ethereal with each passing year.
And meanwhile, you have what we document about Sam Altman, yes, but I think the reason we looked at it is these questions about integrity and the level of integrity we should demand with the people who, in the words of one person we quote in the article, have their fingers on the button.
That is a big question.
It goes beyond Sam Altman.
And he is a particularly extreme case where even against the baseline of people expect dishonesty from Silicon Valley executives who build businesses on hype, people come out of rooms with him commenting on this. And we uncover these reams of documents about this and efforts across his career to kind of force him out over this. But the problem is not just Sam Altman. The question of integrity is something that I think we both felt deserved to be front and center as this technology is accelerating.
And it's fallen away in too many cases.
I want to get to kind of the present day stuff in a second, but just going back,
because I think that Sam being a pitch man and him being in conflict with the folks who are actually working on the technology,
I think it's a pretty important part of the origin strikes.
It ties together, like the integrity element with the technology element.
And the two people you focus on in the article, one is Ilya Sutskever,
am I saying that right?
Ilya Sutskever, who was, it seems like, the first genius person that Sam and Elon recruited to actually work on the technology, selling him on the importance of, you know, doing this ethically and for good.
Another character is Dario Amodei, who ends up leaving and running Anthropic, the competitor AI company right now.
You have from both of them, like, memos that were taken at the time, at the beginning of the company, where they're starting to sense that, like, Sam can't be trusted.
Talk about those sets of memos a little bit.
To Ronan's point, these are both individual and structural concerns, right?
So the individual part has to do with this mistrust.
And I think a lot of people understandably have skepticism of people like Ilya Sutskever and Dario Amodei, because in the present day, they run
competitor companies to open AI, right? So there's reason to be skeptical. But I think the structural
thing that really is undeniable here is that, again, it's hard to imagine how much these people
were in this unique situation, right? They're constantly comparing themselves to the Manhattan
Project, to Robert Oppenheimer. And the reason they're doing that, A, is that they think the thing
they're building has massive utopian potential, you know, to power the world, to create unlimited
energy and massive dystopian potential to literally destroy the world. And the few scientists who
are capable of building it, right? As you pointed out, Tim, Sam Altman is not one of them, right? People like
Ilya Sutskever and Dario Amodei and others are capable. And so in order to harness that talent,
there are major obstacles. One is these people all have lucrative jobs at places like Google. So you
need to tell a story that gets someone like Ilya in his case to turn down a $6 million a year
counter offer from Google and go work for this scrappy nonprofit safety lab called OpenAI.
But another thing is that these people disproportionately were terrified of building this thing,
of bringing it into existence, right? You have all these, you know, atomic scientists who are like,
I don't want to build the atom bomb. And so in order to get them to build it by their lights,
this, you know, potentially most dangerous invention in human history, part of the pitchman's job
is to tell them a compelling story about you not only need to build it, but you need to build it
for us and not for the other guy.
So that has to do with, you know, this game theory of, if we don't get it first, the bad guys will
get it first.
But step one of that logic is to convince each individual discrete group of people, I'm actually
one of you.
So one thing we document over and over in the piece is that someone like Altman, because
he is a really good pitchman, according to these documents, which are most of them never
before seen by the public, never before reported on. It really gets into this granular detail
of how that pitch can land, right?
There are other people out there like Elon Musk who are maybe a little more ham-fisted in the way
they try to approach this rhetorically. But what we keep seeing in these documents and hearing from
interviews is that Altman is able to get into these groups and say to the safety-obsessed
kind of Dumer people, I'm really one of you. And then turn around to investors and say,
actually, let's go make a ton of money. The problem is, and this is what emerges in these
documents and accounts from, you know, there's more than 100 people we talk to here, that works
up to a point, right? If you're telling everyone that your agenda is their agenda, even if those
agendas conflict, you can accumulate a lot of money and you can rev up a lot of growth. But then in a lot
of these cases across his career, there are just uprisings of colleagues who say enough is enough
and feel like the conflicting assurances to different people, and we document a lot of different
examples of it across the piece, just create too much chaos.
And as Andrew is pointing out, in this particular case, with these particular stakes, there is the
existential problem of the pitch is about, we've got to go slow, that's the mission statement,
we're building a non-profit.
That's what Open AI, again, originally was.
And then a situation where over time, it seems that Sam Altman was telling other constituencies
that wanted growth, that wanted profit, that wanted to, a framework he uses a lot, win:
No, we're going to go as fast as possible.
So, for example, we look at internal documents from the early days of OpenAI, in periods where their explicit pitch was: look, we don't have the money and resources that Google has, but we're the good guys. We're a nonprofit. That's how we're going to stay. People were taking pay cuts to join, and many of them feel burnt by that now. They feel they joined something that, we document looking at these communications internally, even at the time the co-founders were frustrated with.
Greg Brockman, Altman's second-in-command, in one of his diaries, talks about it being potentially a lie, that they were pitching this as a nonprofit and then turning around and spinning it into a for-profit.
And so some of these moments, particularly what Ilya wrote, turn into the basis of this period where Sam gets pushed out as the head of the company for, what, five days by the board, doing what the charter had set it up to do, which is, you know, ensure that if somebody gained too much power, or was power hungry, or was going to use this technology in the wrong way, they could be removed.
He gets removed for a short period of time.
Maybe I'm just getting hung up on this because of what we've learned immediately.
But I do think it's pretty telling, as he tries to wiggle his way back in, you know, he hires crisis comms guys with reputations for being hard-nosed, like Chris Lehane. And then he pushes his way back in, and he replaces the board, and he brings in Larry Summers. I'm just like, I think that that anecdote tells you a lot about, you know, the mindset of Sam Altman in this period.
The ultimate legitimizer, Larry Summers, pristine of reputation.
Yeah, when you're looking at somebody, you're like, I just want to demonstrate that, you know,
I'm not doing this in order to gain power, but I'm, you know, I'm doing this because I want
to bring in a moral arbiter who can be a judge here. We're going to bring in Larry Summers.
So anyway, just talking about anything else that struck you about the research on that, that period.
I think when it comes to Sam unraveling the effort to fire him, right, having pitched a company
where there was this specific shape to it, where the mission was not about growth, where normal
for-profit imperatives didn't govern, and where a board with a non-profit mission about protecting
humanity from this technology could fire a CEO at their discretion in the interest of that
mission and then just made that all go away. That is significant in this story, right, because it
says a lot about Sam Altman. But more than that, it's another moment that feels like an
inflection point in the AI business. That is a moment where the convictions of these early AI founders
who all said that they cared about safety were tested. And what was proved out is when the rubber
meets the road, the money talks. There were a lot of investors who had put a lot of money into
Open AI. In this particular case, the board badly fumbled the ball. This was a board of, in the words
of one former member, JV people, who really were not cut out for this cutthroat corporate
warfare. And they did the firing. And this is the first piece that I think really documents in
meticulous detail why, and what their proof points were. A reader can decide: do they think there was enough lying? Do they think that Sam dissembling about, you know, whether a model had been tested, whether a model had been leaked, what requirements were in place for safety testing, matters
enough? But they had their reasons. They didn't express them adequately. They didn't make the case
in the public arena the way Sam was. And there was a ready audience of investors who were thinking
about the bottom line. And, you know, none of these people are villains. They were also flummoxed.
They were saying what the hell happened.
And acting on, I think many of them now, in retrospect, admit poor legal advice, they were afraid Sam was going to sue them.
And so they released this kind of mealy-mouthed statement that didn't clarify anything, saying that he had lacked candor.
So Satya Nadella, we have, you know, him in this piece calling Reed Hoffman and saying, what happened?
And then they're all calling around trying to figure out, you know, was it sexual misconduct?
was there embezzlement?
But this was a different kind of critique.
And it was a critique that really requires the kind of gradual accumulation of wrongs that we document in this piece.
What the market proved out is people, at least with the amount of information that was available at the time, didn't care enough.
Now, I would point out one last thing.
There are some people who at the time talk about having given Sam Altman the benefit of the doubt and even helped him come back in that investor community.
who now tell us looking back and seeing how in their view the alleged lying has persisted
since then that they're not sure they would have done that again and that even if they
wouldn't have fired him at the time or allowed the firing to go down, there would have been
much more severe warnings. They would have done more to ascertain that this wasn't a stable
trait that was going to cause future problems because since then there have been a lot of cases
where even outside of Open AI and these smaller examples about the safety of their products and so on,
there's just deals being announced that other parties sometimes feel are conflicting.
There's a fight going on with Microsoft right now that we write about in the piece about that.
And also just final thing, just to listeners who are thinking, like,
how could some of these employees and board members have been so naive that they were shocked
that a business guy wanted to make money from something, even though he said something else?
a lot of the people now say the same thing about themselves.
Like a lot of people who we've spoken to say in retrospect, yeah, I guess I was naive,
but I really believed it at the time.
And the piece does, I don't know if you guys did this intentionally,
kind of lay out the ways in which Sam at times, you know, lies or evades the truth.
I mean, as you guys kind of go through the history with him in the various interviews,
I notice there's a lot of times where Sam doesn't recall that or Sam thinks it went a different way
than somebody else said.
It seems like that was a trend in your conversations with him.
No?
Yeah.
I mean, I think we both cared very much about getting his and OpenAI's responses to everything. Andrew?
Yeah.
I mean, the thing is, when you go back through this stuff, we do actually have a very meticulous fact-checking process.
And, you know, one thing that, when you're dealing with the daily news cycle of, you know, TV, newspapers, whatever, you're not often asked to account for is, like, did you say this thing in a closed-door SCIF meeting with intelligence agencies in 2017, right? So the fact check that we came to them with,
and also before the fact check, the interview stuff that we came to them with, I think it's just not
the level of detail that you're used to being asked to account for. But the fact is, you know,
the pitch over the whole decade taken as a whole has all these inconsistencies in it that really
are just hard to account for. So I think often you would get from an executive an in-the-moment explanation, and then later they'd say, oh, did I say that? I meant something different.
I don't know about you, but I like to keep my money where I can see it. And if you look at the big wireless carriers, there's all kinds of fees and junk you don't expect. So this can be
the moment for you to switch over to our friends
at Mint Mobile. Mint Mobile
allows you to save big money
particularly compared to the other wireless
carriers. It's an option
if you've got a teen in the house and you've delayed and delayed letting them have a phone, that'd be a good, affordable option for them.
So you're not paying out the wazoo for your child's phone habit.
That's something to think about.
Mint Mobile's here to rescue you with wireless plans starting at just 15 bucks a month.
All plans come with high-speed data and unlimited talk and text delivered on the nation's largest 5G network.
Bring your own phone and number.
Activate with ESIM in minutes and start saving immediately.
No long-term contracts, no hassle.
Mint Mobile's wireless service, I can tell you, is excellent. You think you're going to leave one of the main carriers and it's going to impact the quality of your service? Not a problem. I can promise you, with Mint Mobile, service will be excellent.
If you like your money, Mint Mobile is for you.
Shop plans at mintmobile.com slash bulwark.
That's mintmobile.com slash bulwark.
Upfront payment of 45 bucks for a three-month five-gigabyte plan required, equivalent to $15 a month. New customer offer for first three months only, then full-price plan options available. Taxes and fees extra. See Mint Mobile for details.
I'm going to proceed.
One thing that I struggle with reading both Karen's book
in your piece about this is
I've never met Sam, but he's pitched
by all these other people as like this
very
convincing charismatic Jedi
that wins people over.
And I don't know, I've consumed
a ton of his interviews now.
And I find him to be like robotic
and like devoid of
human feeling or touch.
And so you guys were with him a dozen times.
Like, help me with that disconnect.
Like, there are a lot of people I don't like particularly,
but I understand that they're charismatic.
I mean, our current president, for example.
But, like, I don't get that at all with him.
We obviously capture a range of opinions on this.
Yeah, and Andrew and I have had this conversation internally, too.
In the piece, we have people who are very convinced and wowed by his persuasive powers
and much less so.
I had this conversation with my mom.
last night she read the piece and she called me and she was like, oh, you know, I just,
I see him in interviews and I think he has this vulnerability and I have kind of my mom instincts
kick in. So I see why, you know, he, you know, he is so persuasive to people. I know. I know.
I will say when I look at the range of perspectives in the piece, clearly the man is an incredibly
savvy pitchman as we've been discussing. And I think there's a through line of character analysis
from people close to him,
where they talk about him
really being void of doubt
in the moment. As he is
telling you that the thing you care about
most is the thing that he
cares about, he's saying
something that it feels like
he believes. And I don't think
there's a lot of sort of follow-up or
self-questioning. And according to
many of these critics, there really is no true
North that provides a baseline
of consistency or facts underneath
all of that. There is
a portrait of him from a former board member who's on the record in the piece, Sue Yun,
that I find very measured and interesting, where she says there's a lot of people in Silicon
Valley who look at this trait and this kind of dissembling on things small and big.
It's everything from in the piece we talk about at one of his earlier startups,
him supposedly claiming to colleagues that he was a champion ping pong player and then
turning out to be one of the worst ping pong players in the office.
And he says he was probably joking.
but there's small stuff like that.
And then there's, you know, the bigger showdowns
where we talk about, like,
he calls Daniela and Dario Amodei into a room
and accuses them of plotting a coup against him,
and he attributes it to another executive who's told him this,
and then they call the executive in,
and that executive says, I didn't say that.
And then Sam, in the moment, says,
well, I never made this claim either.
And they're like, you just said it.
And this is another one where Sam says,
you know, it's not quite how he remembers it.
He has differences.
you can read the piece for more.
But this board member, Sue Yun, talks about,
look, people see all of that,
and they say he's Machiavellian,
he's some villain.
But she pushes back on that
and says, look, having dealt with the guy,
I think he is,
she uses the word to the point of fecklessness,
just convinced of the shifting realities
of his sales pitches.
It goes back to this lack of doubt.
And so, therefore, he says things
that people wouldn't say in the real world
if they're connected to the real world.
Yeah, and the other thing to remember about this pitchman charisma question is who the audience is, right?
So you mentioned, Tim, you know, Trump has a charisma, you know, Obama has a charisma, right?
But that's not the kind of charisma that would necessarily work in these rooms of engineers.
Sure.
Or even with regulators, right?
So if you're an engineer, what you want is someone who's really kind of thoughtful, humble, conscientious.
Dorks like him.
Right.
Or looks like it.
And if you're a regulator, actually, a big part of this is the public piece, coming on the heels of, you know, the techlash and the social media
boom, it seems to me, anyway, as an observer of this stuff, that what the public wanted and what
Congress wanted was someone who did not come off as really, you know, blustery and charismatic,
but someone who would come to them and say, I'm terrified of this thing I'm building, please
put me in regulatory handcuffs.
One thing that I was marinating on as I was reading it, thinking about his traits: he's a pleaser, extreme self-confidence, and what were the other traits? A sociopathic lack of concern for consequences that may come from deceiving someone.
These are quotes from other people, to be clear. Yeah, yeah. The LLM has all those traits. The LLM is a big pleaser, has irrational confidence, and randomly hallucinates lies. And I don't, maybe there's no connection there at all, but I felt like I wanted to at least mention it. You know, sometimes the creation reflects the creator.
When I think about a topic that I want to spend a year and a half of my life on, and this one in
particular, I think for both of us, had a lot of sweat and almost or perhaps even literally tears
into it. It's incredibly complex. And you can imagine how pressurized the environment around this
is and, you know, the amount of pushback and just a piece like this is a heavy lift and incredibly
detailed and ambitious. And we both wanted to get it really right and fair. For me, all of that
flowed from the fact that this, again, felt like a bigger inflection point, that the critics who
allege that these things we document in Sam Altman are actually signifiers of a wider race to the
bottom on safety and maybe on honesty in general in American business and in Silicon Valley in
particular, that felt important to me. And I think the way in which the development of these
large language models has progressed, where they're trained based on human feedback.
and human beings like frictionless responses that mirror back what they have said,
better than responses that challenge them and say, no, that's incorrect.
So you wind up with these two phenomena, which we talk about in the piece of sycophancy,
you know, these models just parroting back things that you're going to like hearing,
and hallucination, where they fill in gaps by making stuff up.
And these are traits that, you know, are very troublesome to root out of the technology.
And in the case of sycophancy, especially, we've talked to computer scientists who really frame this as it's accepted as a necessary cost of doing business.
There's a feeling that these frictionless answers help retain users and keep them on the hook.
So I do think that that is a metaphor of some consequence.
I mean, it works.
We all know people like this.
Like, I know pleasers.
There's no shortage of people that have these traits.
No, in gay, in the gay life.
I can say it. The gays can say it. We have a lot of narcissistic pleasers in gay world, you know,
you're right to point it out, Tim, because, you know, I will say, as Andrew pointed out, we didn't go into this looking for something. In some pieces I've done, I have a specific lead about criminal activity, and the kind of moral shape of the piece is clear fairly early on, even if, of course, then I'm testing those assumptions all the way through. This was a case where we really parachuted in and looked at what are the biggest unanswered questions
and can we examine them forensically and fairly. And further to that, as I was dealing with Sam
over and over again in this, I really felt a great duty of care to be incredibly fair to him.
And part of that was because I did feel, you know, a kinship in many ways. Some of the traits you
talk about I really get as a gay man. He talks about this in the piece. And to be clear,
he's very quick to dismiss any link between his gayness and this kind of like best little boy in the world phenomenon that we all know about in the gay community.
But he fit an archetype that is familiar to many of us.
He was hyper, hyper ambitious, hyper focused on winning.
That can come from a number of places.
But he does in one moment talk about having been beaten up as a kid.
We couldn't find records of this.
He didn't report it to anyone, but he does mention it.
And then he kind of wheels it back and says, well, maybe that gave me an enduring desire to please in a way that I haven't examined and don't get the significance of. But I'm sure I'm not just missing the significance of this.
So we relate all of that in the piece, but I think it's telling the portrait of him in this
is actually undertaken with a lot of sympathy and care. And I think it's reflected in their
response. You know, they are in this position right now of doing kind of two things. They're trying to obviously downplay the piece, but they're doing it softly, because they're also relying on the piece in legal filings now. They're, in their fight with Elon Musk, relying on it for a number
of assertions about Musk's competitiveness. This is a sensitive politics podcast. So I've got a whole
section of politics topics as it intersects with Sam, but also just AI broadly and Elon.
The first is what he said after Trump won, which was: watching @POTUS more carefully recently has really changed my perspective on him. Parenthesis: I wish I had done more of my own thinking.
You know, that struck me as somebody really trying hard to appeal to Trump and people around Trump,
like this notion that, you know, he got caught up in the woke mind virus.
And he just, like once he started using his own thoughts, he realized he saw the genius of Trump.
I don't know when your last interview was with him, but things aren't going quite as well with Trump, maybe, as when that tweet was sent.
I'm just wondering if you asked him about that,
what his current thinking is about the administration.
Andrew, I mean, I think for me the big takeaway is
this is yet another area where there were very clear assurances
and statements of principle and then very different conflicting statements.
Well, first of all, I notice you only asked Ronan for his thoughts about the gay community,
but I'd like to offer some of my, no, I'm just kidding.
Oh, please.
No, I'm just.
Let's get some straight commentary.
Yeah, exactly. I'd like to straight explain.
Yeah, we want to know how allies see us from the outside.
Exactly. That's what I'm here for.
No, I actually think that the Trump stuff is a key example of this, right?
Like, you have someone who, and this is not unusual for his milieu, was a very stalwart donor to Democrats and Democratic PACs for many years.
He has, you know.
Is his milieu also a gay commentary or is there some other part of his milieu?
That's actually Silicon Valley liberals, who were, you know, a thing until a couple of years ago.
And he says, actually, in the piece, you know, I am very worried about the rise of autocracy,
which he says, that's not a gay thing, that's a Jewish thing.
But then suddenly his fears about Trump, who he has alluded to as, you know, compared to Hitler
and all these things in the past, as you say, they kind of go away after it seems like
Trump is going to win.
And there's just something about this that, I'm sorry, I'm caught up on. I know you discussed it at length, his relationship with Trump, and we can talk about it more. But there's something that I just really get caught up on, because it ties to his personality, just this notion that, like, he's really changed his perspective on him.
Like, he's watched him closely.
Like, that's, like, Tim Cook has shown sycophancy to Trump, right?
But, like, in a more...
I see what you're saying.
I don't know.
I don't know.
Just, and I'm more like, hey, we can do...
I can do deals with him.
I can work with him.
Like, this was, he was trying to say that, like, he really had changed his...
Like, that he had thought about it and he'd been wrong about something and that Trump had
won him over.
The rest of that tweet, which we left out because it wasn't worth sort of explaining to the New Yorker audience was,
I've thought about it more and I realized I really fell into the NPC trap, right? Which is this kind of meme-speak for, like, I was acting as this non-player character.
So it's exactly what you're saying.
Yeah, the guy that created the biggest AI company in the world.
Right.
compares himself, like, tries to diminish himself and be like, no, I was just, you know, an NPC.
I was just a mindless person who went along with the crowd.
Like, that's a pretty concerning trait, though, about himself, that he self-analyzed, by the way.
But anyway, to go into that with Trump,
I mean, that is just a level of suck-upitude
that is, I think, even a little bit higher
than we've seen from other people.
Obviously, we're all witnessing
what is happening in Silicon Valley right now,
right, at a time when Silicon Valley
is the center of gravity and the economy
has essentially all of the levers of power in Washington.
With AI specifically, AI money is flooding politics. It's little surprise that we see such anemic pushes for federal regulation on this that could
actually meaningfully slow development in the name of safety in the way OpenAI initially
committed to. Because if you are running for office in this country right now, you know,
you're contending with a whole new economy of AI-driven PAC money. We talk to people,
even in the camp of Altman's defenders, who really just said what I think the reality is, which is, like, Sam isn't actually Trumpy. Come on. This is a guy who wants to win. And right now, the, as you put it, suck-upitude is an avenue for
winning. And I think it's just, you know, it is understandably dismaying. Doesn't show great
judgment, maybe, though. Yeah. Well, and the other thing about the material consequences of it,
yeah, we couldn't get suck-upitude past our copy editors, unfortunately. But the other, the material consequence of this is directly tied to the thing we were talking about before.
Do we proceed with caution and sort of tie our hands regulatorily, if that's a word?
Or do we kind of go full speed ahead?
This transition from Biden to Trump, it's not only about the rhetoric and how he justifies it,
he spent, according to our reporting, all four years of the Biden administration working in public and behind the scenes to say,
you guys are doing a lot, but you're not doing enough to regulate us.
You need to be more aggressive with your EOs and all this stuff.
And then literally on the first full day of the Trump presidency,
all the shackles are off, and they announced this plan to launch, you know, the biggest infrastructure investment in history.
And his line since then has been, what a refreshing change,
what a pro-business president.
You know, I'm so glad that the woke regulators are gone.
And this is, you know, I think the crux of the problem, Tim:
it is a situation that we're witnessing now
where the organizations that are best positioned
to understand the danger,
and it seems when they were talking about the danger,
we're saying we've got to go slow
and prioritize safety,
are also the ones with the financial incentives
to downplay that danger and rush past it
and ask for forgiveness, not permission.
So it is really truly a situation
where strong governance and a meaningful regulatory framework is needed.
And the environment for that just doesn't exist right now.
Yeah, a couple of political questions related to that.
So one is, now they're giving a ton of money. I mean, just in addition to Sam now, like, being big friends, like, pretending to be friends with Trump, traveling with him, talking about how he's seen the light. Greg Brockman, who you mentioned, his second-in-command, donated $25 million to a Trump super PAC, $50 million to a separate super PAC going after anti-AI candidates.
Maybe this is the naivete of a former Republican who didn't come up in Democratic world to think that this might be possible.
But I don't know.
This feels risky as far as backfire is concerned.
I mean, I just think that the potential backlash against these guys in this moment, you know, as Trump's popularity is fading, seems like it has to be real.
And they don't know. Maybe they're just like, hey, we're living this one day at a time and we'll deal with the future when the future comes in 2020.
But it's a pretty significant gamble and bet it seems like they're making, particularly in the context of what you see happening with Anthropic, with Claude and with Dario.
Well, yeah, I mean, you're already seeing some consumer backlash, right?
After the Pentagon thing, which we can get into in detail, Anthropic kind of emerged, I think to many people, looking better, and OpenAI emerged looking worse. You did see a big sort of delete-ChatGPT moment.
I have to say, like, a lot of these people, I think,
take their own rhetoric seriously enough
that they think after 2029,
the world will be permanently altered by superintelligence
and all of this will be different.
So to the extent that you see people going all in now,
I think it's because they really think,
like, this is the decisive time
and whoever grabs the ring now will own it forever.
Tim, I think there are people around Sam Altman who also make the same argument that you make, that some of these moves are
strategically risky. And I think when you talk to particularly the set of people who maybe aren't
safety-pilled, and they're not rendering this in apocalyptic terms, but they still think that
the amount of dissembling on evidence in Altman's record is a problem. One of the ways in which
they consider it a problem is if you have that trait, there can be a lot of rationalizing
and a lot of not reflecting on whether your assumptions are right, right? You see over and over
again, Sam Altman seemingly uncritically believing these conflicting things as he's saying them.
And, you know, when he talks about these alliances with Trump, with some of these
Middle Eastern autocrats, you know, with MBS, we have a whole tranche of reporting about his geopolitical activities where people around him were saying, like, hey,
you know, MBS just chopped up a journalist with a bone saw. You can't like be on a board associated
with him. But he's single-minded about his mission. He wants that Middle Eastern money, as many in
Silicon Valley do. What is distinctive to Sam is he, I think, is very resistant to those
cautionary notes around him because when he says he believes a thing, even if it conflicts with other
points of evidence, he shows over and over again that he runs with that. So on Trump and with some of
these other alliances, he makes the argument that that provides him more access and that that's
useful and the old chestnut of, you know, it's better to be on the inside to try to help. And I think
he's able to convince himself of that, if not others around him. And also,
business with Sheikh Tahnoun, the UAE family member of the autocrat who bought into Trump coin.
This is a big story about the amount of money that he put into the Trump family.
I just, part of me would be, there's a consumer backlash, but you might look at this and say,
you know, hey, if the Democrats take back control, there's going to be investigations into this
type of stuff into the nature of my relationship here.
And this could be risky.
Maybe, though, there's reason for him not to worry about that, looking at how toothless the Democrats were when investigating the last generation of tech leaders.
And then with regards to the regulation bill in California,
I was interested in that.
I didn't follow that story that closely, but it seemed like there was a popular piece of AI regulation in California
that Newsom ends up vetoing.
What happened?
What happened there?
I mean, we have reporting suggesting that a lot of investors, you know, we have people in our piece saying that Ron Conway, who's a powerful Silicon Valley investor and an Altman loyalist, lobbied Newsom and Pelosi to come out against that bill.
This is the standard stuff of politics, but again, the thing that makes it unusual is that the public posture of Altman and Open AI is we support all regulation.
And then behind the scenes, we document a lot of cases where they're doing precisely the opposite.
I mean, a kind of middle path, I think, between the most sci-fi, you know, the universe will be tiled with supercomputers and will take over galaxies and the most mundane, okay, you know, business is full of dissembling. What did you expect? I think this regulatory stuff and geopolitical stuff really is kind of the middle ground between the two because the amount of power internationally and domestically that you can consolidate in the next couple of years, even if you do that in a way,
that causes your core audience to have all these doubts about you.
And even if you kind of are, you know, unmasked to a certain audience as, you know,
seeming to be hypocritical, I think the bet of a lot of these guys, and this is not just open AI,
but a lot of them is if we have direct deals with the Emirates, the Saudis, to some extent,
with the U.S. government, that will be powerful enough for our game plan that the consumer piece of it
won't really matter.
And to some extent, maybe the regulatory piece won't even matter because we'll be so far out ahead of it.
I want to talk about Elon being awful, but we're running out of time.
I think we should focus on more important things.
We all know that Elon is awful.
I should just mention that probably the grossest behavior in the profile after we spent all this time talking about Sam was the way Elon seems to be spreading lies about Sam.
And you guys debunk some of that in the piece, which folks could go read.
I'm just more curious, like, since you did all these interviews, you know, I moved from the Bay.
So I'm here in New Orleans.
I'm not around these guys anymore.
So I'm not hearing the scuttlebutt as much. Like, what do these guys all really think about the tech?
Because on the one hand, I look at it and I'm like, you know, these are pitchmen,
their PR folks, like talking about the catastrophic risks and talking about all the jobs
going away is also a way to get investment.
And I had Reid Hoffman on.
Like, he thinks that this is a great product that has some downsides.
But, like, also there's all these opportunities.
On the other hand, the safety guys, when they quit, there's this trend of them, like, writing notes that are kind of dystopian, that make them sound like they're dystopian prophets who are planning to live out their days in monkish solitude.
And like that's a little alarming for some of us
when you see like a sequence of those resignations from safety guys.
So how did you guys, when you're actually interviewing the safety guys,
interviewing the executives, like how do you balance that in your head?
We document this range of opinions, right?
And in some ways this reflects the collision of businessmen and scientists in this arena.
There are the safety people who leave and write these doomy notes, as you talk about, and they're really scared.
There are the business people who are full accelerationists, and there are people on both sides of the business and science aisle who have a range of these opinions.
I tend to find very sober appraisals, some of which have been made in public, so we can talk about them, from people like Demis Hassabis, who talks about the immense potential, both in terms of
dangers and in terms of upside, right? This isn't totally vaporous. This is already technology that is
changing medical diagnosis. You know, it's helping catch cancers earlier. It's helping with weather warnings,
you know, that can save lives. There's nuts and bolts things happening that are material.
Hassabis is one of the people who says, yes, both the potential and the risk are real, but also
some of these projections are way farther out. And there's a significant
contingent of scientists we talked to who share that view, you know, that the kind of pitchman
hype machine that you get from some of these leaders in the field is talking about certain
kinds of developments, you know, when Sam Altman says, we're almost there, we've cleared the horizon,
we're going to be on other planets, we're going to cure all forms of cancer. In recent days
around the launch of this piece, he was again kind of sounding similar notes of, you know,
we're all going to be in a superabundant utopia very, very soon. I think the more sober folks in the
industry tend to say that even if some of these potentials exist or some degree of those potentials,
it may be farther out. That's consequential in terms of the risks because the whole economy is
propped up on some of this promise, you know? And that is an older, wider Silicon Valley story.
You know, people building companies and inflating valuations on hype and future projections
long before they are offering a product of value. That's happening now on a massive scale with much
higher risks. Andrew, I don't know if you have any thoughts to add on that.
That basically seems right. Look, there's so much uncertainty here, including from the people who
are building it. They don't really know what it is. They don't really know if we're going to
build superintelligence in six months or 60 years. Like, nobody really knows what's going on.
I think the thing I would say is these guys are constantly comparing themselves to the Manhattan
project, right? When you see the Oppenheimer movie, the moment that sticks out to me is, you
know, Matt Damon, the general who, you know, is supposedly in control comes and tours the thing and
they're about to do a test. And they say, oh, just so you know, there's a slight chance that when we do
this test, it might ignite the atmosphere and destroy the world. And he's like, wait, there's not a,
there's a slight chance? Like, how slight are we talking? And they're like, yeah, well, you know,
0.0 or something. And he was like, I was really hoping you would say a 0% chance. And so I think
if there's anything other than a zero percent chance of catastrophe, whether it's, you know,
economic catastrophe, material catastrophe, it actually is something that we need people to take
seriously. And I just don't think we're seeing a high level of seriousness.
And one last thought on the high level of seriousness. You mentioned the Elon stuff and this
kind of mud fight going on behind the scenes, reporting on the safety concerns and the allegations
of lying and some of these critiques with more substance and getting a full face blast of
you cannot imagine the number of people calling with this pedophilia allegation and all this kind
of personal stuff, which we spent months looking at and I interviewed, you know, all the people
supposedly linked to it. And it really does seem to be untrue. I think what is significant about
that is the people with the fingers on the button, there are valid questions about whether
we should trust them with that responsibility. They're engaged in a no-holds-barred mud fight.
There are very few standards. We don't have the right kind of oversight. And while there are these
existential stakes, they're at each other's throats. At times, in my view, like children. So this is
something that I want us all to be aware of and tracking more closely than is happening right now.
I do have one last thing. I apologize, Andrew, it is just for Ronan. Speaking of existential stakes,
it's your mother's role in the 1974 Great Gatsby. No, it's the Iran threat this morning.
I mean, you worked at the State Department.
You wrote about the first Iran deal in your book.
Trump this morning is talking about threatening to kill Iran's whole civilization tonight.
Iran, right before we got on, said they're closing all diplomatic and indirect channels of communication.
I'm sure you're just watching this with interest, maybe not as a reporter hat.
I'm wondering if you have any thoughts.
Yeah, both in my own background at the State Department and then actually wearing my reporter hat,
because I wrote a book where I interviewed, at the time, every living secretary of state
about the decline of diplomacy and militarism taking over American foreign policymaking.
This is a nadir of that that I think none of them could have expected at the time.
There is a reason why we empower a whole cadre of professionals
to study other regions of the world that are geopolitically sensitive to all of our safety,
to engage delicately, to try to come to the table and make deals. That is still possible.
Actually, in this very era of history with Iran, there were deals on the table that were being
advanced by other international partners.
And to see the collision of the kind of mania of this administration and those very combustible
geopolitical circumstances, and the falling away of all of the infrastructure that might
save lives in a situation like this.
It's capricious and it's wanton.
And you don't need me to tell you that.
I think everyone is seeing it.
Well, all right, guys.
Well, appreciate all the work you did on this.
We'll leave it there.
Yeah, you know, just some light fare, you know. Possible annihilation via AI, why two
man-children have a food fight about their personal grievances.
I worked in a little, a little, it's helping with cancer diagnosis, you know.
There you go.
We're doing the best we can.
We've got a little glimmer.
Appreciate you guys so much.
And we'll keep an eye on the next thing you're reporting on.
All right.
Thank you so much, Tim.
Thanks.
Thanks, Tim.
Thanks, guys.
The Bulwark Podcast is brought to you
thanks to the work of lead producer Katie Cooper,
Associate producer Ansley Skipper,
and with video editing by Katie Lutz,
and audio engineering and editing by Jason Brown.
