Hard Fork - Anthropic’s Cybersecurity Shock Wave + Ronan Farrow and Andrew Marantz on Their Sam Altman Investigation + One Good Thing
Episode Date: April 10, 2026. This week, we look at the cybersecurity threats that a new unreleased model from Anthropic is posing to software everywhere. And we ask whether Project Glasswing, the company’s bold new defense initiative, will give tech companies enough of a head start to secure the web. Then, we’re joined by Ronan Farrow and Andrew Marantz of The New Yorker to discuss their blockbuster new profile of Sam Altman. And finally, we look to the skies for this edition of One Good Thing. Guests: Ronan Farrow, investigative reporter and a contributing writer to The New Yorker. Andrew Marantz, staff writer at The New Yorker. Additional Reading: Anthropic Claims Its New A.I. Model, Mythos, Is a Cybersecurity ‘Reckoning’ Why Anthropic’s New Model Has Cybersecurity Experts Rattled Sam Altman May Control Our Future — Can He Be Trusted? Artemis II Moon Launch We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok. Subscribe today at nytimes.com/podcasts or on Apple Podcasts and Spotify. You can also subscribe via your favorite podcast app here https://www.nytimes.com/activate-access/audio?source=podcatcher. For more podcasts and narrated articles, download The New York Times app at nytimes.com/app. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Transcript
Casey, I got a haircut yesterday. Thanks for noticing.
Kevin, it looks extraordinary.
Has this ever happened to you? I went into the barber.
I sat down in the chair. He did not ask me what I wanted. He just started cutting.
Has this ever happened to you?
No, because they know I'm not straight.
With a straight guy, you don't need to ask them. You just get the standard haircut that a man gets.
He one-shotted my hair.
He said, yeah, I've seen this before. I know what I'm doing here.
Whereas if I walk in, it's like, okay, let me get out the schematics.
and have to watch a couple of YouTube videos.
It's also like not a barber that I've been to a lot.
So like it's not like he knew me.
See, this is exactly, it's like the fact that you just go to random barbers and will accept
whoever happens to be.
This is why they can just start cutting your hair.
Oh, who is?
Yeah, I don't know this person.
Yeah, do whatever the hell you want.
See if I care.
Yeah.
That is the straight approach to hair.
But it's working great for you.
Thank you.
Yeah.
Appreciate it.
I'm Kevin Roose, a tech columnist at the New York Times.
I'm Casey Newton from Platformer.
And this is Hard Fork.
This week, the dangerous new AI model that has cybersecurity experts on high alert.
Then, New Yorker writers Ronan Farrow and Andrew Marantz join us to discuss their spicy new profile of Sam Altman.
And finally, it's time for one good thing.
Although I guess really there are two things in the segment.
Yeah, we should really rename the segment.
Okay.
Casey, we have a big announcement.
Kevin, what is the announcement?
We're ending the show.
No.
You're finally free, America.
Yes. No, on June 10th in San Francisco, we are doing the second ever installment of Hard Fork Live.
It's too fast. It's too furious. And it's happening.
I tried to get them to let me call it Too Hard 2 Fork, but they decided that was not appropriate.
Kevin, where can people get more information about Hard Fork Live 2?
Okay. It's happening on June 10th in San Francisco at the Blue Shield of California Theater.
bigger venue than last year.
Tickets will be on sale at nytimes.com slash events.
Not today, but next Friday, April 17th.
So we're giving you a full week to get your act together,
reach out to all your friends,
use Meta AI to plan a trip to California.
Use Claude Code to build your scraper bots
to scoop up all the tickets.
And on Friday, the 17th, you can buy tickets.
And we will just say in advance,
Last year, the tickets did sell very quickly.
They did. So get in there quickly if you want to go.
There would be more tickets available, but Kevin reserves 50 for, quote, his team, which I don't even know what all these people are doing at this point.
But they'll be there and say hi to them too.
So get your tickets next Friday, April 17th at NYTimes.com slash events.
Well, Casey, as you know, on this podcast, we have a rule about discussing AI models called ship it or zip it.
Ship it or zip it.
Unless you're actually putting it in people's hands, we usually do not want to hear about it.
Yes, but today we are making an exception for the new Anthropic model, Claude Mythos preview,
that just was announced but not released for reasons that we will talk about.
But first, since this will be a segment and a show about AI, our disclosures.
I work for the New York Times, which is suing OpenAI, Microsoft, and Perplexity over alleged copyright violations.
And my fiance works at Anthropic.
Casey, this is, I want to say, like, the biggest story of the year in AI.
I know there's been a lot of AI news.
I know that people are probably saying, oh, here they go, talking about another model again.
I am telling you this is something that people need to be paying attention to because of the
implications, because of the way it was rolled out, and because of the model itself, which we
will get to all of that.
But do you agree that this is a big deal?
Well, you know, when we were talking about the show this week and we were kicking around
the idea of like, hey, exactly how big do we think this is? You pointed out that one question
people have been asking this week is, are we going to have to rewrite all software? And I feel
like usually when folks are kicking that question around, it's a big story. Let's just talk
through what was actually announced this week. So on Tuesday, Anthropic announced that it was
starting something called Project Glasswing. The name Project Glasswing refers to the
glass wing butterfly, which has transparent wings, and so it can hide in plain sight, and that
is thematically important for reasons that we will come back to. It's also a delicacy in some
countries. I've never had glass wing butterfly. Oh, you've got to try it. So notably,
they are not releasing this model to the public because they claim it is too dangerous to do that.
Instead, they are giving access to a consortium of tech companies, including Cisco and Broadcom,
makers of internet infrastructure, as well as Microsoft, Apple, and Amazon.
Basically, every big tech company that is not OpenAI or Meta is getting access to this model,
but not general access, just access to do defensive cybersecurity testing,
basically to go out and harden their systems and their infrastructure and their software
before the general public can get its hands on this model.
So what are some examples of what Mythos was doing in training that so alarmed Anthropic that it came to this point?
So Anthropic has been running this model internally for several weeks now, and they claim that
this thing has found vulnerabilities in every major operating system and web browser.
They gave some examples that have already been patched.
One of them was that this model apparently found a 27-year-old security flaw in OpenBSD.
OpenBSD is an open source operating system that runs on firewalls and routers.
It is sort of like a critical security layer on the Internet.
And it was designed specifically to be hard to hack.
Right.
And this model, because of its advanced coding and reasoning capabilities,
was able to find this bug that 27 years worth of professional security researchers
had not been able to find.
What else?
Another example was that it found a
bug in a piece of popular open source video software called FFmpeg that had, according to Anthropic,
been scanned for bugs five million times by automated security tools without finding this
critical exploit. And that's why it's important to always look the five-million-and-first time,
because you might find something. Now, Casey, I think for people who are not cybersecurity experts,
it might be worth sort of sketching the context here for like how software works.
Yes.
So, you know, every piece of software, every operating system, every app, every web browser that people use,
is built on a mixture of tools.
Some of those tools are proprietary to the companies that make the software.
Some of them are sort of shared open source tools that are just in everything.
Companies will just grab this open source thing and plug it into their thing.
Because that's compatible with everything else, saves you a lot of time and trouble.
It's already been security tested by decades, sometimes, of researchers, and these open source
software projects are sort of a big piece of the foundation layer of the internet.
What is happening now, according to Anthropic, is that they can basically use this model,
Claude Mythos preview, to sort of proactively go out and find all of the unfound bugs,
which they call zero-day exploits, with a sort of speed and
efficiency that no human security research team could match.
Yeah.
And, you know, I would say that it can be difficult to talk about cybersecurity in a way that
resonates with people, for a couple reasons.
One is just that cybersecurity as a field exists essentially almost entirely to alarm people
and say, here are a bunch of problems and these are really scary.
You know, I hope that folks in the cybersecurity field would not mind me saying, like,
it is just like kind of an alarmist profession and that when I've talked to these people over the past
15 years, they've been telling me like, look, the entire internet is held together with spit and glue
and we're very lucky that there hasn't been a catastrophe yet, okay? So after all of this news came out,
I was like, I want to talk to some people who are at least not working for Anthropic or this
consortium to try to give me a gut check on how big a deal this is. And so I talked to Alex Stamos,
who formerly led security at Yahoo and then Facebook. And Alex said like, yes, this is a
big deal. And he was hoping for a long time that we would see a consortium come together like this
because of exactly what you just said, Kevin, the intelligence in these machines and their ability
to work autonomously are now great enough that they can chain together exploits that human beings
either would never see, would take them a long time to see, or they would just never get to
because we're limited in ways that these machines are not. So that got my attention.
Now, we should also talk about, like, what the strategy is here from Anthropic, because I think a lot of people see an AI company that is known for sort of being alarmist about safety, say, we've created this powerful, spooky new model, and we're not going to show you because it's too powerful and spooky as some kind of marketing tactic.
So I think we should just say, like, that is not, to my understanding, the case here.
No.
In my mind, it is obvious why.
like if you're a corporation and you release a tool and people with no real technical expertise
are able to use it and within a few hours discover a novel exploit in the Linux kernel
and then take over other people's machines to cause crimes, you might be held liable as a
corporation. You will get in trouble. There will be congressional hearings. So companies just in
their rational self-interest do not want to sell cyber weapons on the open market. Yes,
It's also like if this was a marketing strategy, it is a horrible marketing strategy.
Like the government already thinks you're a bunch of panicky doomers.
You have a new model that you claim is the most powerful model in the world.
So instead of selling it, you give $100 million of Claude credits away to a consortium of
companies that includes many of your competitors, which is what Anthropic is doing.
That is not how I personally would market a spooky new model if I were in the business
of marketing, spooky new models.
Now, look, it may be that despite everything that we just said,
there is still some marketing benefit to Anthropic from doing this, right?
Like, we know that they saw a huge increase in their revenue
after they took that stand against the Pentagon.
And in that stand, they said, like, we are determined to do things in a really safe way.
It seemed like the business world really liked that.
And so I could imagine there being a business benefit to Anthropic of coming out and saying,
we have the most powerful model in the world and we're not releasing it.
Like, yes, I'm sure that there are.
plenty of businesses that are salivating over the chance to get their hands on it.
But they can't unless they are part of this consortium.
So they are at least claiming that they are trying to get ahead of what they envision
will be a "reckoning" for cybersecurity, which was the word they used.
And it seems plausible to me that in the next kind of six-ish months, every major
piece of software in the world is going to need to be patched, rewritten,
and re-released. So just an absolutely massive project. Let me ask you this. You know, Alex Stamos,
the security expert that I mentioned, told me that he sees essentially like two broad possibilities.
One is, and this is the good scenario, there are a finite number of critical bugs and vulnerabilities
to be found, and that maybe if we all work really, really hard over the next six months or however
long it turns out to be, we will be able to patch those vulnerabilities and our infrastructure
will remain safe and stable. The other possibility is that this model is already good enough
that it can just simply invent exploits that we never would have thought of. And so this will
essentially just be a really, really big problem that potentially just keeps growing in scope
because, you know, maybe eventually you hit some sort of true superintelligence point. So I'm curious
if you've talked to people about what they see the scenarios as being, and if you have any thoughts as to which
of those two is more likely? So I think it's possible that they will patch this sort of top 1%
of critical software, right? The stuff that everyone knows is important. Your Linux, your, you know,
your very popular open source libraries, your routing equipment and networking equipment.
Like, it seems plausible to me that a couple of companies with the right resources and the right
models could like find and fix the worst security vulnerabilities. But I also talk to people who
were telling me that it's not as simple as that, because once you get outside that kind of top
1% of critical infrastructure, there's just a lot of machines that are running on old code.
Right? So it's theoretically possible that all of these fixes could be submitted to the people
who maintain these software projects, but that A, there aren't enough humans to review all of
the proposed bugs and fixes, so there's just sort of a human
bottleneck there. Or that there is just a lag in the time between when a piece of software is
patched and when the person running the router at the medium-sized business in Tulsa
decides to update the firmware or install the security patch. So people can expect a lot of
apps that are asking them to update their software or reinstall their software over the next
few months. I've started getting a few of these already. Have you started getting these?
Yeah. Yeah. So I think this is going to be a kind of forced reset for the entire cybersecurity
industry and a very significant event in the history of technology. Yeah. Well, and just to make it
concrete, like we are currently at war with Iran and Iran is currently hacking our critical infrastructure.
There's a story in Wired this week about them successfully hacking, like water and energy
infrastructure. Right now, they're able to do that without a Mythos-quality model. I would be
quite nervous about what they could do if something like that fell into their hands. So this really is
not an abstract concern that we're laying out. Right. And we should talk about this government
piece of this because one weird characteristic of this moment is that this very powerful,
advanced model that Anthropic claims is capable of doing autonomous cybersecurity research
and attacks was made by a company that the U.S. government has spent the last several months trying to kill.
Yeah. Yeah. And has tried to declare Anthropic a supply chain risk. They have ordered all federal agencies to stop using Claude.
And so my understanding is there have been some conversations between Anthropic and parts of the, you know, sort of national security establishment and apparatus about this model,
but it is also simultaneously true that they cannot use this model without sort of running afoul of the administration.
So a private company right here in San Francisco currently has a technology that they claim is capable of finding critical security vulnerabilities in every major operating system and web browser in the world.
And the U.S. government, to my knowledge, does not have access to this technology.
Yeah, it does seem like something that like our national security infrastructure would want to have access to.
One more piece on the regulatory front.
It is crazy to me that model development of this scale and seriousness remains essentially
unregulated in this country, right?
Here you have a private company saying, well, we have now created software that can create
so many different kinds of novel exploits that all software might have to be rewritten,
and they are not really under any kind of regulatory regime.
And the regulatory regime that the previous administration tried to put into place
was thrown out by the current one because it might harm American competitiveness.
So I just want to say that makes me really, really uncomfortable.
I think that if you're making stuff this powerful, regulators ought to be paying attention.
Yeah.
One interesting sort of historical note that I'll make here is like, for the past few years, at least,
there has not been a significant gap between what the AI companies have built internally
and what the public has access to.
You know, maybe there's a slightly better model
that the companies are working on
that they need to spend a few months testing
before they release it.
Or it runs a little faster
than the one that you have access to.
Yeah, but there has not been kind of a significant gap
since, I think, GPT-2, which was in 2019,
which involved some of the leaders of Anthropic
who were then at OpenAI,
who made a decision to hold back this model,
GPT-2, out of fears that it could
be used for things like automating propaganda and misinformation.
Right. In reality, it could barely write a limerick.
Yes.
They erred on the side of caution.
They did. And they got a lot of crap for that.
People sort of said, oh, you're using this to hype.
Some of the same stuff we're hearing this week about Anthropic.
And I think in that case, they were probably a little over-excited about what this model
could do, but they wanted to make sure that they weren't wrong.
And so they held this back, and that created a gap of at least a couple months to maybe a year
between what the average person could see and what was happening inside the AI labs.
That gap is now open again.
There is now a model that you and I cannot use, that our listeners cannot use
unless they work in cybersecurity defense at one of these companies,
and all we have to go on is what the AI companies are claiming.
And I think that is just a very tenuous situation,
and I don't like it, but I also understand,
why I think in this case this was the right decision.
Well, what do you mean when you say that it's tenuous then?
I think as hostile and suspicious as people feel toward the AI industry, that only gets
worse if they think that there are secrets being kept in a basement that they can't access.
And I think that it creates paranoia and fear.
I think that it is generally responsible to have transparency from the AI companies about how
capable their models are. And I understand in this case that Anthropic felt like it had to make an
exception, but I think this gap may be here to stay is the thing that I'm wondering about.
I think it probably is. I mean, it's worth saying that Anthropic was founded on the idea that
if it could build models that were at the state of the art, at the frontier, that it could have some
influence over that frontier and it could guide it to a safer place than it otherwise might have
gone. To me, the Pentagon fight and now Mythos are examples of that thesis in action, right?
Where it made the best model and that gives it some room to try to do a little bit of good.
So, you know, blocking domestic surveillance and autonomous weapons for a little while or
preventing bad actors from getting their hands on, you know, tools that could create novel
exploits. At the same time, in order to do that, they had to build the model in the first place.
And there is a risk that there is some sort of, I don't know, intellectual property leakage,
that somehow all of the innovations that they're building are going to trickle down into other
places. And my fear is just that it becomes this sort of self-fulfilling prophecy, right,
where we have to build this frontier, even though it's dangerous and we're going to guide it
to this safer place, but, you know, you did build the thing in the first place. So I just like
reminding people of that tension, because it is not actually inevitable that we build these systems,
and yet we do often act as if that were the case. Yeah. Last thing, a lot of the people I know
who are plugged into the cybersecurity world are being asked right now what people should do
about their own security. If they are worried that models like this will become public,
should they be, like, locking down all their accounts and moving their cryptocurrency into cold storage?
Like what do you think people should be doing in anticipation that something like this will become public?
You know, it's funny. I had a friend ask me that just this morning as I was preparing for the podcast.
And I said, you know, a couple of things. Like one, to some extent, we're just going to have to wait.
I mean, to the extent that any of what we've just described is good news, it is that the defenders appear like they're going to have some runway to fix some really
bad problems before the bad guys catch up. So I think we should give them a little bit of room to
see what they can do. If it does emerge that there is a similar model that can wreak havoc,
like rest assured there will be segments about it on Hard Fork and we'll have some updated guidance.
But I asked my friend, do you have a password manager, and do you reuse passwords?
And she said, you know, I've never really been able to get one of those password managers to
work for me and I do sometimes reuse my passwords. So I said, like, look, if you're looking for
something that you can do, just make sure that you have done your basic online cybersecurity hygiene.
You should use a password manager. I use 1Password. There are many others out there that are
just as good. Don't reuse the same password for anything. Your passwords should be randomly generated
and not, you know, the name of your pet or whatever. And then use multifactor authentication where you can,
right? So don't let anybody get into, like, your Gmail or your
banking account just by typing in eight letters. You should also be using an authenticator app.
And so those are some of the basic things that I would tell people to do, Kevin.
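[Editor's note: for readers who want to put Casey's "randomly generated" advice into practice, here is a minimal sketch using Python's standard `secrets` module, which draws from the operating system's cryptographically secure random source. The `generate_password` function name and the 20-character default are our own illustrative choices, not anything mentioned on the show.]

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Return a random password drawn from letters, digits, and punctuation.

    Uses the `secrets` module, which pulls from the OS's cryptographically
    secure random source, rather than the predictable `random` module.
    """
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# Example: print one freshly generated password (different on every run).
print(generate_password())
```

In practice, a password manager does this for you; the point of the sketch is just that "randomly generated" means drawing every character from a secure random source, not inventing something memorable.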
Yeah. I am planning to deal with the possibility of a massive cybersecurity breach by just
sort of selectively dribbling out incriminating things about myself.
Okay.
Just sort of trying to get ahead of any hacks that might expose my, you know, emails going back
decades or anything like that. So I'll just say in that spirit, I used to like the Black Eyed
Peas.
And I still do.
Let's get it started.
Now, that was a critical vulnerability that I just exposed.
When we come back, we'll talk to New Yorker writers Ronan Farrow and Andrew Marantz about their investigation into Sam Altman.
I also sent them some stuff about you.
Oh, boy.
Well, Casey, the talk of the town in San Francisco this week has been, well, there have been two talks of the town.
One, we already covered in our A block.
That was the Claude Mythos news.
This town conducts multiple conversations at the
same time. We're amazing at multitasking. The other big talker this week has been this big
piece in The New Yorker about Sam Altman. Yes, more than 16,000 words devoted to a question that has come up
once or twice on Hard Fork, Kevin, which is: can Sam Altman be trusted? Yes, the writers on the
piece are Ronan Farrow, famous for his work on the Harvey Weinstein investigation and others,
and Andrew Marantz, who is a good friend of mine and a long-time writer at The New Yorker,
they worked on this piece for a very long time, talked to many, many people in and around Sam's orbit,
and tried to answer the question of, like, who is this guy?
Yeah, and also, why does that matter, right?
We're talking during a week where these systems have arguably experienced a step change in what they can do,
and I think those kinds of advances just naturally should draw more
scrutiny onto the people running these companies. What do we know about who they are, how they
operate, are they honest with each other? And this piece offers one of the more comprehensive
portraits that we have had so far, I would say, on that question. You know, Ronan Farrow investigating
you has to be one of the scariest experiences. You pick up the phone. It's like, hi, it's Ronan.
But it seems hot, too, you know. That's what everyone wants is just a really handsome man asking
them a lot of questions, you know?
Okay.
So let's bring in Ronan Farrow and Andrew Marantz.
Ronan Farrow and Andrew Marantz, welcome to Hard Fork.
Thank you guys.
Happy to be here.
I mean, truly long time, first time.
And in fact, I brought receipts to that effect.
This is your show.
You can take or leave this in the edit.
But I wanted to show what a devoted long-time fan I am of Hard Fork.
I know the show well.
I know you guys like merch.
And I know you guys like disclosures.
but you don't have any disclosure merch
to my knowledge.
So I had these made for you.
Come on.
One for each.
One for you, one for you.
I'm going to put it in the mail
after we get off.
One of them says,
I work for the New York Times,
which is suing Open AI,
Microsoft, and Perplexity
for alleged copyright violations.
The other one says,
and my fiancee works at Anthropic.
Oh my gosh.
That is amazing.
So I mean, time limited.
It's going to be a time capsule.
But I mean,
I mean, made at the print shop in Brooklyn, one of a kind exists nowhere else on earth.
That's incredible.
You are a hero.
Is this payback for when I gave you a hat at your wedding?
And I gave you one at your wedding.
That's true.
We have a sort of a theme going on here.
Okay.
Right.
Well, and that's also our disclosure, which is that Kevin and I are buds and have known each other forever.
So actually, Casey, you can come to me anytime.
I know you guys like to rib and roast on the show.
so you can come to me behind the scenes for any roastable Kevin material.
My dream has been to get the New Yorker to investigate Kevin Roos.
So you guys really could not have come along at a better time.
We're on.
Don't tempt us.
I'm not picking up the phone.
Okay.
Let's talk about this big piece that you both just published in The New Yorker.
The title of the piece is, can Sam Altman be trusted?
Now, usually there's this sort of folk rule about headlines that end with questions.
which is that the answer is always no. So I want to put this question to you. Can Sam Altman be
trusted? Well, I think one important thing to note is the piece is really forensic and even. And actually
to a point where I've been happy to see there's a range of reactions, right? There's people who have
answered that question in a very severe way and looked at the fact pattern that is laid out here and the
documentation that's laid out and said, you know, this is someone who poses an acute danger
and should be kept away from an authority position. And then there's people who, I mean,
hilariously enough, my mother called me and she's like, you know, I kind of like him.
And so I think that is a true reflection of our intentions. In this case, as you might imagine,
there was deep consultation with all of the subjects of the reporting to really understand
their feelings. And any time we thought
there was a persuasive argument from Sam or anyone else that, you know, something shouldn't make
it in or something would be sensationalist, we really carefully discussed that editorially. So the result is
very even and I would say on the question itself, what we lay out is something that is
remarkable, I'd say, even against the backdrop of the culture of mistrust in Silicon Valley,
where everybody understands and expects, right, that being a founder means telling different
audiences, different things at times to some extent, where everyone understands that the entire
enterprise is building based on hype long before there is actual actionable, deliverable product.
Even against that backdrop, there is an extraordinary preponderance of people who emerge from
interactions with Sam Altman, including close years-long ones, with really active complaints
and allegations that he lies repeatedly about things big and small.
Well, one of my favorites was when you quote him telling you that he wears a gray sweater
every day to avoid decision fatigue, and then he shows up for his next interview in a green
sweater. That felt like a really satisfying detail. That was just for you, Casey. I was wondering if
people were going to catch that. I appreciate that eye for fashion that you so rarely get in these
tech profiles. Andrew was our fashionista in the writer's room. Always. But that's the kind of thing where,
you know, we didn't want to make too much of that, right? Because it's like, oh, we caught you in
this deep hypocrisy of, you know, choosing a green sweater. Like, and this is consistent.
with a lot of the things people say throughout the piece and throughout the career of Altman
and Open AI is that there isn't this one smoking gun thing where he's like caught, you know,
with his hand in the cookie jar. It's this sort of allegedly longer, more subtle accumulation
of facts, which my kind of like glib and, you know, annoying way of describing it is like
the fabled memos and documents that were compiled that led to him being, you know, fired in
'23 and that have kind of dogged him throughout his career, they really shouldn't have been, like, a
secret bullet pointed list. They should have been a 16,000 word New Yorker piece because when it,
they only really make sense when you like lay them all out together in narrative form. Yeah, I mean,
you guys mention in your story that there have been sort of these rap sheets that have been
circulating about Sam inside open AI and other parts of the AI industry for years. One of them was
compiled by Dario Amodei when he worked at
OpenAI under Sam Altman. One of them you said was maybe circulated by some allies of Elon Musk and people
who are opposed to Open AI. So give us some sort of behind the scenes details about what is being
said by whom and how and to what ends about Sam Altman in Silicon Valley.
Well, it was really important to us to filter for the obvious competitive incentives out
there. There are people who are massively
incentivized to go after
Sam Altman. And
the reality is that
there are very firmly
evidence-based critiques,
many of which are promulgated
not just by the rivals,
although they're certainly amplified by them happily,
but also by more
neutral figures and people
who are just kind of technologists who aren't in the fight.
And then there
is the white-hot center
of the rivalry, the stuff you
mentioned that I think is in a very different category, which is, you know, Elon Musk and other
direct competitors really amplifying everything they can come up with. And in some cases,
we document things that are inflated or trumped up or just seem to not be true. So Elon Musk, in
particular, has intermediaries circulating some pretty spicy and pretty unsubstantiated material in Silicon
Valley. And we talk about that. I really appreciated that about the piece, because this has
has become more salient over the past year as these rivalries heat up and you hear more and more
of these scurrilous rumors.
And while I do think this winds up being a pretty damning portrait of Sam on the whole,
you do also point out that in some very real ways, he's the subject of a legitimate smear campaign.
Yeah.
Oh, yeah.
I think that's absolutely accurate.
And we were trying not to go in, you know, with naivete of like, can you believe business
titans are being mean to each other?
But like the level of this really does seem kind of shocking and unprecedented.
And, you know, it's kind of consistent with people who think of this as like whoever gets the ring first will control the world.
Like it just seems like all bets are off.
And so as a reporter, it's very challenging to be like, do you bring up the scurrilous rumors to knock them down?
And so we had like months of conversations about how best to do that.
So there's been a lot of reporting on Sam Altman, especially around the board coup
a few years ago, could you maybe give us like the two or three things that you think are new
and important from your reporting that rise above the rest in terms of people's understanding
of Sam Altman and Open AI? So I think there are things here that put to rest some of the
longstanding rumors, right? I mean, Altman has always said, and Paul Graham at Y Combinator
has always said he was not pushed out. He left of his own volition. It really seems from our
reporting that that was not the case. They have talked a lot about their fundraising in the Gulf
in the Middle East as innocuous. All businesses do this. It really seems from our reporting that the
relationships that Sam has cultivated with some Emirati and Saudi royals are deeper than was previously
realized. Ronan, what am I missing? There are several things like this. We just didn't really know
in full what was in those Ilya Sutskever memos.
We didn't really have the detailed, multiple-sourced, heavily documented accounts of the individual proof points that were offered in those memos.
We didn't have the contents of those Dario Amodei notes, and we didn't have a lot of these people on the record yet.
So I think actually in a way that was a disservice not only to Sam's critics, but also to Sam himself, there was a bit of a veil of mystery.
And that wasn't purely accidental.
One of the things we document that's new here is that, as a condition of the exit of the board members who had moved against Sam, whom he wanted out, they insisted on an outside investigation.
What happened there is, in my view, quite extraordinary, which is, yes, at private companies, sometimes reports of this type when a law firm is brought in to restore legitimacy can be kept out of writing.
Often it's to limit liability.
And often legal experts say it's a bit of a red flag.
This is a different kind of case.
This isn't just any private company.
This is a high-profile scandal
that engulfed Silicon Valley when Sam was fired.
And ostensibly at a nonprofit.
At a 501(c)(3), exactly.
And so there were stakeholders, not just in the public,
but within this company,
that would be the bare minimum threshold, right,
where senior executives thought,
okay, we're going to get some kind of
at least a detailed summary of
what this law firm investigation found when they invoke it to rubber stamp Sam coming back.
And instead, what happened was an 800-word press release that said there had vaguely been a
breakdown in trust and offered very few other details. And what we reported in this piece for the
first time is there wasn't a report. For years, people were like, where's the report? Where's the
report? There wasn't a report, because it was kept out of writing. And this is no longer just speculative
supposition; one of the two board members who Sam helped select, who oversaw this process,
just explicitly says, well, you know, a written report was not needed, is now their line on this.
Yeah, I'm glad you brought it up. It was actually my favorite detail in the piece,
because it was something I'd been curious about forever.
I mean, the thing that I found most interesting from the piece were the people who spoke
on the record, or at least gave you quotes, and some of them were unattributed,
about Sam who, you know, I think previously might have supported him or at least felt like
there was no upside in sort of, you know, talking about him in a negative way in public.
There was a Microsoft executive quoted in your piece as saying that there's a small but real
chance he's eventually remembered as a Bernie Madoff or Sam Bankman-Fried-level scammer.
There's another unnamed board member who said, quote, he's unconstrained by truth
and said that he has, quote, an almost sociopathic lack of concern for the consequences that may come from deceiving someone.
I haven't been on a lot of corporate boards, but I think it's quite rare to hear a board member say that about the CEO of a company.
I'm just curious, like when you were weighing these statements, did you feel like there are people who used to be fans of Sam who have soured on him?
or are these people who have really held a grudge against him for a long time?
The thing that you point out about people changing their tune over time,
I think is an integral part of what we document in the piece,
which is, you know, the fact that Sam Altman comes up through this Y Combinator world
is not incidental.
The fact that he has an investment portfolio in, by his own estimation,
you know, about 400 other tech companies.
The fact that he has sat on everyone's board and everyone has sat on his board.
I think our sort of line about this in the piece is like, we spoke to people who are Sam's friends, Sam's enemies, and given the mercenary nature of Silicon Valley, some people who have been both, right?
So given that that's the landscape, you are going to have people who change their tune as the wind blows different ways, and that's a lot of how Altman's been able to weather a lot of this stuff in the past.
One thing that results from that spread of opinions is, to your question about evolving takes on Sam,
there's definitely a class of nuts and bolts investors,
prominent people in Silicon Valley who are really pragmatists,
not just safetyists,
and who are growth and business-oriented,
who told us that at the time of Sam's firing, the blip,
they gave him the benefit of the doubt,
and especially because of the factor we talked about before
where there just was a dearth of clear information.
In that void, a lot of prominent people gave him the benefit of the doubt
and saw only upside in bringing him back
and removing the board that tried to fire him.
There are a number of those prominent people
in that category now who say,
I don't know that I would have given him
the benefit of the doubt
if I knew everything then that I now know.
It just strikes me, though,
that everyone who digs into this
winds up coming back with essentially the same story.
You know what I mean?
It's like there are not like 17 versions
of Sam Altman out there,
like depending on which reporter calls which different source.
I feel like we now,
sort of know, like, the broad outlines of this person's psychology. I don't know. I want to challenge
that. Like, I do talk to people who are big fans of Sam, um, who some of whom work for him,
some of whom don't. Clearly, this is a guy who has been able to, at various points, like,
lead very important technology projects and, like, rally people behind a vision. These people are
not, like, mindless sheep, like they're critical and discerning and thoughtful people. So I, I don't
want to like seem like I'm, you know, taking Sam's side on anything, but I just like, I think that
there are a lot of people with very strong feelings about Sam Altman, positive and negative. I think
the positive side tends to be more like people defending him in private and the public side
tends to be more people criticizing him. But I don't know. I guess for, for Ron and Andrew,
like, do you feel like there are vocal supporters who you came across in reporting the story who
had sort of no direct employment relationship with Open AI or Sam or, you know, weren't
leading companies that he invested in or something who were like, yeah, this guy seems pretty good
and smart and talented. Yeah, it was an 11-year-old who used ChatGPT to pass sixth grade.
Oh, my God. No, no. There were legit defenders of Sam on a number of these fronts who we
talked to for sure. I think a lot of this has to do with, like, what baseline expectations
are you starting from? Like, if you think of this as a business and you start from the premise
that people who run giant successful businesses have to say a lot of different things to a lot
of different people, like, why is anyone even, why is this a story? I think, though, there's a kind
of level setting here where one of the things you can do when you take a big sort of putting
everything in one place narrative effort like this is you
can start from the beginning and remember what the original pitch was. And when you go back to what
the original pitch was, the defense of, why are you guys being so naive? This is a normal competitive
business. Like, okay, so when you pitch this as a nonprofit safety focused research lab that would
aggressively comply with all regulation, like, were the people who believed that naive to believe
it at the time? Right. So that's when the defenses start to feel a little more like pressured to me.
Yeah. Also, like, for what it's worth, you know, it's like, oh, is it really a story that this guy's telling different things to so many different groups? It's like, that's not really a story that gets told about Satya Nadella. It's not really a story that gets told about Sundar Pichai. It's not really a story that gets told about Tim Cook, right? Like, there does seem to be something really unusual here. And my question for you guys, now that you've sort of spent so much time immersed in this company, is what do you think it means for OpenAI? Well, I mean, luckily we have a really robust independent tech media, so I was going to tune into TBPN
and see what their independent journalistic take on this would be.
Do you want to give listeners who may not be familiar with what you're talking about some context here?
I think the day after our piece closed, Ronan, or something, like late last week, OpenAI acquired TBPN, which is this big sort of tech chat show.
So that's one aspect of this answer, right?
That as Open AI expands and grows, they seem to be sort of buying up more of the press infrastructure to tell their own story.
Relatedly, by the way, a lot of announcements over there were concentrated right around when they knew we were going to be running, and were developed in the period where we were in these intensive conversations with them.
And many of them sort of pointed at the topics in the piece.
You know, they announced this new safety fellowship that's very airy.
They announced this new governance plan that's very sort of airy and ethereal but are meant to, I think, you know, occupy space in the conversation on the same topics.
And look, I mean, everyone, Ronan, you should say more about this, but everyone, including Altman and the open AI execs we spoke to, recognizes the economic pressures here. I mean, I think you guys were there when he said, oh, yeah, it's definitely a bubble and someone's going to lose a phenomenal amount of money, right? Yeah. So even putting sort of the sci-fi sky net stuff aside, you know, the economic pressures are unavoidable. And a lot of it has to do with this sort of pitchman.
rhetoric, the exact thing we're talking about, right? Because these things are contingent. It's not like,
oh, will it be a bubble or not? It's like, how hyped up will the cycle get is a byproduct of how
people like Sam go around the world talking about it. Yeah. I want to ask sort of a basic question
that I think people have probably raised with you, which is like, why does it matter who Sam Altman is?
If what we are talking about is a technology that could have profound implications on national security, the economy, potentially the future of humanity, it doesn't seem obvious to a lot of people why it matters who is running these companies.
Because a very nice person who is very honest and very transparent in all their dealings could still release a rogue superintelligence that blows up the world.
and a very, you know, manipulative person could release a very aligned model.
And so what we should be paying attention to are the models themselves,
not the people running the companies that make the models.
I'm not saying I believe that, but I'm curious,
what do you make of that argument,
that we are focusing too much on the humans and not enough on the technology?
We probably both have thoughts on this.
I think I have two, the first of which is it's worth noting
that while reasonable minds could perhaps differ on the question you just posed, the answer provided
by Sam Altman and the founders of OpenAI was very clear, which is actually part of the way
the entire enterprise was structured when it was founded as a nonprofit was they talked a lot
about avoiding an AGI dictatorship. They really believed that actually the person who gets there
first and has the most power over this technology is pivotal. The individual integrity is
formative to the way the technology goes and the way it's controlled and the way it's used.
The other thought that I have is, in my mind, you raise a valid point, and more significant than any
of this is the structures around these individuals. We have a technology emerging that could really
affect us all in all of the existential ways you just mentioned, and we don't have the regulatory
guardrails to keep an eye on these folks.
We are completely ceding the power to these individual companies and their whims, the mud fight between them, the quality control that each of them has or lacks.
I think that, to me, is the big question.
And the integrity of an individual figures in that, and it's important, but it reveals the weaknesses in the system.
If you have someone who potentially lies all the time,
could in the eyes of many critics be a danger,
the important thing is to have the structures that account for that.
There's a great quote that you guys have in the piece
from one of his former coworkers who talks about how Sam now has this track record
of setting up these elaborate guardrails to keep him in check
and then skillfully navigating around them.
And it made me wonder if you had seen this piece
in the information this week about tensions that are being reported between Sam and his chief
financial officer, Sarah Friar. She's reportedly expressed doubts that OpenAI will be ready for an
IPO this year. And according to the story, Sam has noticeably and awkwardly excluded her from some
conversations related to the company's financial plans, kept her out of some key meetings.
I read that and I was like, well, this is exactly what you guys are writing about in your piece,
right? You sort of bring in somebody whose job it is to look over the finances
of the entire company, get it ready for an IPO, but then for whatever reason, we're going to
sort of exclude her from some meeting. So anyway, I just sort of feel like we really are seeing
the exact pattern that you guys are writing about now repeating in real time.
Yeah. And I mean, just to agree with all of this, I think the thing that Kevin's bringing up
about, given the power of this, why are we focusing on one personality? Like, I think that's very
legit. I think that this is way beyond one person. This is way beyond one personality. It's not like
the point of the piece is Sam shouldn't be AGI dictator, so Elon should or Demis should or whatever, right?
It's to point out the fact that we're having a discussion about AGI dictators at all is insane.
These guys know it's insane. And yet this seems to be the race that they see themselves being in.
When he was fired, he was brought back in part because I think no one could really imagine an open AI without Sam Altman.
Do you think that's still the case?
I don't think it's unimaginable anymore.
I think that part of reaching the scale that they've reached is that you can have a, you know, Steve Jobs figure be replaced by a Tim Cook figure, right?
It seems like it's inseparable from reaching this scale that that becomes at least a possibility in people's minds, right, Ron?
I mean, does that strike you that way?
Absolutely.
I think the landscape has changed substantially over the period of time we were reporting this story.
The fact that gradually more and more people were talking openly about this critique is very telling.
We report in the piece that there are periodic spasms of senior executives at OpenAI talking about succession again.
Of course, naturally the company denies this, but also very interesting that in recent forms of that discussion,
there has been talk about Fiji Simo being sort of the first potential successor candidate who could slot in, in any discussions of that type that circulate.
Between our asking about that and the piece coming out, obviously, Simo has now gone on leave for medical reasons.
There's a lot of reshuffling. We see it in the Sarah Friar case. I think you're right to link it to that quote that's in the article about constraints being sidelined.
and yet I think these doubts and questions persist and are now much more out in the open.
On the leadership question, it just strikes me that for somebody who I assume wants to stay CEO for a long time,
it's interesting to be that he's hired so many former public company CEOs to be his top lieutenants, right?
It's like he has the former CEO of Instacart there. He has the former CEO of Nextdoor there.
He has the former CEO of Slack there. So, you know, you're bringing a lot of really sort of
sharp and pointy elbows into the room when you do something like that.
I'm trying to tell Sam that there's danger here.
Pro tip. If you're listening, Sam.
You know, there are people in this piece talking about earlier tracks of Sam Altman's career
where they feel he was deliberately avoiding that.
Actually, part of what underpinned the terrible, terrible fumbling of the firing effort
was a feeling that Sam had kind of stacked the board with,
as one former member put it, JV people.
You know, certainly if we're being more charitable than that,
people who were unprepared for the ruthless corporate warfare that ensued.
And, you know, I think one thing that has accompanied the emergence of this
as a more openly discussed critique is that there's more people around this company,
more stakeholders wanting, you know, professionalizing influences in the mix.
I have to ask about one detail that I loved in the piece,
which is that the first time that Sam Altman and Dario Amodei
were scheduled to meet.
They were going to meet at an Indian restaurant for dinner.
This was back in, I guess, 2015.
And Sam texted him and said that his Uber had gotten in a crash
and he was going to be 10 minutes late to dinner.
Now, you did not editorialize on that in the piece,
but knowing you both, I'm sure that you went back
through the Uber
FOIA requests and found the
logs of Sam Altman's
Uber ride that night.
Is it your belief that Sam Altman's
Uber actually got in a crash?
I think we're just going to leave that
as non-editorialized
and let it stand right there
by itself. I mean,
we also, I will say,
had this conversation
and really liked
just presenting that
uninflected for consideration.
Okay, if you are the Uber driver who was driving Sam Altman to dinner with Dario Amodei and you are listening to this show, we do want to hear from you.
We do want to hear your side.
hardfork@nytimes.com.
We will get to the bottom of this.
We will.
Well, it's a great piece.
People should go read it.
Please do not investigate any other AI companies before my book comes out.
It was a very stressful week for me.
Yeah, why don't you guys take a nice long spring, summer break before you get back?
Yeah, look into some politicians or Hollywood executives or something.
We'll send you some names.
Luckily, it takes us as long to write a piece as it takes you to write a book.
So I think you'll beat us if we do anything else.
There's two of you. It should be faster.
Totally.
Ron and Andrew, thanks so much for coming.
Thanks, guys.
Thanks, guys.
Your hats are in the mail.
When we come back, what our Spanish-language friends would call una cosa buena.
Did you just Google that?
No.
You Clauded it?
Yes.
Okay.
Well, Casey, it's been a pretty heavy show today.
So we thought we wanted to end on a positive note with our segment called One Good Thing.
One Good Thing, of course, our segment where we each talk about one thing that's been tickling our fancy lately.
Kevin, why don't you go first this time?
Okay.
Casey.
I am in love with this space mission.
Yes.
The NASA Artemis II mission,
I have been totally and earnestly obsessed.
My wife was like,
you sure are talking about this space mission a lot.
I have been glued to this thing.
And I have been filled with a childlike glee and wonder
that I did not know I still had the capacity to feel.
Now,
what exactly are they doing on this mission?
They are orbiting the moon. They are going further than any humans have gone from Earth before,
252,756 miles from Earth. And if you're wondering, how many miles is that? Well, the New York Times had a
helpful comparison list. And what do they find? You would need a chain of 2.37 billion of Nathan's
famous hot dogs to cover the distance that this spacecraft has gone from Earth. That's great.
Something we can all easily visualize.
Thank you for that comparison.
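Side note for the curious: the hot-dog figure roughly checks out, assuming a hot dog a shade under seven inches long. A quick back-of-the-envelope sketch (the hot dog length here is our own assumption, not a number from the show):

```python
# Sanity-check of the "2.37 billion hot dogs" distance comparison.
# Assumption: one Nathan's Famous hot dog is about 6.76 inches long,
# which is roughly what the cited figure implies.

MILES_FROM_EARTH = 252_756           # distance cited in the episode
INCHES_PER_MILE = 5_280 * 12         # 63,360 inches in a mile
HOT_DOG_LENGTH_IN = 6.76             # assumed hot dog length, inches

total_inches = MILES_FROM_EARTH * INCHES_PER_MILE
hot_dogs = total_inches / HOT_DOG_LENGTH_IN
print(f"{hot_dogs / 1e9:.2f} billion hot dogs")  # ~2.37 billion
```

With a rounder seven-inch hot dog you'd get closer to 2.29 billion, so the Times presumably measured an actual hot dog.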
Casey, I am learning things that I never expected to learn.
I've been watching this with my kid.
I have become completely obsessed with concepts and terms that I did not know a week ago,
including Corona Structure, the Terminator line, which I know you're wondering,
that sounds scary.
It's actually the line that separates the sunlit side of the moon from the side that is dark.
Oh.
I also learned that we don't call
it the dark side of the moon. That's not the preferred astronomical term. What do we call it? The far side
of the moon. The far side of the moon. I am obsessed with all of these astronauts, the four of them up there,
Victor, Christina, Jeremy, Reed. This is my Mount Rushmore. I love these people who I've never met.
They are adorable. They are incredibly brave. And I think we should go to the moon every single year.
I think we should give NASA whatever budget it needs to do this, because it has re-ignited
my faith in humanity. Absolutely. You know, I also saw somebody on social media was posting that
because the mission specialist Christina Koch had communicated with Houston's Jenny Gibbons during the
mission, this mission actually passed the Bechdel test, which you don't often see on these missions.
So I thought that was cool. Also, somebody pointed out, they said, you know, the coolest thing
about going on one of these missions, Kevin, would be leaving Florida at 5,000 miles an hour.
So that resonated with me as well.
Okay, you're more interested in the jokes.
I am filled with childlike wonder over here,
and I just think this is the coolest thing imaginable.
It is very cool.
You know, recently, I had an opportunity to go stargazing.
I'm not sure if you've been stargazing recently.
I was up on Mauna Kea on the island of Hawaii.
Oh, flex.
And we had a really cool telescope there with our guide.
And I got to stare at the face of the moon.
and it inspired a childlike sense of wonder in me as well.
But it did not make me want to go there
because it looked quite bleak, actually.
You wouldn't go to the moon?
No, there's no Wi-Fi.
Okay, Casey, what is your one good thing this week?
Today, Kevin, I want to talk about the only thing
that can compete with the moon
when it comes to inspiring childlike wonder in a person,
and that is a weather app.
Okay, I'm listening.
So recently I was reading about these entrepreneurs,
Adam Grossman, Josh Reyes, and Dan
Bruton, and they are the team behind Acme Weather, which you probably have not heard of yet,
but I bet you've heard of Dark Sky.
Yes.
Dark Sky was, by consensus, the best weather app on iOS, and while it reigned during the 2010s,
and I'm using reigned in the sort of...
The non-meteorological sense.
It would tell you whenever it rained, and now I am using the meteorological sense.
Very good app.
So this app, yeah, this app was bought by
Apple in 2020, which was like kind of a head scratcher. Apple already had a weather app. It was fine.
And then Apple sort of integrated some of its forecasts and some of its other features into its
weather app. And then shut dark sky down in 2022. And this made people really sad because I think a
lot of us feel myself included like the Apple weather app has never lived up to what dark sky was in
its heyday. It's like a question mark. It's like, you know, maybe it's going to rain.
Exactly. Well, so these guys get back together and they say, Frick it, we're doing
weather apps again, and they make Acme Weather. And so you can download this now for iOS. It is
apparently coming later to Android. And I know what you're thinking, Kevin, which is what could you
possibly build in 2026 in a weather app that could differentiate it from all the other weather apps
that are already on the market, right? Yes. Are you wondering this? I am wondering this.
Well, let me tell you a few things. Number one, they don't just tell you the weather. They show you a range
of possibilities in a line chart. So most of the time, it'll be like, yeah, it's going to
be 63 degrees in San Francisco today.
But every once in a while, there's a lot of volatility
in all the different signals that they use to
predict the weather. And then you say, okay, I don't
actually know what I'm walking into today.
I better bring a couple of layers.
This is the weather app for rationalists and
other believers in Bayesian statistics.
Exactly. Some of the other things that
this app does, they will send you a push
notification if they think there's going to be
lightning in your neighborhood. Okay. They will
also do that when they think a sunset
is going to be beautiful
wherever you happen to be. Wow. They'll
send you an umbrella reminder if it's going to precipitate in the next 12 hours, and they'll send
you a sunscreen alert when the UV index is high. But I've saved my last two favorites for the end.
Number one, they will send you an alert when the Aurora Borealis may be visible where you are.
That's beautiful. I haven't gotten that notification yet, but I wake up every day hoping I'm going to
get my Aurora Borealis notification. You've got to go to Scandinavia, I think.
Number two, and this is just in time for pride, they will tell you when there is a rainbow in your
neighborhood. Are you kidding me? This is such a good idea for a weather app. Who does not want to be
sitting at your wage slave job? You haven't been outside in like seven and a half hours. And then
Acme Weather tells you, hey, guess what? There's a rainbow in your neighborhood. You're going to book it
outdoors and you are going to behold the majesty of creation. How are they possibly collecting that
data? Well, interestingly, they're taking this ways-like approach where they're inviting their community to
submit reports. And so if a bunch of people say, hey, rainbow in my
neighborhood, they're going to go out and send out a notification.
So now look, this app does cost $25 a year, and I know, you know, probably most people out
there are perfectly content with a free weather app on their phone. That is fine for you.
But as somebody who loves cool things, new ideas, people having fun, I just wanted to shout
out Acme Weather because I think it's a really cool thing.
What is the likelihood that this app will be purchased by Apple and then shut down?
I mean, if that happens, I hope these guys get paid again because somebody has to move the
weather app industry forward. And these are the folks who are doing it. I love that. Like,
Grandpa, how did you make your fortune? Well, I built 17 weather apps that were identical and then sold
them all to Apple. I just also think it's inspiring that a time when some companies are like,
we're going to make a system that is going to force the world to rewrite all software. There are other
guys who are like, what if there's a rainbow in my neighborhood? I want to find out about that.
And those are the people that I want to highlight on today's show, Kevin. Okay. Well, download Acme weather
before the heat death of the universe renders weather irrelevant.
And tell us whether you liked it.
That was a good thing.
Thank you.
Thank you for alerting me to this wonderful rainbow detector.
Well, thank you for alerting me to the existence of the moon.
I know you weren't a big believer in the moon before, but hopefully I've convinced you today.
Well, somebody told me something about a soundstage and, you know, maybe the landing was faked, so I've just been curious.
I think, you know, we're the only podcasters who actually believe in the moon.
Yeah, that's our competitive advantage.
Hard Fork, where we believe that people have been to the moon.
Before we go, we are saying goodbye this week to our wonderful executive producer, Jen Poyant.
Jen has been with the show for years since almost the very beginning, and she's been a critical force in helping us make the show and conceive the show.
So Jen is leaving the New York Times for a new adventure,
but we wanted to just give her a special shout out
and say thank you from the entire Hard Fork team
for all of the amazing work you've done.
It's true, Jen has been a friend and mentor to us both,
and we will miss her terribly,
but she will always be part of the Hard Fork family,
which means she has to bring a dish to the potluck.
Thanks, Jen.
Hard Fork is produced by Rachel Cohn and Whitney Jones.
We're edited by Vierin Pavich.
We're fact-checked by Caitlin Love.
Today's show was engineered by Chris Wood.
Our executive producer is Jen Poyant.
Original music by Marion Lozano, Diane Wong, Rowan Niemisto,
Alyssa Moxley, and Dan Powell.
Video production by Soir Roque, Pat Gunther, Jake Nickel, and Chris Schott.
You can watch this full episode on YouTube at YouTube.com slash hardfork.
Special thanks to Paula Szuchman, Pui-Wing Tam, and Dalia Haddad.
As always, you can email us at hardfork@nytimes.com.
Send us your zero-day critical security vulnerabilities.
Actually, please don't.
