Hard Fork - At the Pentagon, OpenAI is In and Anthropic Is Out
Episode Date: March 1, 2026

On Friday, President Trump ordered federal agencies to stop using Anthropic's A.I. systems, and Defense Secretary Pete Hegseth designated the company a "supply chain risk." Then, just a few hours later, the OpenAI chief executive, Sam Altman, announced that his company had reached an agreement with the Pentagon. The deal ensures its technology won't be used for the same two safety concerns Anthropic raised: domestic mass surveillance or autonomous weapons. So what is going on? Is this a political vendetta between the Pentagon and Anthropic? Or are there substantive differences between the agreement Anthropic was offered and the one OpenAI signed? We cut through the confusion.

Additional Reading:
OpenAI Reaches A.I. Agreement With Defense Dept. After Anthropic Clash
Trump Orders Government to Stop Using Anthropic After Pentagon Standoff

We want to hear from you. Email us at hardfork@nytimes.com. Find "Hard Fork" on YouTube and TikTok. Subscribe today at nytimes.com/podcasts or on Apple Podcasts and Spotify. You can also subscribe via your favorite podcast app here: https://www.nytimes.com/activate-access/audio?source=podcatcher. For more podcasts and narrated articles, download The New York Times app at nytimes.com/app. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Transcript
Casey, where are you? That beautiful background does not look like your house.
I'm in a ski chalet, in keeping with the hard fork tradition of recording bonus episodes
in the strangest places possible. But here's the good news, Kev, because while I was invited
on a ski trip, I've never in my life had any intention to ski. And so my plans for this morning
were either to talk about AI with my fiancé or talk about AI with you. And we flipped the coin
and it's you. How are you doing this morning? Wow, I feel so honored. Well, we have a lot
to talk about today because it has been a very crazy 48-hour period in the AI industry
and this dispute between the Pentagon and Anthropic, and now OpenAI, which sort of came out of nowhere
at the 11th hour, is now involved. It has been truly an insane day and a half in my life.
How has it been for you? Well, let me put it this way. Listeners, Kev, imagine you get engaged.
and then one week later, your fiancé is declared a supply chain risk.
So, yeah, it's been a really, really crazy few hours over here as well.
And just because we are going to talk about Anthropic and OpenAI and all of this today,
we should make our AI disclosures.
Mine is that I work for the New York Times, which is suing OpenAI, Microsoft,
and perplexity over alleged copyright violations.
Yes, and if you miss the other big breaking anthropic story from over the past week,
the man that I am now engaged to works there.
Well, where should we start, Casey?
Well, look, I think if you're tuning in, maybe you've heard the biggest headlines, but I think
it's worth hitting you with maybe just a few key bullet points. One is that in the story that we've
been covering over the past couple of episodes, it has come to the point of crisis where Anthropic
said it had two red lines that it would not cross. The Pentagon said that it was going to move to
declare the company a supply chain risk. And then somehow, within 24 hours of that happening,
Sam Altman and OpenAI swooped in and signed a deal that they say will observe those safeguards.
And so it was just a truly chaotic 24 hours and we should dig into it.
Yes. And none of this has been happening through like normal diplomatic channels.
Basically, as far as I can tell, the entirety of this conflict has been contained in like a handful of posts on X and a handful of blog posts and some stuff that has been leaking out from either side.
So I have been making calls for the last two days to the people who are involved in this situation, trying to get some information.
And I've gotten a little bit, and I'll happily share that with you.
But I would say confusion reigns.
Like, even the people who are directly involved in this situation are confused about the details here.
And so I think we should also just say up front that like there is still a lot that is unknown about what's going on right now.
Absolutely.
Maybe to start, Kevin, we could go back to a part of the story that I think is pretty well known,
which is just sort of what happened between Anthropic and the Pentagon,
particularly in those final hours where the Pentagon finally said,
hey, this isn't going to work, we're not going to give you what you want.
And time ran out, and they did not come to an agreement.
Yeah, this escalation started on Thursday, February 26th,
when basically there was a day left until this deadline that the Pentagon
had given Anthropic, and Dario Amodei, the CEO of Anthropic, put out a statement on Anthropic's
website, basically saying, we are not going to compromise no matter what on these two exceptions
that we want, mass domestic surveillance and fully autonomous weapons.
He explained why they were not going to compromise on those, and then he said, in the line that
a lot of people have been quoting, that, quote, these threats do not change our position.
We cannot in good conscience accede to their request. Basically, we have been trying to work out a deal
while preserving these exceptions that are very important to us, but we have not been able to do so.
And probably worth saying, Kevin, that I think a reason that quote stood out so much was that
I cannot remember any tech leader invoking conscience as a reason not to do something since Trump
has been reelected. So it felt like a shift in tone for the whole discussion around tech and power
and just something we have not seen from Silicon Valley in a while.
Yes. And what I understand from talking with folks close to the situation is that even after this
post from Dario Amodei, there were discussions happening between the Pentagon and people from
Anthropic. They were trying to work out the contours of a deal. There was some sort of
willingness to at least change some of the language around these exceptions, but
While these discussions are happening in the back channels between the officials at the Pentagon
and the people at Anthropic, President Trump posts a statement on Truth Social late Friday afternoon
just before this deadline that the Pentagon had given Anthropic.
He said that, quote,
The United States of America will never allow a radical left woke company to dictate how our great military fights and wins wars.
He also said that he was directing every federal
agency in the United States government to immediately cease all use of Anthropic's technology
with a six-month phase-out period, basically for federal agencies to switch from using Claude
to using other models. One thing the president did not mention is this idea of declaring
anthropic a supply chain risk, right? This is something that we talked about on the last show.
Basically, this is a much stricter designation, something that we don't think has ever been
applied to a major American company before. It's usually used for Chinese chip suppliers or things
like Kaspersky Lab. But Trump did not say that he was going to designate the company a risk to
the supply chain. And so I think some folks, Anthropic and elsewhere, thought, okay, this is like
a deal that we can live with. We are going to, you know, lose our government contracts, but we're not
going to be declared essentially like an enemy of the state. And more than that, Kevin, he also did not
invoke the Defense Production Act, right? Which, like, to me, was the true worst-case scenario here
where the United States government would effectively have nationalized or partly nationalized
Anthropic and forced it to make a version of Claude that did its bidding. So when I saw the
Truth Social post, my initial thought was like, okay, maybe they're just going to walk away from
this whole debacle and try to save some face. Yes, it did look like that. And then a little over an
hour after Trump's Truth Social post, Pete Hegseth, the
defense secretary posted his own take on the matter on X, in which he said that he was directing
the department to designate Anthropic a supply chain risk. He said, quote, effective immediately,
no contractor, supplier, or partner that does business with the United States military may
conduct any commercial activity with Anthropic. So this was a pretty severe escalation.
And the people who thought, okay, maybe Anthropic is going to, you know, get away here with not
being declared a supply chain risk thought maybe they're not after all.
Yeah.
Now, at the moment of this recording, so far, the only evidence that we have that the Pentagon
plans to declare Anthropic a supply chain risk is this social media post, right?
Like, my understanding is that Anthropic has not been informed of any new proceeding
against the company.
Anthropic says they would fight it in court.
So while this may happen and we should talk about what it would mean if it does, for the
moment, it also appears like it could just be a threat.
So meanwhile, while all of this is going on between Anthropic and the Pentagon, OpenAI has
been working on its own deal with the Pentagon to use its models inside the government's
classified networks.
There has been some reporting on a leaked message that Sam Altman had sent to OpenAI employees
on Thursday, basically indicating that they were standing in solidarity with Anthropic,
which is very unusual because these companies do not like each other
and their leaders have a long, contentious history with each other.
But basically he was saying to OpenAI's employees,
we are not going to sort of cave on these exceptions,
that we are committed to not having our models used
for mass domestic surveillance or fully autonomous weapons,
and actually saying some sort of supportive things about Anthropic.
But a day later on Friday night,
after this whole deal between Anthropic and the Pentagon had blown up in spectacular fashion,
Sam Altman went on X and posted that OpenAI had reached an agreement with the Pentagon
to deploy our models in their classified network, basically saying we have confidence that our models
will not be used for domestic mass surveillance and autonomous weapon systems and that the Pentagon
had agreed with those principles and then they put them into our deal.
So those are the events of the past couple of days, and I think when I summarize them, it sounds insane because what we effectively have are two companies, OpenAI and Anthropic, that claim to have identical red lines when it comes to the use of their products by the military, mass domestic surveillance and fully autonomous weapons.
One of them, Anthropic, has been declared a supply chain risk, which is a very punitive, hard
measure that basically requires them to cut off all business with the U.S. military and the
federal government. The other, OpenAI, just announced a deal with the Pentagon to use its
systems in classified networks with the same two red lines that Anthropic had objected over.
There's some nuance there. There's some details that I'm sure we'll get into. But I think if you
just sort of zoom out and look at the facts of the case, it is a truly insane series of events.
It is, and I think we should just talk now, Kevin, about this nuance that you bring up.
You know, we said at the top of the show, there is some uncertainty here.
Kevin and I have not been allowed to review the contracts that Anthropic and OpenAI have with the military,
although we would love to.
We're hardfork@nytimes.com.
But I think what we can tell you is that it appears that this conflict comes down to this "all lawful use" standard, right?
Keep in mind, the Pentagon signed a deal with Anthropic that had in place the red lines that it is now freaking out about.
It went back to its AI labs and it said, hey, we want to change this.
We want you to say, we can use this for anything that is legal.
On paper, that sounds great.
Here's the problem.
We don't meaningfully regulate the use of AI in this country.
And as we've talked about on the show in the past, we do not have a national privacy law.
These are among the reasons that Anthropic has become very concerned about what powerful AI systems might do if they were given to the military in a country where there are not actually laws around how this powerful new technology can be used.
And I think domestic surveillance one is a really interesting one, Kevin.
You know, the Pentagon has said, well, you know, we're not going to domestically surveil people.
That's illegal.
Hmm.
Well, at the same time, Kev, there are other federal agencies right now.
that have mounted what amounts to a social media dragnet
looking through the social media posts
of people trying to immigrate to this country,
trying to find posts that are critical of the administration
and then using that as a pretext not to allow them to immigrate, right?
Now, maybe the Pentagon will say,
well, you know, that's not surveillance, you know,
that's just part of our immigration process.
But I think to folks at Anthropic, they would say,
well, no, no, no, if we provide powerful tools
that can go through every social media post in real time,
that might be an area that we are uncomfortable getting into, right?
And so this is where I think we start to understand what is different between Anthropic and OpenAI here, right?
Is Anthropic has said, we're serious about this stuff.
And I'm sure it's possible to write into a contract a little bit of legalese that gives them enough cover to go back to their employees and say, hey, don't worry, we're not going to do anything untoward.
While at the same time doing a little wink-wink, nudge-nudge to the Pentagon, and the Pentagon could use these tools to do exactly what they're doing with the social media accounts of would-be immigrants, right?
And so to me, that is what I see happening here and seems like a significant part of the conflict.
Kevin, I know you've been on the phone like all weekend.
What do you make of that analysis?
Yeah, I think that's largely my understanding.
When he announced the agreement that they had made with the Pentagon, Sam Altman did put out a statement that left some room for interpretation, I think, on what OpenAI had actually agreed to.
So I will be very curious to see the actual language of these contracts, if that ever makes it out into public.
Again, we are hardfork@nytimes.com.
But what I can tell you from talking with folks on all sides of this over the past couple of days is that OpenAI is framing this as essentially an identical set of constraints, right?
They don't believe that they have agreed to anything that would require them to use their models for mass domestic surveillance or for autonomous weapons.
But in his statement, Altman said that the Pentagon, quote, agrees with these principles, reflects them in law and policy, and we put them into our agreement.
So basically, if you kind of parse that very carefully, he is just saying sort of what the Pentagon has been saying, which is that they're not going to do mass domestic
surveillance because it is illegal. And what Anthropic has been insisting on this whole time is that
actually there are forms of mass domestic surveillance that are not illegal, as the law is currently
written. And so we want to prohibit the use of our systems for that stuff, too. More than that,
Amodei has also said that during their negotiations, Anthropic was offered similar concessions,
but the Pentagon accompanied those proposed concessions with, quote, legalese that would have
made them ineffective, which is entirely consistent with what the
undersecretaries of this agency are saying on X, which is that they were not going to let any
private company dictate how they wage war, right? So I just think that's very important to say
is that Anthropic is telling us, hey, we were offered a very similar deal and it did not
protect you as an American in the way that OpenAI is now telling you that you are being
protected. Yeah. I mean, I think when you boil it all down, there are basically two options here.
One is that the administration and the Pentagon just have a political vendetta against Anthropic.
There's a bunch of language in the statements coming out of Pentagon officials' X accounts about how these are all, you know, a bunch of woke liberals who are unpatriotic.
And I think there is some sort of sense in which this is just about style and tone and personality.
Emil Michael, one of the undersecretaries at the Pentagon who's been negotiating this deal, just clearly does not like Dario Amodei
at all. And I've heard that from multiple people, actually, that there's, like, particularly
bad blood between those two. And so I think that's option one, is like, this is purely a political
vendetta. OpenAI has been chosen for this contract because the administration likes them more,
and there's sort of no substantive difference between what these two companies have agreed to do.
The other option is that OpenAI has actually agreed to things that Anthropic didn't, that there
are substantive differences between these agreements and that OpenAI is sort of using this sort of
legalese, as you put it, to sort of frame this as a victory when really they have conceded
to the thing that Anthropic objected to. I'm not sure yet which of those two is more true,
but I don't think anyone in this situation, except maybe the Secretary of Defense, knows.
Yeah. You know, I mean, there are two really important things about what you just said, Kevin.
One is the idea that the federal government is trying to commit what Dean Ball, who was a member of the Trump administration and helped to write its current AI policy, called an attempted corporate murder, just based on ideology.
And man, if you lived through the bias and censorship debates on social media of the early 2020s, it's really crazy to hear elected officials saying that because we have a different ideology,
than you, we are going to take your contract away, designate you a supply chain risk,
and try to prevent other people from working with you, right? So that is, honestly, Kevin, that is how
the Chinese government regulates its tech companies. Either you get on board with the party or
they crush you, right? So that, I think, is really chilling. And again, not just to me,
to former members of the Trump administration, okay? That feels really important to say.
Yeah, I absolutely think about that. And I've been looking back through sort of
historical examples of the U.S. government taking punitive actions against American companies.
And I think it's safe to say that this fight with Anthropic and the Pentagon is by a fairly
wide margin the most punitive action that the U.S. government has taken against a major American
company at least this century and possibly ever. We have seen this administration bully and strongarm
and jawbone companies in the tech sector before. We have even seen them try to block certain
companies from doing business with the government, but we have not seen them try to kill a company
for what, as far as I can tell, are contractual disputes and ideological differences.
It's really crazy.
But of course, but of course, this is why almost all of Silicon Valley has lurched to the
right over the past two years.
It's why Tim Cook is giving golden trophies to President Trump.
It's why Greg Brockman at OpenAI is donating $25 million to Trump's political action
committee, right?
There is this sense that you have to be in
line with these people or they're going to try and crush you. Until now, though, we hadn't
actually seen the Trump administration try to crush a company. But now we have, and I just
sort of can't imagine what kind of chilling effect that is going to have across Silicon Valley.
Casey, I want to get your take on the employee activism that we've seen over the last couple of days.
There was an open letter petition, whatever you want to call it, going around that was signed
by some employees of Open AI and Google DeepMind and other leading AI companies, basically saying,
like we stand with Anthropic.
We also do not want to make tools
for mass domestic surveillance
and autonomous killing
and sort of expressing solidarity
with the stance that Dario Amodei has taken.
Do you think that's meaningful?
Do you think that's part of what is fueling
some of the decisions that these companies are making?
Because that has been true in the past.
Employees of these companies
have had a lot of leverage
over things like military contracts.
I do think it is very meaningful.
There are a lot of very well-meaning people
at OpenAI, at Google, at DeepMind, as well as Anthropic, who truly do not want to see the most
dystopian possible AI scenarios come to pass. And so it matters that they're going to their
leadership and saying, we are not going to participate in this. I hope that those employees
get a hold of the contracts that their employers are signing and really scrutinize them.
I hope that, if they find out that their technology actually is being used for
something that looks pretty domestic-surveillance-like, they would blow the whistle, right?
We really are going to need to rely on these employees in the coming years as the technology
improves and as the Pentagon, you know, potentially does the thing that it is telling us today
that it is not going to do. Yeah, I think one other important thing to note here is that Sam Altman
and OpenAI are trying to very carefully explain this to their employees in a way that does not
suggest that they are just capitulating to the demands of the Pentagon. OpenAI is saying to its own
employees that they believe they got actually a stronger deal than the one Anthropic had in
terms of protecting against mass domestic surveillance and the use of their systems for
autonomous weapons. Several people pointed me to this sort of line in Sam Altman's post
about how they were going to create what he called a "safety stack,"
basically a set of protections built into the model itself that the Pentagon is going to be using
in classified situations that would essentially prevent the use of ChatGPT, presumably, for the
things that they're worried about. Yeah. By the way, this is the same company that told us it was going
to build safeguards to make sure that Sora couldn't be used to make images of Bryan Cranston,
Kevin. So I'm just going to suggest that sometimes when OpenAI tells you it's going to build
guardrails, they don't actually show up on time.
Yeah. I've also talked to people who say that this is basically security theater that, you know, if you dump a bunch of data that you've collected on Americans or purchased from a data broker into an AI model, like it is not going to be able to tell whether that information was legally gathered. It is not going to be able to tell where that information came from. And so this is not really a meaningful change.
Let me underscore that point, Kevin, because it is so important. It is legal
for data broker companies to buy up data on millions of Americans, and it is also legal for
federal agencies to buy that data. Now, that does not constitute domestic surveillance to a legal
standard, but it is functionally equivalent, right? So this is the whole ballgame here, right? The
Pentagon already has all of the tools it needs to do what is practically domestic surveillance. It's just
not called that because it's legal to buy data about Americans from data brokers. So I understand
we are so deep in the weeds here, but the reason we wanted to do this episode today is to try to persuade you:
This is very high-stakes stuff.
It is being done in the shadows, and the nuances really, really matter.
Yeah, I think the details and nuances are where the whole story lies right now, and it's hugely high stakes.
And so I think on the surface, this might look like some kind of boring contractual debate between AI companies.
But this is really about the sort of fundamental question of who controls technology.
Is it the people who build the technology or is it the militaries and the governments of the countries where that technology is built?
And I think that is sort of the high level question under debate here.
And it's one where the Pentagon and Anthropic did not see eye to eye.
I mean, this story, Kevin, is the whole reason that you and I have just never been on the side of "AI is all hype and it's fake
and it's a bubble that's about to collapse," right?
We saw these systems improving in real time.
We knew that very soon they would be in a position where they could do the sort of instant
analysis of things like social media data, geolocation data, and other data that could just
potentially create massive new systems of oppression.
And we are now on the precipice of those systems being potentially rolled out under the guise
of a policy that is called "all lawful use" because there is no law
to regulate them. So it really just could not be more serious, and I'm glad we're getting a chance
to talk about it today. I want to bring up one more thing, though, which is the limb that Sam Altman
may have just crawled out on, right? As I'm reading through his statement, I'm trying to square it
with what I know. You know, as you were saying earlier on this show, it's like, okay, so you're telling
me that the same day the Pentagon tries to kick one company out for having two things that it will
never do, it signs a deal with another company that agrees it will never do those same two things.
It's so hard to square that, right? And yet, you and I have both covered Sam for a long time.
And we know that a criticism he has gotten from his former coworkers is he tells people what they
want to hear, right? This was at the root of him being fired in 2023: his coworkers saying,
this guy is telling me what I want to hear. He's not being consistently candid. And he's just sort of
leaving me in this state of perpetual confusion. And so now we fast forward to a moment that is so
much higher stakes than that, right? Because we have to take Sam Altman's word that he has signed a
deal that will not enable mass domestic surveillance of Americans in the short term and maybe
autonomous murder bots in the medium term, which is what, I don't know, three years, five years,
who knows. So the reason that I note that, though, Kevin, is that in every case, it has always
come out in the end what the truth was, right? And I hope the truth here is that Sam got his red lines.
I hope the truth is that somehow he arm-wrestled Pete Hegseth down and Pete Hegseth said,
okay, you got me, Altman. We're not going to do any domestic surveillance for real.
And we're not going to do any autonomous murder bots for real. My fear is, though,
that either through naivete or deception, he has misled us and we are going to find out sooner or
later that in fact those two use cases are not only legal, but they're happening.
Right. I think that's still a big TBD. And I would also like to know,
Sam, if you're listening, please come on and talk to us about this. Because I think there's still
a lot of unknowns here. But I would also bring up another point, which is, you know, one of the big
criticisms of Anthropic over the years has been about this idea of regulatory capture, right?
There are many people, including some very high up in the Trump administration, who believe that all of Anthropic's sort of warnings and statements about the risks of powerful AI systems, the speed with which they're accelerating, the things that they could potentially do, have been kind of a pretext, right?
That they're not actually sincere about this, that they're just trying to get a bunch of onerous regulation passed so that they can sort of enshrine their status as an incumbent and prevent
smaller startups and others from competing with them. So we've heard that term a lot,
regulatory capture. This, to me, is an example of regulatory capture, right? This is a company,
OpenAI, coming into a very hot dispute between their biggest rival and the United States government
and effectively using what seemed to be vibes, charm, possibly, you know, some better political
instincts to get a deal done through their relationships with the government.
So call it what you want. Call it savvy politicking or negotiating, call it, you know, hair-splitting
over the details of this contract. But this is effectively a company realizing that
if it wants to do business with the U.S. government, it has to essentially abide by the terms
that the U.S. government has set. That is as textbook an example of regulatory capture as you're
ever going to see.
Yeah. So where do we go from here, Kev?
So I think there are a bunch of unresolved questions that I'm going to be looking at over the next few weeks and months.
One of them is like what actually happens to this supply chain risk designation?
This is something that the Pentagon has said it's going to do to Anthropic, but we have not actually seen any, like, formal language about that other than Pete Hegseth's post.
And we have also not fully understood what that actually would
mean for Anthropic or what kinds of relationships it would be forced to sever with various
other government contractors.
So that's sort of one bucket of unknowns is like all the legal and contractual details of this
supply chain risk designation for Anthropic.
We also still have a lot to learn about what the other AI companies are being asked to agree
to that Anthropic wouldn't and what companies like OpenAI may have done to get their
deal through while Anthropic's was being rejected.
And then I think there's a third bucket, which is like, what does this do to the popularity of these companies with consumers?
I think we are starting to see very early signs that some consumers who are very upset about the Pentagon's demands here are switching from ChatGPT to Claude.
One of those users appears to have been Katy Perry, the pop star, who posted a screenshot on X of her Claude Pro plan that she had newly purchased,
circled with a little red heart.
So Katy Perry really said: the Anthropic employees, those are my California girls,
and they're undeniable.
I should also say, like, I have to underscore that this is exactly the kind of moral conflict
that Dario Amodei has been preparing for his entire life.
One of Dario's favorite books, a book that he used to buy for all Anthropic
employees, is called The Making of the Atomic Bomb. It's a very long history of the Manhattan
Project during World War II. And the reason that he wanted Anthropic employees to read this book
is that he believed that eventually what they were building, the AI models, the chatbots,
would become as important to national security, to the government, to the future of the global
order as nuclear weapons. And he wanted to sort of instill in them the idea
that they were doing something with profound moral and ethical consequences.
He understood that it's not just like building technology,
that if you build something that is powerful enough,
the government is going to want to use it,
and they're going to want to use it on their terms.
And so I think this is exactly the shape of conflict
that he was envisioning when he was telling people
to read this book about the Manhattan Project.
I think you're exactly right.
It has been so amazing, honestly, to watch how many predictions
that were made by, like, the rationalists and the LessWrong community
in the early 2010s have started to come true.
These sort of conflicts between the government
and the big AI labs,
while they were not predicted with any degree of specificity,
there was still a thought that we were going to get here.
And now it sort of seems like that moment has arrived.
I'm sure it must feel extremely surreal to Dario
as well as, you know, many other people
who have been working on this for a long time.
I just hope that we can navigate out of it safely.
Yeah.
Well, truly unprecedented 48 hours or so.
I'm sure a lot more is going to unfold in the days ahead,
and I'm sure we'll be returning to the subject here on Hard Fork.
But perhaps by then I'll be out of this ski chalet.
Yeah, I hope you make it down safely.
And I think you should go skiing.
I know you're not a fan, but I think you should do it.
If you knew where my center of gravity was,
you would know that Kevin Roos just tried to kill me live on air.
Hard Fork is produced by Whitney Jones and Rachel Cohn.
We're edited by Veerun Pavich.
Today's show was engineered by Katie McMurrin.
Our executive producer is Jen Poyant.
Original music by Alyssa Moxley and Dan Powell.
Video production by Soya Roque, Pat Gunther, Jake Nicol, and Chris Schott.
You can watch this whole episode on YouTube at youtube.com/hardfork.
Special thanks to Paula Szuchman, Pui-Wing Tam, and Dalia Haddad.
You can email us at hardfork@nytimes.com with your AI red lines.
