The Journal. - The Battle Over AI in Warfare
Episode Date: March 10, 2026

Anthropic is taking the Trump administration to court after the administration designated the AI company a security threat and tried to cancel its federal contracts. The move brings the ongoing battle between the two sides to new heights. WSJ’s Keach Hagey explains Anthropic’s ‘red lines’ at the heart of the saga, how rival OpenAI stepped in to make its own deal with the Pentagon, and what all of this could mean for the future of Anthropic’s business. Jessica Mendoza hosts. Further Listening: - Anthropic’s Pentagon Problems - The AI Economic Doomsday Report That Shook Wall Street. Sign up for WSJ’s free What’s News newsletter. Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
The U.S. attacks on Iran have unfolded at unprecedented speed and precision.
The Department of War launched Operation Epic Fury, the most lethal, most complex, and most precise aerial
operation in history.
That's thanks in part to a cutting-edge weapon never before deployed on this scale,
artificial intelligence. Defense Secretary Pete Hegseth has stressed the creation of,
quote, an AI-first warfighting force.
And the tech the Pentagon has relied on most heavily is Anthropic's.
But our colleague Keach Hagey says there's irony in the fact that the Pentagon has used Anthropic's Claude in Iran.
So literally, hours before those strikes were ordered, the president directed the federal government to stop working with Anthropic.
President Trump also called Anthropic, quote, left-wing nut jobs.
The Trump administration is trying to cancel its contracts with Anthropic and to designate it as a
security threat that no federal agency can do business with.
Yesterday, Anthropic struck back, suing the administration.
At the heart of the dispute, which has been going on for months, is a fight over values
and how AI can be used in warfare.
And you've been reporting on this saga, right, between Anthropic and the Pentagon.
How would you characterize the drama here?
It's completely unprecedented.
I've been asking people around D.C. over the last week, have you ever
seen anything like this, where a vendor to the U.S. government became this public a punching
bag in this way? And no one can think of an example. I think it really shows us that AI is a different
kind of technology, right? It asks new questions of our society that we just have not worked out
yet. Welcome to The Journal, our show about money, business, and power. I'm Jessica Mendoza.
It's Tuesday, March 10th. Coming up on the show, the battle over AI in warfare.
This episode is brought to you by Volkswagen.
Want to go electric without sacrificing fun?
The Volkswagen ID4 is all-electric and thoughtfully designed to elevate your modern lifestyle.
It's fun to drive with instant acceleration that makes city streets feel like open roads.
Plus, a refined interior with innovative technology always at your fingertips.
The all-electric ID4.
You deserve more fun.
Visit vw.ca to learn more.
The SUV from VW, German-engineered for all.
Can you introduce us to the two men at the center of this fight,
Secretary of Defense Pete Hegseth
and Anthropic CEO Dario Amodei?
It would be hard to imagine two more different men.
Pete Hegseth, famously, is a former Fox News host.
He has sort of made his brand as being a critic of
wokeness and DEI in the military.
For too long, we've promoted too many uniform leaders
for the wrong reasons, based on their race, based on gender quotas, based on historic so-called
firsts.
He has this very sort of brusque take-no-prisoners style.
And Secretary Hegseth has made using AI and the sort of innovation of Silicon Valley
as a sort of a key touchstone of his strategy for the department.
Dario Amodei, on the other hand, is known for being more of a philosopher.
He's a lifelong vegetarian who writes in-depth about AI safety.
He's a scientist.
He has this curly hair that he sort of twirls when he thinks.
He loves to communicate in these long, and I mean long, philosophical tracts.
If you look at the situations I found myself in and the situations humanity has found themselves in,
like there's so many times where it's, you know, very hard and there's this enormous suffering.
And yet there's also this incredible, you know, this incredible, this incredible inspiration that kind of...
He also sort of believes in hashing out these things sort of in public, right?
Let's have a public thinking through.
Transparency.
Exactly.
Amodei co-founded Anthropic with the aim of prioritizing AI safety over business goals.
The company has written a moral constitution into its AI models.
So Anthropic was founded by a bunch of dissidents from OpenAI that broke off to create
their own company in early 2021.
So there is not a lot of love lost between these folks going back a while, and that has only
become more pointed as they have become real rivals in the business realm.
One way to get ahead of the competition, scoring a contract with the government.
In 2025, Anthropic's Claude became the first large language model cleared to work with classified
material. But late last year, the Pentagon began discussing adding new language to its contract with
Anthropic, language that would allow the company's tech to be used for, quote, all lawful
scenarios. And this really came down to something called the usage agreement, right? They didn't
want anything in the Pentagon's usage agreement with the tech providers that tied their hands
in any way. And it was really that usage agreement that was a sticking point between
Anthropic and the Pentagon. What is at the heart of that
dispute, would you say? At the very heart are two issues that Anthropic says are its red lines,
that it is not willing to have its technology be used for. That is autonomous weapons and mass
domestic surveillance. And these are two things that Dario has written in his essays. He thinks
would be very problematic uses of the technology that his company has been developing, and that
were the sticking points in the negotiations with the Pentagon.
And ultimately, it was the hill of mass domestic surveillance
that Anthropic chose to die on.
They were worried that, you know,
things may become possible with AI that weren't possible before.
An example of this is something like taking data collected by private firms,
having it bought by the government,
and analyzing it en masse via AI.
Are those practices legal?
The issue is, what this whole
fight has shown is that the law has not caught up to the state of the technology. So we're learning
that a lot of what we think the government can't do, it's not because it's illegal, it's just because
it can't technologically do it. Like, for example, if we think about the Snowden revelations,
right? Edward Snowden, the 29-year-old CIA contractor responsible for what's being called
one of the biggest intelligence leaks in U.S. history. Among the things that were so
controversial in those revelations were the call logs, right, between people. That was something
the NSA had been collecting. And it was a major revelation that, oh, my gosh, every single, you know,
call to and from is being collected by the government. But what we also learned is the government
has no ability to actually look through that and make any sense of it, right? It's just too much data.
Now with AI, all of a sudden, things like that might be legible in a way that they were not before.
And I believe the folks at Anthropic believe we do not currently have the laws in place to sufficiently protect us against stuff like that.
And because of the questions around the legality of it all, Anthropic wanted its usage agreement with the Pentagon to specify in writing that its tech wouldn't be used for fully autonomous weapons or for mass domestic surveillance.
And the Pentagon said, who are you to tell us how to use your tech?
And I think this was really about setting a precedent for the future, right?
They just did not want this concept that the tech companies could tell them what to do to even be in the conversation.
The Pentagon gave Anthropic a deadline to come to a new agreement without Anthropic's red-line exceptions.
That deadline was Friday, February 27th.
If Anthropic didn't agree, the Pentagon would cancel their $200 million contract.
And the Pentagon didn't stop there.
They also threatened to label the company a supply chain risk.
which could limit Anthropic's ability to work with any companies that also work with the Defense Department.
That could torpedo many of Anthropic's business prospects.
As the deadline loomed, another AI company stepped in.
OpenAI, the maker of ChatGPT and Anthropic's long-time rival.
OpenAI CEO Sam Altman and Anthropic's Dario Amodei haven't exactly been the best of friends.
There are a couple of moments in the last couple of months that really crystallized their relationship.
One of them was the Super Bowl ad that Anthropic ran against OpenAI, which didn't say the word OpenAI, but it was very pointedly against the fact that OpenAI had announced they were going to do ads.
And then, of course, there was this epic moment when they were in India together and everyone was joining hands and those two guys were on stage and they just would not hold hands.
But I think that tells you everything about their relationship, right?
It's gotten more and more tense.
And so what Sam Altman did was begin talking to the Pentagon.
And in fact, what Sam Altman said was, okay, let's talk about that.
But also, these threats that you are hurling right now at Anthropic to label them a supply chain risk are very bad for the country and bad for the industry.
So those are like not appropriate threats.
Please don't do that.
We can find another way.
On Friday, February 27th, the same day as Anthropic's
deadline, Sam Altman announced that OpenAI now had its own deal to work on classified material
with the Pentagon. OpenAI actually says it has the same red lines as Anthropic. They don't want
their models being used for mass surveillance or autonomous weapons. But rather than rely on the usage
agreement to enforce those red lines, OpenAI said they'd build protections into the tech.
OpenAI tried to approach it technically, saying that they would build a safety layer that would make it
so that the model just wouldn't be able to do bad things if you asked it to do them.
So rather than have a usage agreement that said no domestic surveillance,
they would put a safety layer that would somehow make this impossible.
What was the reaction to OpenAI's deal with the Pentagon?
The company definitely took some heat for it.
The timing seemed opportunistic, so much so that Sam Altman came out
and kind of apologized for it, saying it did look opportunistic and sloppy. At Anthropic, Amodei wrote an internal memo addressing its rival's deal.
Dario Amodei called those things safety theater in his scathing memo sort of responding to this whole
moment. They sort of think that that's not enough. They think that part of the problem is that,
okay, maybe the laws right now don't allow mass domestic surveillance, but there's nothing to
keep Secretary Hegseth from changing the laws or changing the interpretation
of the laws in the future,
and they wanted something more ironclad.
Amodei also said in that memo
that the Pentagon is targeting Anthropic
because it hasn't, quote,
given dictator-style praise to President Trump.
After the memo leaked last week,
Amodei apologized for what he wrote in it.
But the Pentagon had already
officially designated Anthropic
as a supply chain risk.
Secretary Hegseth was true to his word
and did designate them a supply chain risk.
It means that the government has determined
that it's not safe to use your technology
and that no entity in the Pentagon is allowed to use it.
The Pentagon's really, really big.
So that's a lot of potential customers that you lose.
The administration said agencies would have six months
to transition to other AI models.
The move could have far-reaching consequences
for Anthropic's partners and investors,
especially those who are also government contractors,
including Lockheed Martin, Google, and Microsoft.
What that could mean for the future of Anthropic's business is next.
The game begins in three, two, one.
Ready or Not: Here I Come.
Only in theaters March 20th.
After surviving one deadly game, Grace and her sister Faith must now face off against four rival families
in a fresh round of bloody games filled with more action, scares, laughs, and combustions.
Starring Samara Weaving, Kathryn Newton, Sarah Michelle Gellar, and Elijah Wood.
Ready or Not: Here I Come.
Only in theaters March 20th.
Get tickets now.
In communities across Canada,
hourly Amazon employees earn an average of over $24.50 an hour.
Employees also have the opportunity to grow their skills and their paycheck
by enrolling in free skills training programs for in-demand fields,
like software development and information technology.
Learn more at aboutamazon.ca.
Anthropic is one of the first American companies that's ever been designated a supply chain risk.
It's a label that's typically reserved for companies from countries that the U.S. considers foreign adversaries.
It feels very punitive and inappropriate, given the amount that we've done for U.S. national security.
In an interview on CBS, Amodei described the designation from the Department of War as retaliatory.
I would have disagreed, but I would have respected them if they said,
DOW, we don't want to work with Anthropic. Our principles are not aligned with yours. We're going to go with
one of the other models. But they've both extended that to parts of the government beyond the DOW and tried
to punitively revoke our contracts beyond the DOW.
An Anthropic spokesperson also said the company is committed to pursuing resolution and that
the lawsuit, quote, does not change our longstanding commitment to harnessing AI to protect our national
security. But this is a necessary step to protect our business, our customers, and our partners.
How does the Pentagon justify the supply chain risk designation?
I think the Pentagon's view is that anytime you have ideology begin to affect how technology
will be used in a national defense kind of setting, that that is a slippery slope and
dangerous on its face. But a lot of legal experts and defense experts are really skeptical that
this designation could hold up in court. Yesterday, in its lawsuit against the Trump administration,
Anthropic argued that the designation went beyond the administration's statutory authority.
In its complaint, the company asked the court to declare the moves unlawful and said that the
case is critical for other businesses that may disagree with the government. Shortly after
Anthropic filed its lawsuit, 37 AI researchers
at OpenAI and Google filed a brief urging the court to side with Anthropic,
highlighting how the fight has rippled through Silicon Valley.
A White House spokeswoman said, quote,
President Trump will never allow a radical left woke company
to jeopardize our national security by dictating how the greatest and most powerful military in the world operates.
The Department of Defense declined to comment.
So what could all of this mean for Anthropic's future business?
It's going to be, I think, harder than they thought for them to stick to their values and continue to grow.
They have made the center of their strategy enterprise business.
If you are an enterprise-focused company, you really need to be able to work with the government.
That's one of the biggest enterprises there is.
So it does seem like they've really limited a major section of their potential customer base
by alienating the government to this extent.
Some companies, like Microsoft and Google, have said they continue working with
Anthropic on commercial projects that don't involve the Pentagon.
Is there any upside for Anthropic here to continue to sort of stand its ground?
Oh, I think there's been tremendous upside for Anthropic.
They have kind of touched a nerve with consumers and the broader public on this idea that
they're willing to stand their ground to defend civil liberties.
They have seen their app downloads shoot to number one in the App Store, and really
gotten applause from all kinds of different directions from folks who appreciate them standing
their ground. What sort of precedent does this set when it comes to how companies work with the
government, and particularly, you know, the Department of Defense, the Pentagon?
I think this entire fight was about setting precedents. And we've heard that what this did
is scared the living daylights out of every other potential vendor who is frightened of crossing
the Pentagon. So the Pentagon
just made an example of Anthropic, right? And that means that it's far less likely that
others will raise their voices and try to draw red lines and try to push back, because
the overwhelming force that came in the other direction was really something to behold.
What this whole thing really shows is that we do need new laws. We do clearly need some way
of sorting out this issue of mass domestic surveillance,
where the law has not caught up with the technology.
And this is sort of forcing this to the fore,
maybe faster than we would otherwise have it.
Right.
It's not something that, like, individual companies can sort out one by one with the Pentagon,
or the Pentagon gets to dictate what is and isn't legal.
Right.
I mean, the issue is, they're like, all lawful uses,
and Anthropic is basically saying,
well, the law, as currently written, is insufficient.
That's the problem.
What are you keeping an eye on next?
I'm very interested in this question of what's going to happen in the interim
when the Pentagon has to use Claude because it's so integrated.
And yet the political fight has pretty much already happened, right?
Like the guns have gone off, the memos have been sent, the insults have been hurled.
I'm not sure there's any putting the toothpaste back in the tube.
I'm really curious how this interim process is going to work
while they try to get replacements like OpenAI up and running.
Especially given that we're in the middle of a pretty big conflict now.
It's not just hypothetical anymore.
Exactly.
That's all for today, Tuesday, March 10th.
The Journal is a co-production of Spotify and the Wall Street Journal.
Additional reporting in this episode from Dov Lieber,
Daniel Michaels, Amrith Ramkumar, and Marcus Weisgerber.
Thanks for listening. See you tomorrow.
