Today, Explained - AI goes to war
Episode Date: March 4, 2026. The US military is using AI to wage war while AI companies are fighting about how their tech is used. This episode was produced by Hady Mawajdeh and Peter Balonon-Rosen, edited by Jolie Myers, fact-checked by Andrea López-Cruzado, engineered by David Tatasciore, and hosted by Sean Rameswaram. Defense Secretary Pete Hegseth. Photo by Brendan Smialowski / AFP via Getty Images. Listen to Today, Explained ad-free by becoming a Vox Member: vox.com/members. New Vox members get $20 off their membership right now. Transcript at vox.com/today-explained-podcast. Learn more about your ad choices. Visit podcastchoices.com/adchoices
Transcript
This is what President Trump had to say about why the United States is at war with Iran.
We sought repeatedly to make a deal.
We tried.
They wanted to do it.
They didn't want to do it.
Again, they wanted to do it.
They didn't want to do it.
They didn't know what was happening.
Not the best explanation for a war of choice, sir.
I'm personally a do-my-own research kind of guy, but let's ask AI why we're at war with Iran.
Chat?
The United States attacked Iran in 2026 because it claimed Iran posed an imminent threat,
particularly due to Iran's advancing nuclear program and missile capabilities,
and aimed to reduce Iran's ability to project power in the region.
Wow, that was a better explanation. Thanks, Chat.
Fitting that AI was more clear than the president of the United States,
because it turns out the United States is using AI to fight the war in Iran.
The future of war is AI, and that future is now here.
I'm Sean Rameswaram, and that's coming up on Today, Explained from Vox.
Paul Scharre knows a lot about AI and how our military's using it.
He's the author of Four Battlegrounds: Power in the Age of Artificial Intelligence.
We've seen a trajectory of the military adopting AI tools over the last decade,
as AI has continued to progress.
What's newer are large language models, like ChatGPT and Anthropic's Claude,
that it's been reported the military is using in operations in Iran.
And so that's a pretty significant development that we're seeing.
And the people want to know how Claude or ChatGPT might be fighting this war.
Do we know?
War in Iran?
That's a great idea.
Let me help you with that.
Well, we don't know yet.
You know, we can make some educated guesses based on what the technology could do.
AI technology is really great at processing large amounts of information.
I literally love processing.
The U.S. military has hit over a thousand targets in Iran.
As you see very well, they have no Navy.
It's been knocked out.
They have no Air Force that's been knocked out.
They have no air detection that's been knocked out.
Their radar has been knocked out.
They need to then find ways to process information about those targets.
So satellite imagery, for example, of the targets they've hit.
Just about everything's been knocked down.
Looking at new potential targets, prioritizing those, processing information,
and using AI to do that at machine speed rather than human speed.
Human so slow.
Mwah ha ha.
Cheers.
Do we know any more about how the military may have used AI in, say, Venezuela
on the attack that brought Nicolás Maduro to Brooklyn of all places?
Because we've recently found out that AI was used there too.
So what we do know is that Anthropic's AI tools have been integrated into the U.S. military's classified networks.
And so they can process classified information, process intelligence, to help plan operations.
From writing emails to raiding enemy capital cities.
The Wall Street Journal reports that the Pentagon used Anthropic's AI model Claude as part of its operation to capture Venezuelan President Nicolás Maduro.
There's no suggestion that Claude was actually firing any of the missiles or manning any of the machine guns.
Yeah, we've had these sorts of tantalizing details, okay, that these tools were used in the Maduro raid.
We don't know exactly how.
So we've seen AI technology in a broad sense used in other conflicts as well, in Ukraine and in Israel's operations in Gaza, to do a couple different things.
One of the ways that AI is being used in Ukraine, in a different kind of context,
is putting autonomy onto the drones themselves.
The drone now flies on autopilot mode using our software.
We assigned it with a mission and it built its own flying route.
Giving the munition instructions on where it needs to go and what it needs to look for.
And so when I was in Ukraine, one of the things that I saw Ukrainian drone operators
and engineers demonstrate is a little box, about the size of a pack of cigarettes, that you could put onto a small drone so that
once the human locks onto a target, the drone can then carry out the attack
all on its own. And that has been used in a small way. It's not necessarily widespread use
in Ukraine today. So we're seeing AI begin to creep into all of these aspects of military
operations in intelligence, in planning, in logistics, but also right at the edge in terms of
you know, being used where drones are completing attacks.
Okay, so we know a little bit more about how this technology was used in Ukraine.
How about with Israel and Gaza?
So there's been some reporting about how the Israel Defense Forces have used AI in Gaza,
not necessarily large language models, but machine learning systems that can synthesize and fuse large amounts of information,
geolocation data, cell phone data and connections, social media data, to bring this together and
process all of that information very quickly to develop targeting packages, particularly in the early
phases of Israel's operations.
Which suggests specific possible targets, possible munitions, warnings.
This system produces targets in Gaza faster than a human can.
But it raises thorny questions about human involvement in these decisions.
And one of the criticisms that had come up was that humans were still approving the
targets, but that the volume of strikes and the amount of information that needed to be processed
was such that maybe human oversight in some cases was a little bit more of a rubber stamp.
The question is, where does this go?
And are we headed in a trajectory where over time humans get pushed out of the loop
and we see down the road fully autonomous weapons that are making their own decisions about whom
to kill on the battlefield?
That's the direction things are headed.
So, you know, no one's unleashing the swarm of killer robots today, but the trajectory is in that direction.
And maybe I'll make a comparison here to self-driving cars, where car companies can map the environment down to the centimeter.
They know the height of the curbs.
They know where the stoplights are.
They can test self-driving cars in the actual environment they're going to be in.
And when they do something weird that doesn't work, they can update the algorithm.
We don't know where future wars are going to be fought.
It's an adversarial environment.
We don't know what the enemy is going to do.
I mean, the U.S. military is finding this out right now in its operations against Iran.
They're retaliating against U.S. bases, against Gulf states, against Israel, using drones and missiles.
And now we're in a phase in the Iran conflict where things become super unpredictable.
People do an okay job of adapting to that unpredictability.
AI is not so great and sometimes does some strange things.
Because you drew a parallel to self-driving cars: you know, we've made an episode about self-driving cars before in which our guest said something like, well, if you're worried about self-driving cars, what you should really be worried about is humans.
Thinking about Iran, we saw reports that a school was bombed in Iran where maybe 160 were killed, a lot of them young girls, children.
Presumably, that was a mistake made by a human. Do we think that autonomous weapons will be
capable of making that same mistake, or will they be better at war than we are?
This question of will autonomous weapons be better than humans or not is like one of the core
issues of the debate surrounding this technology, because proponents of autonomous weapons will
say, look, people make mistakes all the time, and machines might be able to do better.
Part of that depends on how much the militaries that are using this technology are trying really
hard to avoid mistakes. If militaries don't care about civilian casualties, then AI can allow
militaries to simply strike targets faster, in some cases even commit atrocities faster, if that's
what militaries are trying to do. I think there is this really important potential here to use
the technology to be more precise. And if you look at the long arc of precision-guided weapons,
let's say over the last century or so,
it's pointed towards much more precision in warfare.
So if you look at the example of the U.S. strikes in Iran right now,
it's worth contrasting this with the widespread aerial bombing campaigns
against cities that we saw in World War II, for example,
where whole cities were devastated in Europe and Asia
because the bombs just weren't precise at all.
And so air forces dropped just massive amounts of ordnance
to try to hit even a single factory.
The possibility here is that AI could, over time, make militaries better at hitting military targets and avoiding civilian casualties.
Now, if the data is wrong and they've got the wrong target on the list, they're going to hit the wrong thing very precisely.
And AI is not necessarily going to fix that.
On the other hand, I saw a piece of reporting in New Scientist that was rather alarming.
The headline was "AIs can't stop recommending nuclear strikes in war game simulations."
I don't know if you saw that one.
They wrote about a study in which models from OpenAI, Anthropic, and Google opted to use nuclear weapons in simulated war games in 95% of cases, which I think is slightly more than we humans typically resort to nuclear weapons.
Should that be freaking us out?
It's a little concerning.
So, like, I think, happily, as near as I could tell,
no one is connecting large language models to decisions about using nuclear weapons.
But I think it points to some of the strange failure modes of AI systems.
So they tend towards sycophancy.
They tend to simply just agree with everything that you say.
And I think anyone that's interacted with some of these models,
they can do it to the point of absurdity sometimes where, you know,
oh, that's brilliant, the model will tell you.
That's a genius thing.
War in Iran.
That's a great idea.
Let me help you with that.
You know, and you're like, I don't think so.
And that's a real problem when you're talking about intelligence analysis.
Do we think, like, ChatGPT is telling Pete Hegseth that right now?
I mean, I hope not.
But, you know, his people might be telling him that.
You know, so you sort of end up with this ultimate yes-man phenomenon with these tools, where it's not just that they're prone to
hallucinations, which is sort of a fancy way of saying they just make things up sometimes.
But also, the models could really be used in ways that either reinforce existing human biases,
that reinforce biases in the data, or that people just trust them, that there's sort of
this veneer of, oh, the AI said this, so it must be the right thing to do.
And people put faith in it, and, you know, we really shouldn't.
We should be more skeptical.
Be more skeptical, says Paul Scharre.
He's the executive vice president at the Center for a New American Security.
There are two big stories right now in the world of AI and war.
One is the one we just talked about.
The other is the drama between Claude and Pete.
That drama is forthcoming on Today, Explained.
Support for this show comes from IM8.
If you're having a hard time finishing your day strong,
then you might want to check out
IM8's Daily Ultimate Essentials.
It's a daily all-in-one wellness drink that gives my body the support it needs,
without juggling a bunch of different supplements.
IM8's Daily Ultimate Essentials is a go-to for getting the benefits of 16 different supplements
in one tasty drink.
Co-founded by David Beckham and crafted with insight from experts at Mayo Clinic,
Cedars-Sinai, and a former NASA chief scientist, it can simplify your wellness routine
and make it easier to support your health.
This drink is loaded with 92 nutrient-rich ingredients,
such as vitamins, minerals, adaptogens,
CoQ10, MSM, and pre-, pro-, and postbiotics.
It's designed to help you feel good from the inside out.
Plus, it's vegan, gluten-free, and non-GMO,
so you can feel confident about what you're putting in your body,
making it a solid choice if you're focused on your health.
Feel your best self every day with IM8.
Go to IM8.com slash explained and use code explained
for a free welcome kit, five free travel sachets,
plus 10% off your order.
That's I-M-8-H-E-A-L-T-H dot com
slash explained.
Code Explained.
For a free welcome kit,
five free travel sachets,
plus 10% off your order.
I-M-8Health.com slash explained.
Code explained.
These statements have not been evaluated
by the Food and Drug Administration.
This product is not intended to diagnose,
treat, cure, or prevent any disease.
Support for Today, Explained comes from Rippling.
No one likes running a bunch of disconnected tools to do simple tasks.
So if your company is using an all-in-one platform, it should actually be able to do it all.
Rippling says that their platform can do it all.
It's a unified platform for global HR, payroll, IT, and finance.
With Rippling, they say workflows that normally bounce across multiple tools and departments
can all just happen in one place automatically.
Say an employee gets promoted or moves.
Rippling can update payroll taxes, hand out new app permissions, ship a new laptop,
issue a new corporate card, and assign required manager training all in one place without
you having to put in the legwork. With Rippling, you can run your entire HR, IT, and finance
operations as one, or pick and choose the products that best fill in the gaps in your software stack.
So if you or your company want to run the backbone of your business on one unified platform with
people at the center, you can go to rippling.com slash explained and sign up today. That's r-i-p-p-l-i-n-g.com
slash explained to sign up. Support for Today, Explained comes from Bombas. Perhaps you want to get
in shape this year. Bombas wants to tell you about the all-new Bombas sports socks, engineered with
sport-specific comfort for running, golf, hiking, skiing, snowboarding, and all sport. Meanwhile,
for the loungers among us, Bombas has non-sport footwear available.
But Bombas doesn't just offer sport and non-sport socks.
They also offer super-soft base layers that they claim will have you rethinking your whole wardrobe,
plus underwear and T-shirts: flexible, breathable, buttery-smooth, premium everyday go-tos
they say you won't want to leave the house without.
Here's Nisha Chital.
I've been wearing Bambas for several years now.
I have several pairs.
My whole family loves to wear Bombas.
I have several pairs of Bombas ankle socks, and I have some no-show socks as well that are great for things like loafers and ballet flats.
For every item you purchase, Bombas says an essential clothing item is donated to someone facing housing insecurity.
One purchased, one donated, over 150 million donations.
And counting, I'm told. You can go to bombas.com slash explained and use the code explained for 20% off your first purchase.
That's B-O-M-B-A-S dot com slash explained, code explained at checkout.
At Desjardins, our business is helping yours.
We are here to support your business through every stage of growth,
from your first pitch to your first acquisition.
Whether it's improving cash flow or exploring investment banking solutions,
with Desjardins Business, it's all under one roof.
So join the more than 400,000 Canadian entrepreneurs who already count on us,
and contact Desjardins today.
We'd love to talk.
Business.
This is Today, Explained.
Pete Hegseth, our Secretary of Defense, and Claude, Anthropic's large language model, got in a big fight last week.
We asked Axios tech policy reporter Maria Curi what happened.
So this actually goes back to before the Pentagon-related dispute.
You know, you have the CEO of Anthropic, Dario Amodei, really positioning himself as
the safety first CEO.
One way to think about
Anthropic is that it's a little
bit trying to put bumpers or
guardrails on that experiment, right?
Because if we don't, then you could end up
in the world of like the cigarette companies
or the opioid companies where they knew
there were dangers and they didn't talk
about them and certainly did not prevent them.
And he has been very vocal.
He's posted on X
and talked a lot about how he does think
there has to be a federal standard
to regulate artificial intelligence.
And that kind of put him at odds with David Sacks,
the guy that's running AI for President Trump in the White House.
They've gotten into Twitter spats before.
And so it was kind of a long time coming
before this Pentagon thing blew up.
This is essentially a situation
where the Pentagon for a while
has been trying to negotiate terms
with all of the AI labs
to bring them into their classified systems
under this standard of all lawful purposes.
And Anthropic had kind of said, you know,
there are two specific scenarios
in which we are not comfortable with the all-lawful purposes standard.
The first one is this issue of domestic mass surveillance,
and the second one is autonomous weapons.
It doesn't show the judgment that a human soldier would show.
Friendly fire or shooting a civilian or just the wrong kind of things.
We don't want to sell something that we don't think is reliable,
and we don't want to sell something that could get our own people killed
or that could get innocent people killed.
That was not taken well by the Pentagon.
Defense Secretary Pete Hegseth is demanding that San Francisco-based Anthropic
drop a number of safeguards or risk losing its $200 million contract.
We do have a statement from the Pentagon,
and they're telling us that they are currently, quote,
reviewing its relationship with Anthropic, saying, quote,
our nation requires that our partners be willing to help our war fighters win in any fight.
We've been talking to senior officials throughout this reporting process, and they really view it as a private company telling the government how to protect the country and how to do national security and conduct operations.
And essentially what we know is that there were phone calls happening between the Pentagon and Anthropic,
nailing down final language around this contract when all of a sudden Pete Hegseth tweeted
that he would be designating Anthropic a supply chain risk.
Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity
with Anthropic.
President Trump posted on Truth Social.
The left-wing nut jobs at Anthropic have made a disastrous mistake trying to strong arm
the Department of War and force them to obey their terms of service instead of our
Constitution.
Their selfishness is putting American lives at risk, our troops in danger, and our national security in jeopardy.
And the entire federal government was going to have to get rid of Anthropic.
Essentially, Anthropic had been asking for a prohibition in the Pentagon contract
on the collection of commercially acquired information and data.
And this goes to the concern around domestic mass surveillance.
The idea here is that, according to Anthropic, the law has not caught up to artificial
intelligence, and you could have a situation where it's perfectly legal for the Pentagon to collect
commercially acquired information that could include, you know, financial information purchased
from data brokers, web browsing data, and beyond that, voter registration rolls, social media posts,
whether or not you attended a protest, concealed carry permits. There's all sorts of data out
there that the government can collect in a perfectly legal way, and you could see how artificial
intelligence could make it much quicker, much more efficient to have a continuous collection of
that data to really pinpoint and target individuals. That was a concern. And so they were asking for
this specific language. And they thought they were about to get it when all of a sudden Pete Hegseth
posted on X. Why did they think they were going to get it? Well, you know, they thought that this
was going to be the language, the commercially acquired information coupled with the all lawful purposes.
They thought that that was just going to be enough, but the Pentagon actually came back and said, no, that's not something that we are comfortable doing, which begs the question: how did this OpenAI deal then pass muster?
Ooh, that's a spoiler, because what happens is the Pentagon drops Anthropic on Friday evening, and then within, what, like, minutes they pick up OpenAI?
That's right.
So they pick up OpenAI for a contract that very quickly, you know, everybody was poking holes in on X.
I don't see this as a meaningful improvement to the contract.
There still seem to be some big shortcomings slash loopholes.
I agree it's better, but I think the government can drive a truck through the intentionality language.
And we heard from, you know, people familiar with the negotiations too, like, this isn't going to actually prevent
domestic mass surveillance from happening. It's still too risky. So you had Sam Altman on X trying to
field all of this criticism. He did an ask-me-anything on Saturday night where he had thousands and
thousands of questions from people trying to get answers. How did you go from a tool for the
betterment of the human race to let's work with the Department of War? If the government comes back with
a memo saying that in their view, mass domestic surveillance is legal, do you do that?
Were the terms that you accepted the same ones Anthropic rejected?
And so you fast forward to Monday and you have Sam Altman saying, okay, we've gone back to the drawing board.
We shouldn't have rushed to get this out on Friday. We were genuinely trying to de-escalate things and avoid a much worse outcome.
But I think it just looked opportunistic and sloppy.
We need to essentially add some language to this contract to give people more assurances that we are not going to conduct
domestic mass surveillance. And what they added was that commercially acquired information cannot be
collected and is prohibited, which are the exact words that Anthropic was looking to have in
their contract. So like so many other things with this administration, this ends up feeling
rather confusing and inconsistent, because they bail on Anthropic because Anthropic has these ideals,
these standards. They bounce to OpenAI, but OpenAI is working out a deal with the
exact same standards, basically?
Well, now that we have the specific language and the legalese, it's looking like it's
the exact same standards.
You know, we've also heard from the Pentagon, from Pentagon officials, saying, like,
we were able to do this with Sam Altman because he's reasonable.
This was a reasonable negotiation.
And Anthropic has personal vendettas.
And so to your point about inconsistencies, absolutely personalities are a factor here.
And it's not all just going to come down to legalese and these two standards.
Did the Pentagon just go exclusive with OpenAI and Sam Altman?
Because there's been reporting that Anthropic was actually used in these attacks on Iran that followed this drama that we had last Friday.
Yeah.
So Anthropic's is the longest-standing AI model being used in the Pentagon for classified purposes.
We've established that it was used in the Maduro raid.
We've established that it was used in Iran.
They're very useful to the Pentagon.
You know, you have senior defense officials describing how much of a pain in the ass it would be to actually get rid of Anthropic.
And reportedly, they didn't.
No, they haven't yet.
They were given this six-month off-ramp for Anthropic to be phased out and for another AI lab to be phased in.
I think right now people are having these questions of, was this all just Sam Altman trying to elbow out
his competitor from the Pentagon?
I think it's too soon to tell.
So I think what this tells us is that in the absence of a law that actually contemplates
artificial intelligence, we are left as a broader country and society relying on either
Pete Hegseth's Department of War deciding how this technology is going to be used, or any one
individual company, and Anthropic at the end of the day is a company. And so you have all of these
companies also saying, we actually do think that a law
should be passed. We would love for Congress to actually just set the rules of the road, because
we have our own competitive pressures that we're also dealing with. Now whether or not Congress
is going to pass a law around this, I don't know. They've been asleep at the wheel on almost everything.
Congress has been asleep at the wheel on almost everything, says Maria Curi from Axios.
Peter Balonon-Rosen and Hady Mawajdeh produced our show today.
Jolie Myers edited, Patrick Boyd and David Tatasciore mixed.
Andrea López-Cruzado was on the fact-check.
I'm Sean Rameswaram. Today, Explained.
