Big Technology Podcast - Pentagon Insider: What's Next For Anthropic and The Department of War — With Michael Horowitz
Episode Date: March 4, 2026
Michael Horowitz is the former deputy assistant secretary of defense for force development and emerging capabilities at the Department of Defense, and currently a professor at the University of Pennsylvania. Horowitz joins Big Technology to discuss the Anthropic–Pentagon rupture and what it signals about how the U.S. government wants to use frontier AI. Tune in to hear his inside view on how models like Claude actually get deployed in defense workflows, why a contract fight over “mass surveillance” language escalated, and what the trust breakdown says about the future of AI partnerships with the state. We also cover autonomous weapon systems vs. “fully autonomous weapons,” what today’s AI can and can’t do on the battlefield, and how AI is likely to reshape warfare over time. Hit play for a clear-eyed look at where Silicon Valley and the national security establishment collide—and what happens next. --- Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. Want a discount for Big Technology on Substack + Discord? Here’s 25% off for the first year: https://www.bigtechnology.com/subscribe?coupon=0843016b Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
Where do Anthropic and the Department of War go from here now that their relationship exploded?
Let's talk about it with an actual expert who's designed AI policy for the Pentagon,
especially regarding weapons systems.
That's coming up right after this.
Every family tree holds extraordinary stories, especially those of the women who shaped who we are.
In honor of International Women's Month, Ancestry invites you to shine a light on their legacy.
Until March 10th, enjoy free access to over 4 billion family history records and discover where they lived, the journeys they took, and the legacy they left behind. Start with just a name or place and let our intuitive tools guide you.
Visit Ancestry.ca to start today. No credit card required. Terms apply.
Getting ready for a game means being ready for anything, like packing a spare stick.
I like to be prepared. That's why I remember 988, Canada's Suicide Crisis Helpline.
It's good to know just in case. Anyone can call or text for free confidential support from a trained responder anytime.
988 Suicide Crisis Helpline is funded by the Government of Canada.
Welcome to Big Technology Podcast, a show for cool-headed and nuanced conversations of the tech world and beyond.
Well, many of you have asked for an expert
who's worked intricately on matters
that might involve the Anthropic-Pentagon dustup,
and we definitely have the right person for you today.
Professor Michael Horowitz is here with us.
He's a professor of political science and economics
at the University of Pennsylvania.
He's also a senior fellow for technology and innovation at the Council on Foreign Relations.
And importantly, he was the Deputy Assistant Secretary of Defense for Force Development
and Emerging Capabilities at the Department of Defense.
And as I said in the intro, he worked on policy at the Pentagon, especially on weapon systems.
So this is going to be a discussion that will take you deep inside what might actually be the mindset of the Pentagon
and where we will end up with this dustup with Anthropic.
Professor, great to see you.
Welcome to the show.
Thank you so much for having me.
Looking forward to the conversation.
We have been surmising what might actually be the meat of the matter
between Anthropic and the Pentagon.
And I've gone back and forth.
On Friday, I thought maybe it was a marketing move by Anthropic.
Then it became clear that it's a little bit more serious than that,
now that they've been deemed a supply chain risk.
And our audience discussion has basically centered around three different potential scenarios.
I want to throw them at you and see which one you think is closest to the truth.
And by the way, here's what happened, for those who are just tuning in,
although I'm sure many of you are caught up, Anthropic and the Department of War,
they had this contract where the Department of War would use their technology,
and Anthropic was looking for a carve out saying that we don't want our technology used
for mass surveillance or autonomous weapons,
and then that blew up. The Pentagon not only canceled the contract, but declared them a supply chain risk, which we'll get into.
So here's my three options of what's going on in this conflict.
One is maybe it's just a culture clash over really inconsequential details, and it's just an ego blow-up.
The second: is it potentially Anthropic CEO Dario Amodei valiantly standing up against mass surveillance and the potential of mass surveillance through AI?
Or third, is what's really happening the Department of War violently pushing back against a private company dictating to it how to run wars?
What do you think is closest to the truth in this scenario?
I mean, there's probably like a little column A, little column B, little column C going on,
like fundamentally.
But to me, this is about personalities and politics masquerading as a policy dispute,
although it raises really important policy issues.
And let me tell you what I mean by that.
If you look at the relationship between Anthropic and the Pentagon,
Anthropic was the first frontier AI lab willing to do classified work to support American national security.
So starting right there, like Anthropic was ready to be behind the scenes with the Pentagon in a way that other frontier AI labs weren't ready to do yet.
And there was no dispute between Anthropic and the Pentagon about any current projects that Anthropic was doing.
It wasn't like the Pentagon asked Anthropic to do something and Anthropic said no or had hesitations.
It also seems as though there were not any upcoming projects that the Pentagon was going to ask Anthropic to do that Anthropic had questions or concerns about.
It seems like this kind of started after the Maduro operation, when the United States plucked the leader of Venezuela from that nation and brought him back to the United States. Somebody from Anthropic basically called somebody from Palantir and said, like, hey, was our tech involved there? And that's because the way that Anthropic's technology is often integrated within the Pentagon is through a Palantir product called Maven Smart System. And so Anthropic calls up Palantir and is like, hey, was our tech used? Not saying it was bad. And the Pentagon finds out and is offended that Anthropic even asked, and that was essentially the trigger behind this. So that combined with the fact
that there was no actual current thing under dispute makes me think that this is at least as
much about personalities and politics as it is about substantive disagreements. So how do you get
from there, then, to this dispute over the language around surveillance? I mean, it was really one word, right? The Department of War wanted Anthropic to agree to language in the contract that said they wouldn't use the technology for mass surveillance consistent with some laws that are already on the books, and Anthropic wanted that to be pursuant to some laws on the books. Some people say that's a very, very big difference, others say it's not a big difference at all. But how do you get from sort of point A to point B, where Anthropic says, how's our technology being used, to all of a sudden a litigation of, like, a single word in a contract that's not even related to the Maduro thing?
Totally, not related at all.
I think it reflects the fact that the Pentagon updated its artificial intelligence policy about a month or so ago. And one of the things that it did was say that all future contracts it signed with any AI vendors, so not even necessarily just a frontier AI lab, would have to follow a, quote, all lawful uses provision, meaning that they were comfortable with their technology being used for, like, wait for it, all lawful uses. Now, meanwhile, last summer, Anthropic and the Pentagon signed a deal that the Department of War was happy to sign, that contained these provisions that, you know, made Anthropic comfortable surrounding the use of its technology. And so then the Pentagon updates its policy and starts, you know, talking essentially about renegotiating this contract, more or less. And then this, you know, Maduro trigger essentially happens. And what you end up with,
I think, is fundamentally a breakdown in trust between Anthropic and the Pentagon, where the Pentagon
decided that it didn't trust Anthropic to be there for important national security use cases,
like side note, we can talk about Iran in a couple of minutes. And Anthropic didn't trust that the
Pentagon would use its technology responsibly. And the mass surveillance debate in some ways is a good
illustration of this. The Pentagon's been very clear that it follows the law and that mass surveillance, not surprisingly, violates the Fourth Amendment. That's not a thing that the Pentagon thinks anybody should be worried about the Pentagon doing. How much you trust the Pentagon in general might shape your views about that. And so they think that Anthropic's provision on that point is unnecessary because it's already covered, essentially, as a lesser included in the obligations that the Pentagon already has.
Anthropic wants these assurances because they're worried about the way that advances in artificial intelligence could lead to things like de-anonymization of anonymized data and create real mass surveillance issues, including for American citizens. And so you have a conflict there. And the crux of that conflict, in some ways, is that the Pentagon is thinking about artificial intelligence vendors and services the same way they think about buying weapons.
And when, say, Lockheed sells an F-35 aircraft or a missile to the Pentagon, Lockheed doesn't get to tell the Pentagon, oh, you can only use it against this country but not that country. And so from the Pentagon's perspective, what Anthropic is asking for is, like, unprecedented. Like, how could they even?
From Anthropic's perspective, AI is a service. It's a constantly updating technology that they need to be involved in. It's not just like selling a missile to the Pentagon. And so that's a bit of, I think, what's going on behind the scenes.
So I just want to clarify here.
And this is important.
When we're talking about this dispute, we are not talking about Anthropic being used, let's say, in strikes, like to pinpoint autonomous strikes on Iran. And we're not talking about the Department of War wanting, from now on, to start to create a surveillance database, right? This is simply language that was surfaced after the Maduro thing. And it's almost a dispute that seems to have, I don't want to say come from nowhere, but it's not like critical war-fighting capabilities are being discussed now, nor are these programs in the works.
I think there are a couple of different ways to think about this.
I'm not sure that the dispute necessarily came from nowhere.
Anthropic's been very public in its criticism of some other Trump administration activities unrelated to defense, such as easing up on AI export controls with regard to China. And so one wonders, although, like, who knows, whether in some ways there were maybe some bad feelings between Anthropic and the White House that could have played a role here. But shifting back to the defense side of the house.
Right.
Like, I think there are reasons why people may want to worry, from my personal perspective, about artificial intelligence and the way advances in AI could enable mass surveillance. I'm not sure the Pentagon is the right locus for that concern. Fundamentally, I might worry about other departments and agencies first in that context.
And the interesting thing about Anthropic's other objection, surrounding autonomous weapon systems, is the statement that Anthropic's leadership made on Thursday evening suggesting they actually don't have a problem with autonomous weapon systems. They just think their tech isn't ready for it yet.
And let me tell you, as the person that drafted the Pentagon's policy on autonomous weapon systems,
Anthropic is not wrong there, in that if you were going to train an autonomous weapon system, the kind of thing you would want that weapon system to do is generally not the thing that people fear the most, which is, can this algorithm tell whether an individual is a legal combatant on the battlefield? Like, that'd be super hard. We can talk about that more if you want. What you're generally going to be doing is training an algorithm to do something very specific, say target Russian tanks or Chinese fighters, on very specific and bespoke data. And often the kind of algorithms that you're going to be most likely to use in that context are much more deterministic than, say, Claude, trained on the slop of the internet. And so Anthropic's not wrong that their tech isn't ready for prime time for autonomous weapon systems. And they even offered to help the Pentagon get their tech ready for that kind of use case in the future, which makes this all the more puzzling, like, how this escalated.
Okay. And by the way, you're
bringing up an interesting perspective here. And this is one of the reasons why I was so thrilled
to have you on the show is because you have actual knowledge of how this technology is being used,
which by the way, up until this point, at least for me, has been sort of this, you know, big cloud
because we don't fully know exactly what's going on inside the Pentagon. And, you know, there's
been talk about how, you know, despite this dispute, the Pentagon still used Anthropic's tools in the Iran strike. And, well, what does that mean? Some people have implied that Claude is out there targeting, you know, combatants on the Iranian side. Or is it just, like, are they querying some databases and then going to triple-check after Claude makes, you know, some assumption there? So maybe that could be significant. So I'd love to turn it to you and just get your perspective: how are Anthropic's tools being used inside the Department of War?
Great question.
Anthropic's tools are being used in a bunch of different ways inside the Department of War. And what we're focused on most now, in some ways, are the uses in the context of the Iran operation, because that, or something like that, is probably most illustrative for thinking this through. And on the classified side, a tool like Anthropic's is going to be, as I mentioned before, plugged into another tool called Maven Smart System. Imagine essentially a dashboard designed to help a combatant commander, the person in charge of all U.S. military forces in the Middle East, or all U.S. military forces in the Indo-Pacific, understand what's going on in the region and understand all the different kinds of things happening, processing unclassified data feeds and classified data feeds, putting all that information together, trying to help that commander make good decisions with regard to American forces.
And Claude is one of many different inputs, essentially, into that system.
And I have no doubt, and there's been reporting suggesting, that there are a couple of different ways that something like Claude could be used in this context. One is just querying public databases, querying public information: what are the most important news services in Iran? What is the chatter like in Iranian media right now? All of those kinds of things. Claude could also be doing things like helping with simulation, helping more rapidly generate simulations of what might happen in the context of an attack.
A thing that Claude is definitively not doing, at least as far as I know, like, I would be genuinely shocked, is autonomous targeting on the battlefield today. I would be astounded if that was a Claude-specific task. Again, for reasons that have to do with
technological readiness as much as anything else. And here, I think, is important context. There's often a lot of concern that the Pentagon is going to take new tools like AI and use them inappropriately, be sort of overly aggressive with their implementation. And, like, don't get me wrong,
accidents will happen when you integrate new technologies.
That happens all the time.
That's happened for sort of like hundreds of years.
But nobody wants America's military systems to work effectively more than the warfighter.
Because systems that aren't reliable don't work and systems that don't work, they get you killed.
So nobody wants our tools essentially to be effective more than the warfighters.
And so the U.S. military has actually been very conservative in some ways when it comes to the
integration of AI in general, let alone a tool like Claude. And so I have no doubt that any
information that's coming out of Claude in this context is going through layers of review
by humans, you know, prior to that influencing anything happening close to the battlefield.
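To make that data flow concrete, here's a minimal Python sketch of the kind of pipeline Horowitz describes: an LLM summary as one input among many into a Maven-style dashboard, with a human-review gate before anything surfaces to a commander. All names and structures here (Assessment, human_review, dashboard) are illustrative assumptions, not the actual Maven Smart System interface.

```python
# Illustrative sketch only; the real Maven Smart System interface is not public.
from dataclasses import dataclass

@dataclass
class Assessment:
    source: str             # e.g., "computer_vision", "llm_summary", "sigint"
    content: str
    reviewed: bool = False  # set only after an analyst signs off

def llm_summarize(open_source_feed: list[str]) -> Assessment:
    # Stand-in for a Claude-like model summarizing open-source chatter.
    return Assessment(source="llm_summary", content="; ".join(open_source_feed))

def human_review(assessment: Assessment) -> Assessment:
    # Stand-in for the layers of analyst review described above.
    assessment.reviewed = True
    return assessment

def dashboard(inputs: list[Assessment]) -> list[Assessment]:
    # Only human-reviewed assessments surface to the commander's view.
    return [a for a in inputs if a.reviewed]

feeds = ["Iranian media chatter rising", "port activity unchanged"]
for item in dashboard([human_review(llm_summarize(feeds))]):
    print(f"{item.source}: {item.content}")
```

The point of the sketch is the gate: in this framing, the LLM output is just another feed, and nothing unreviewed reaches the decision view.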
How much of a leg up do you think using Claude would give a military?
I mean, this is sort of going to the importance of it in the battle. Like, sort of summarizing media clips from Iran seems like something that technology has been able to do for a long time. I mean, maybe, and I'm curious to hear your perspective. Here's one example. It's been reported that the agencies had, you know, traffic cameras throughout Tehran hacked and were able to see movements. But is that something that you would use, like, a large language model for, or just, you know, a sort of more traditional computer vision system?
Right.
I guess, like, you could, but you could do it with computer vision.
Right.
As you said.
And the military is often pretty ruthless about using the best tool for the job.
And in this case, you have tools that have been proven out over years, especially computer vision tools, less sophisticated in some ways as AI tools, that are able to do a bunch of these tasks. And so, might you throw Claude at that in some ways? Maybe, but you wouldn't throw Claude at that instead of using computer vision. You might throw Claude at that to see how those things compare to each other, perhaps, and what the assessment looks like. But, like, honestly, this is all speculation in some ways. And one thing I think
it's important for people to keep in mind is that because this is filtered through a platform like
Maven Smart System and all of these tools, whether Maven Smart System or anything else, they're always, on the back end, more user-intensive than it looks in the movies and television for the military. They're always a little clunkier. They're always a little bit more, you know, user-intensive. So it's not like humans are being cut out of this process. And note that the use of Claude that we're talking about in this context is what we would
call in military parlance more operational, more looking at what's happening on the battlefield. It's a decision aid, essentially, for a commander on the battlefield, which is neither the mass surveillance objection that Anthropic had, nor anything involving an autonomous weapon system.
Right.
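As a rough illustration of that "best tool for the job" point, here's a hedged Python sketch: a proven computer-vision detector does the actual frame analysis, and an LLM is used, at most, to narrate or cross-check the results. Both functions are hypothetical stubs, not real military or vendor APIs.

```python
# Hypothetical stubs; no real detection model or LLM is called here.

def cv_detect_vehicles(frame: bytes) -> list[dict]:
    # Stand-in for a years-proven object detector running on a camera frame.
    return [{"type": "truck", "confidence": 0.97}]

def llm_assess(findings: list[dict]) -> str:
    # Stand-in for an LLM offering a second opinion on the CV output,
    # never acting as the primary detector in this sketch.
    return f"CV reports {len(findings)} vehicle(s); no anomalies flagged."

frame = b"\x00" * 64                  # placeholder for a camera frame
findings = cv_detect_vehicles(frame)  # deterministic, proven pipeline first
print(llm_assess(findings))           # LLM as commentary, not control
```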
Yeah, just knowing what I know about these LLMs, to me, the guess was always, I mean,
maybe it was an educated guess that this was tangential.
Now, maybe useful, but largely tangential versus core to what the military is doing today.
Seems like you mostly agree with that.
Yeah, yeah, 100%.
Like, 100%.
I mean, it wouldn't even surprise me if Claude's being used in a way that's a little more experimental. Like, one of the other things behind the scenes here is that, because this conflict is with Iran, it's U.S. Central Command that's running the show for the United States military. And U.S. Central Command, of the various U.S. combatant commands around the world, has been arguably the most forward-leaning when it comes to experimenting and prototyping and innovation. They've been the most excited, in some ways, to say, let's see what we can do with emerging capabilities. Like, I worked with them a lot with my old hat on in the Pentagon, and I have no doubt that they are taking lots of things out for a test drive, so to speak, including but not limited to Claude, even while they're keeping it on the straight and narrow and using the more proven capabilities to, you know, make the big decisions.
Right. And I think, Dario, I mean, you referenced it. Dario said we don't believe that today's frontier AI models are reliable enough to be used in fully autonomous weapons.
That seems very reasonable to me.
We were talking about it on the show, like whether you let the LLM take the shot.
And, you know, for anyone who's into these tools, it's like, Claude Code is an amazing tool. You can build software with it without knowing how to code. But the amount of time you spend debugging is almost certainly longer than the amount of time you spend giving prompts.
So it seems like a reasonable objection from Dario there.
All right, public service announcement. The phrase fully autonomous weapons, if there's anything I wish Anthropic would stop doing, it's actually using the phrase fully autonomous weapons. Here's why. It's not a term of art from the perspective of the Pentagon. And so when Dario says, you know, we don't want to do fully autonomous weapons like this or like that, it frankly can be confusing in some ways for some of the defense community, because the terminology in U.S. policy is autonomous weapon systems, and there's a difference between those. And here's
what it is. The U.S. military has been using autonomous weapon systems for more than 40 years.
I think people really underestimate, in some ways, the degree of autonomy built into modern weapon systems, even in a world before what we would call AI today, like a good old-fashioned AI kind of world. Let me give you two examples. One is
something like a homing munition or a radar guided munition where somebody may believe that there's
a radar over the horizon and they fire a missile at that radar. There's no human supervision of that
missile after it's launched. It just turns on a seeker and it goes and hits the radar. What if that radar is on top of a school? What if that radar is on top of a hospital? You don't know. It's gone. The second example is something called the close-in weapon system, which is a weapon system that
protects ships and some military bases from essentially mass attacks. So if there are, like, 10 missiles coming in, and you couldn't even point and click at all of them if you were an operator, you can flip on essentially an algorithm that automatically detects and shoots at those. The U.S. military has been using that system since, like, 1980, as have, you know, dozens of militaries around the world.
And so we need to be careful, then, when we talk about autonomous weapon systems, and be clear about what is the thing that we are worried about and what is the thing that we think the technology is ready for or not ready for. And as I said before, I think Anthropic's absolutely right that their tech isn't ready for prime time and incorporation at the edge in an autonomous weapon system. Also, if you think about the compute at the edge, how would you even fit that into a missile? Like, I don't know. But there are so many other ways, like, if you want an autonomous weapon system, there are so many ways you would do that that don't involve LLMs, essentially. But public service announcement: the phrase autonomous weapon system is the appropriate term of art. An autonomous weapon system is a weapon system that, after activation, selects and engages targets without further human intervention, like, period, dot. That is the way the Pentagon, at least, defines what an autonomous weapon system is; different people have different definitions.
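To pin down the definitional line being drawn here, this is a minimal sketch, assuming nothing about any real system: after activation, an autonomous weapon system selects and engages targets with no further human input, whereas a human-in-the-loop mode pauses for an operator decision on each engagement.

```python
# Purely illustrative pseudologic for the definitional distinction;
# not modeled on any actual weapon system's software.

def engage(track: str) -> None:
    print(f"engaging {track}")

def autonomous_mode(radar_tracks: list[str]) -> None:
    # After activation: select and engage without further human intervention.
    for track in radar_tracks:
        engage(track)

def human_in_the_loop_mode(radar_tracks: list[str]) -> None:
    # Same sensing, but every engagement waits on an operator's decision.
    for track in radar_tracks:
        if input(f"engage {track}? [y/n] ").strip().lower() == "y":
            engage(track)

# The CIWS-style case: ten inbound tracks, too many to point and click at,
# so the autonomous mode is flipped on.
autonomous_mode([f"inbound_missile_{i}" for i in range(10)])
```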
Can I tell you where I think so much of the confusion is coming from now that you explain this?
All right.
So, I've worked a couple of years in the government, and you talked about the technology.
We both know that the government technology tends to lag behind commercial use cases by a couple years.
Just a little bit.
Right.
Just a little bit.
Okay.
The AI industry has gone through two phases over the past year and a half. There was a chatbot phase of AI, right? And that also includes content synthesis, summarization, these types of things. And now they're moving into an agentic moment, right? I think there is a misconception that the government is already on agentic, right, where the technology takes its own decisions. But really, what I think I'm hearing from you is it's in the chatbot phase. It's still a year, two years behind commercial, and this worry about the technology getting to agentic is sort of misplaced because of where the government is.
I think that's probably broadly right, although, frankly, part of what Anthropic was trying to do in doing classified work with the Pentagon in the first place was fix that: getting in behind the scenes and ensuring that America's warfighters had access to things closer to the cutting edge. But
Another thing to keep in mind here is the way that testing and evaluation standards or what the
military calls T&E standards differ from what you would need to maybe toss a piece of technology
out in the commercial market.
Imagine you were releasing either a last-gen, chatbot kind of system or a this-gen, agentic kind of system into the marketplace as a company. If there are errors and problems and whatever, those are embarrassing, but you fix them on the fly, and frankly, getting there first can get you market share. There are all sorts of economic reasons why a for-profit company might do that. When you release stuff that doesn't work well in the military, people die. And so the incentive structure is very different. And so the testing and evaluation of these systems is thus very different in a military context. The level of reliability and cybersecurity, et cetera, you need to hit for something to be fieldable is very different. So people should, at least in theory, if the system's working properly, be reassured on that front.
Exactly.
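A minimal sketch of that incentive difference, with invented thresholds (real T&E standards are far more involved than a single number): a fielding gate that demands much higher reliability, plus a passed security audit, before anything is deployable in a military context.

```python
# Thresholds below are invented for illustration, not actual DoD T&E standards.

COMMERCIAL_BAR = 0.95  # assumption: ship early, patch errors in the field
MILITARY_BAR = 0.999   # assumption: failures can cost lives, so gate hard

def fieldable(reliability: float, security_audit_passed: bool,
              military: bool) -> bool:
    # Commercial release tolerates lower reliability; military fielding
    # requires the higher bar and a passed security audit.
    if military:
        return reliability >= MILITARY_BAR and security_audit_passed
    return reliability >= COMMERCIAL_BAR

print(fieldable(0.97, security_audit_passed=False, military=False))  # True
print(fieldable(0.97, security_audit_passed=True, military=True))    # False
```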
Okay, I want to talk now about the government's perspective and what this supply chain risk designation might do to Anthropic.
Let's do that right after this.
You want to eat better, but you have zero time and zero energy to make it happen.
Factor doesn't ask you to meal prep or follow recipes.
It just removes the entire problem.
Two minutes, you get real food and you are done.
So remember that time where you wanted to cook healthy but just ran out of time? You're not failing at healthy eating. You're failing at having three extra hours every night.
Factor is already made by chefs, designed by dieticians, and delivered to your door. Inside,
there are lean proteins, colorful vegetables, and healthy fats. It's the stuff that you'd make
at home if you had the time. There's also this new muscle pro collection for strength and
recovery. You always get fresh and never frozen food. It's ready in two minutes and there's no
prep, no cleanup, and no mental load. Head to FactorMeals.com slash bigtech50off and use code bigtech50off to get 50% off your first Factor box, plus free breakfast for one year.
The offer's only valid for new Factor customers with the code and qualifying auto-renewing subscription purchase.
Make healthier eating easy with Factor.
If a driver in your fleet got in an accident tomorrow, could you prove what actually happened?
Without footage, it's much harder.
So your insurance rates spike, and you're stuck paying for it.
That's why so many fleets choose Samsara's AI-powered dash cams:
clear video evidence, real-time alerts, and coaching tools that help prevent accidents before they happen.
Samsara AI helps reduce crash rates by nearly 75%.
For instance, the city and county of Denver saw a 50% reduction in false claims against them
and a 94% reduction in safety events overall.
This is the kind of visibility that every operation manager needs.
Don't wait for the next accident to take action.
Head to samsara.com slash big tech to request a free demo and see how Samsara brings visibility and safety to your operations.
That's samsara.com slash big tech. Samsara. Operate smarter.
And we're back here on Big Technology podcast with Professor Michael Horowitz of the University of Pennsylvania.
Also, the former Deputy Assistant Secretary of Defense for Force Development and Emerging Capabilities.
All right, let's talk a little bit about the government's perspective. Is there validity in the government's perspective of telling Anthropic: you might have these thoughts about how to use your technology, but you don't tell us what to do; we should be trusted to be the ones who determine that, not you?
I think the government has a point in some elements here. And let me tell you what I mean. You know, I hinted at this before. The government's used to buying technology as hardware. The government buys a fighter jet or a submarine or a missile or something, and the companies that build those technologies don't tell the government how to use them. The assumption is that the government will follow the law when it uses those technologies, since, like, otherwise, kind of what are we doing here? And so the government viewed these requests from Anthropic, and their refusal to yield on them, as essentially challenging the Pentagon's authority.
And this is, I think, part of where the culture and personality clash that we were talking about before comes from.
Because the Pentagon's saying, hey, we follow the rules. That is a thing we definitively do. You don't need to worry that we won't follow U.S. law. You don't need to worry that we will, you know, go do crazy things that the technology isn't ready for. We have law and policy and process designed to ensure that that doesn't happen. We don't let other vendors tell us we can use their tech in, you know, scenario X but not scenario Y. So what you're asking for is unreasonable. And I understand, from the government's perspective, why they might say something like that. That's also why, as I suggested before, and to start us off in this part of the conversation, what we're really seeing here in some ways is a breakdown in trust.
Exactly.
And so the question is, what happens next? And in some ways, I do believe that if you're a government and you think you can't trust your technology vendor, you should probably swap them out. But that's not where the government stopped here.
What they did was they deemed Anthropic a supply chain risk.
And that means that the company cannot work with U.S. government agencies, and War Secretary Hegseth went further. He said, effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.
That includes Amazon, by the way, which is a U.S. government contractor and also hosts Anthropic models.
I have this from a source with knowledge of the department's thinking.
The feeling inside the Department of War right now is they want to destroy Anthropic.
What do you think about this reaction?
I have a lot of thoughts about this.
Let me start with the bottom line, which is: crushing one of the most innovative companies in the world into the earth is not good for American innovation or the American economy. And it's like, dear God, let's hope they work it out.
But backing up a little bit: one can think that the Pentagon's view of this is reasonable or unreasonable, but it is what it is. And in a normal market view of this, the Pentagon would do one of two things. Either it would say, we will work with Anthropic on these use cases, but not those that, you know, they don't want to do, and if we want to do those in the future, and reminder, they're not doing them right now, so there was no dispute about a current or planned future use, then we'd find another AI vendor to do that, whether it's xAI or OpenAI or somebody else. Or the government could have said, you know what, it's not worth it for us to do business with Anthropic. Let's cancel the contract. We'll off-ramp them and we'll bring xAI or OpenAI or Meta, whatever, on to address this. That's obviously not what happened. It's not
just that the government has labeled Anthropic as a supply chain risk; it's in some ways even more baffling than that.
And the supply chain risk designation is for companies believed to present a sort of clear danger to U.S. national security. An example of a company labeled as a supply chain risk is Huawei: you know, Chinese companies where the fear is that if a U.S. government agency worked with them, they might insert backdoors or vulnerabilities that could place U.S. national security at risk.
Like, that's not really what we're talking about here.
And so I think a lot of people have wondered whether that designation would hold up in court.
And also, it's not clear that the supply chain designation has actually been delivered to Anthropic yet. It hadn't as of about a day ago, although it's still been threatened. I mean, Anthropic, I'm sure, will be in court as soon as they get the letter and the actual designation.
And it was striking, of course, no pun intended, that less than 24 hours after the supply chain designation, the U.S. government was using Anthropic's technology in the context of Operation Epic Fury against Iran. Like, how could they really be a supply chain risk if you are using them in the context of ongoing military operations?
But the government's gone further.
They've on the one hand said they could label Anthropic as a supply chain risk, or are labeling Anthropic as a supply chain risk. They've also said that they're considering using the Defense Production Act to compel Anthropic to work on use cases with the government that Anthropic might not want to. And the Defense Production Act, or DPA, was designed to ensure that, say, the government was first in line with vehicle manufacturers if there was a war going on and you needed more tanks or something like that. It was not designed for this kind of environment. But the government's thinking about these two different things, both the Defense Production Act designation and the supply chain designation, and they point in opposite directions. One says you can't work with the government, and one says you have to work with the government. It points to some of the confusion here.
Now, you've worked within government agencies.
You've worked within the Department of Defense. This is from Reuters: State Department switches to OpenAI as U.S. agencies start phasing out Anthropic. And this article says leaders not only at the Department of State, but Treasury and Health and Human Services, have directed their employees to abandon Anthropic's language-trained chatbot platform, Claude, on orders from President Trump. They join the U.S. military in dropping use of the platform.
I'd love to get your perspective just about the speed that governments move. And when you think
about governments evaluating certain technologies, because you've been inside one, what sort of damage
do you think this has already done to Anthropic now that we're seeing so many agencies move
off? There are a couple of different pieces here. I would say, and again, I'm not a lawyer, but a lot of people seem to think that the designation won't stand up in court.
Right, but even so.
Oh, absolutely. Absolutely. But it matters insofar as it's not like Anthropic can't work with AWS; it would mean that Anthropic couldn't work with, like, AWS GovCloud. It's not, in theory, a death blow to working with AWS or something like that.
But from the government agency side, what this implies to me, actually, is that LLM integration in U.S. government departments and agencies is still behind the power curve, and behind where, frankly, somebody like me would want it to be.
And it was much announced over the course of the last year that, you know, all the frontier AI labs made their technologies available, either for free or for, like, a penny or a dollar or something like that, to the federal government, trying to ramp up adoption. And so government employees at these agencies, in theory, have had access to multiples of these for a while and are choosing whichever ones they want to use for various tasks. And it sounds to me like, on the unclassified side, then, people are getting instructions: don't use Claude, use something else instead.
This is pretty fast moving, frankly, for the government.
But it was notable in the announcements, both the Trump announcement and the Hegseth announcement, that they laid out this six-month off-ramp period for, like, real national security use cases, in part because they rely on Anthropic's technology right now, because Anthropic is the only vendor behind the curtain in a classified environment.
So I think what we're seeing is that real bifurcation, where for these unclassified use cases, you essentially flip the switch: use ChatGPT instead, or use Grok instead, or something like that. And frankly, if there's a deal in the future, they'll just flip back to using Claude if they want. On the classified side, it's going to be a much harder slog, because of the integration of Claude and the fact that it was the first mover, because Anthropic was the first company willing to do that kind of work with the defense establishment.
Then the question is also what this means for companies thinking about working with the government: that you could potentially be declared a supply chain risk.
This is from Dean Ball, who, I think, worked on some AI policy with the Trump administration.
He goes, even in the narrowest supply chain risk designation,
the government has still said that they will treat you
like a foreign adversary.
Indeed, they will treat you in some ways worse than a foreign adversary, simply for refusing to capitulate to their terms of business, simply for having different ideas, expressing those ideas in speech, and actualizing that speech in decisions about how to deploy and not to deploy one's property. Each one of these is fundamental to our Republic. Each was assaulted by the Department of War last week.
And basically, the worry is that companies will be wary of working with the Department
of War if this is what could happen to you.
I'm less worried about that, but I would love to hear your perspective as
someone who's been on the inside.
I mean, this is a rough look for a Pentagon that has worked really hard, across multiple administrations and in a bipartisan way, to build ties with Silicon Valley across the board. And obviously this administration, the Trump administration, has some deep ties with Silicon Valley in some places, less deep ties in other places.
But certainly the notion that you can sign a contract with the government, they might ask you to change that contract, and if you don't agree to it, they might attempt to destroy you, is very different in terms of the risk for a company getting involved with the Pentagon in the first place.
Because going back to something that we were talking about before, when it comes to the use cases that Anthropic may be concerned about in different kinds of ways, the thing to remember is: if you do business with the Pentagon, the business of the Pentagon is war.
So you shouldn't be surprised then that the Pentagon wants to do all the war things with your technology
because that's like the thing that the Pentagon does.
But the idea that if you have a contract dispute with the Pentagon, they might, you know, attempt to annihilate your entire business, not just cancel the contract, I do think in some cases could lead to questions for companies that might be making a kind of marginal choice about whether they wish to work with the government or not.
That being said, you know, some of the other frontier AI labs, like xAI and OpenAI, are already now willing to work on the classified side. And, you know, Sam Altman is attempting to broker a peace, essentially, and create a deal that perhaps Anthropic could join as well. Now, even if he succeeds at that, will Anthropic then walk through that door? I mean, obviously, like, there's beef between OpenAI and Anthropic, as well as between OpenAI and xAI. But there are other vendors that clearly wish to do these things. It's also true that America's warfighters have said very clearly, through what we see in Operation Epic Fury, that they think Anthropic's delivering a good product and they wish to use it.
Right. I think, and I'm curious to hear your perspective on this, this does do long-term damage to Anthropic, because even if, let's say, the supply chain risk designation never makes it to them or is overruled, public sector companies, contractors, will, just in the back of their minds, think twice before rolling out Anthropic technology in the future.
I don't know. It kind of depends. I could imagine that scenario. If the supply chain designation gets struck down, but all of the contracts are canceled, and after six months the Pentagon's using other kinds of things, and Anthropic never gets back into that business, then one could imagine that occurring. Although, in the context of what we end up seeing in the midterm elections or a future presidential election, the politics could change in a way that also rejiggers this. But it's also possible that this six-month off-ramp period, and maybe this is wishful thinking, frankly, from a national security perspective, could allow for some bargaining potentially to occur.
We've seen that with TikTok; the six months never happened.
Yeah, yeah, exactly. And, you know, the fact
that the supply chain letter wasn't delivered on day one made me wonder like, oh, like,
maybe is there an opportunity for bargaining here? Like, who knows? I mean, another challenge
here is, if there's any organization in the U.S. government that is, like, full send, all offense, all the time, it is Secretary Hegseth's Pentagon.
And so it would be challenging, I think, to figure out what the win-win looks like for both Anthropic and the Pentagon from a public perspective. But there's probably a lot of utility in that, and it wouldn't surprise me at all if there are negotiations, maybe they take a couple weeks to start, or maybe they're happening right now, that lead to some kind of deal eventually.
Okay. Last question for you. You're someone who's thought a lot about autonomous warfare,
and so I don't want to end this episode.
without asking you, how do you think AI is going to change warfare?
Now, I know it's not like just a couple minute answer.
Yeah, how much time you got?
I mean, as long as you have, we have.
But I'm just curious to hear your perspective on where things go from here.
So I think about AI as a general purpose technology. It's not a widget. It's not a weapon. It's a general purpose technology, which means the analogies to me, if we want to imagine the impact of AI on militaries, or on the balance of power, say, more broadly, are other general purpose technologies. So think electricity, the combustion engine, the airplane, computing, those kinds of things. And there are three different buckets that I would put the impact of AI in.
So one is a bucket that is analogous to the commercial world, which is the military's use of
AI for payroll processing, logistics, acquisition paperwork, like Lord knows the military could be
more efficient from that perspective, having spent a couple of years recently in the Pentagon
bureaucracy. And so there are potentially massive opportunities there, just at the bare minimum.
The second bucket is more in that intelligence, surveillance, and reconnaissance kind of category, bleeding into something like the decision support we were talking about before, where you already had things like computer vision algorithms that were helping the military and intelligence agencies process all the data that they could get about the world and separate the signal from the noise. But there's a real opportunity with some of those LLMs, if their reliability can be improved, to make that happen much faster and much more accurately. Because while people worry about errors from AI in this context, and it's often the AI industry, frankly, speculating about potential errors and accidents from AI, humans are definitely error prone too, which we've seen all the time.
Like, think about 1999, for example, in the context of the Kosovo bombing campaign, where the U.S. by accident bombs the Chinese embassy. Like, I don't know, maybe a computer vision algorithm or LLM might have caught that. There's lots of opportunity, essentially, in that second bucket for more effectiveness, and essentially for buying decision makers time.
Because we tend to think in the military context, and this is a behavioral science insight, not a military insight, that the more time people have to make decisions, generally the better the decisions that they're going to make. And so that's another way that AI can be useful.
Then the third is like close to or on the battlefield.
And autonomous weapon systems, frankly, could be hugely important for militaries,
especially if you imagine future conflicts with great power adversaries, say if there's like a U.S. China conflict or something.
One thing people worry about in the context of that kind of conflict is, say, losing
access to satellites, losing access to space. And in what the military would call a degraded
or denied communication environment, something like an autonomous weapon system will be essential
for lots of different kinds of weapons to be able to operate. And algorithmic operational planning to help commanders may then be part of the way that a military like the United States can still compete and win in the worst-case kind of scenario. So there's a range of different, in some ways, uses of artificial intelligence.
And what I would leave you with is, at the macro level, I think we're talking about enormous consequences for militaries. This is why this is one dimension of that macro U.S.-China AI competition. It's not the only dimension, certainly. But when we get into it, I would encourage people to think about AI in the military in the context of specific use cases, rather than as a monolithic technology, because the kinds of AI you would use and what you would use them for will vary a bunch depending on the use case. So, like, autonomous robot wars: not exactly around the corner.
I mean, I'm ready for our robot overlords. Like, I have been for years. But not in the short term. Okay. All right, Michael, thank you so much for coming on. This was so illuminating and definitely gave me a deeper understanding of what's going on than any conversation I've had previously. So thank you so much for coming on the show.
Thanks for having me. I'm happy to chat anytime. Awesome. All right. We'll take you up on it.
All right, everybody, thank you for listening and watching. We'll be back on Friday breaking down the
week's news. Until then, we'll see you next time on Big Technology podcast.
Michael Lewis here. My best-selling book The Big Short tells the story of the build-up and burst of the U.S. housing market back in 2008. A decade ago, The Big Short was made into an Academy Award-winning movie. And now I'm bringing it to you for the first time as an audiobook narrated by yours truly. The Big Short's story, what it means to bet against the market and who really pays for an unchecked financial system, is as relevant today as it's ever been. Get The Big Short now at pushkin.fm slash audiobooks or wherever audiobooks are sold.
