Big Technology Podcast - The Pentagon's AI Plan + Behind the Anthropic Fight — With Under Secretary of War Emil Michael
Episode Date: April 15, 2026. Emil Michael is the Under Secretary of War for Research and Engineering at the Pentagon. Michael joins Big Technology to discuss how AI is transforming the Department of War, from targeting systems to drone warfare to cyber defense. Tune in to hear his account of why the Pentagon designated Anthropic a supply chain risk, what actually happened in the contract negotiations, and whether the decision was wise. We also cover how the military's Maven Smart System works in practice, what the U.S. learned from drone warfare in Ukraine and Iran, and whether the Pentagon Pizza Index is credible. Hit play for one of the most candid conversations you'll hear about AI and national security. --- Questions? Feedback? Write bigtechnologypodcast@gmail.com Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. Want a discount for Big Technology on Substack + Discord? Here’s 25% off for the first year: https://www.bigtechnology.com/subscribe?coupon=0843016b Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
I worry about other countries using AI to take humans out of the decision-making process.
They don't trust their generals.
If you were so close to being willing to work with them, then how could they end up being a supply chain risk?
It's just we don't want them in our supply chain. We don't want to use them.
Yeah.
President decided that he just wanted the government to use them.
If I went back to my office right now, it's like, how would I order a pizza from outside to be delivered in?
I'd have no idea.
So you're not a believer in the Pentagon Pizza Index?
I'm not a believer in the Pentagon Pizza Index.
We're here at the Pentagon because the AI story that we talk about on this show has escalated quickly, very quickly into a core national security issue.
And you saw that, of course, when the Pentagon banned Anthropic earlier this year.
So let's talk about it with Undersecretary of War Emil Michael and speak with him about how AI might change the future of warfare and how it might already be doing so.
Mr. Undersecretary, welcome to the show.
Thanks for having me.
So AI's capabilities are increasing exceptionally fast, and you're the man tasked with implementing
them at the Pentagon.
So I want to know from you, how is AI going to change war?
How do you hope it will change war?
I think one of the analogies I like to draw is, having been at Uber, you look at an autonomous vehicle: people were scared of the change from taxis to Uber, and then they were scared of the change from Uber to autonomous vehicles. But in reality, if you look at FSD from Tesla,
or even Waymo, the safety statistics are amazing.
Self-driving.
Yeah.
And it's like people are afraid of the change, but the change is better than what we had.
The same thing with Uber, people were afraid of the change from taxis, but it made service
more reliable.
There was less drinking and driving, more availability, more reliability.
So if you would apply that to the war context, you could do much more, be more precise,
be more specific about what you're going after, what you're defending.
How you... you know, the precision is really what's interesting to me, because if you can use AI
to detect and discriminate and discern a decoy from a non-decoy, you could be more precise.
And the example I always give is like a drone storm is coming at a military base.
You're trying to determine what, are they armed, are they not armed?
What are these things?
How do I deal with them?
Well, with some of the visualization, these models can help you do a better job of taking them down,
or of not taking down one that's not a threat, where one human can't really absorb multiple hundreds of inputs at the same time
and make a reaction that's as precise.
Yeah, I want to make this concrete for folks.
And recently the public has been lucky because, you know, in a world where sometimes we don't get the most transparency into how this technology works,
we did get a demo. And this came from Cameron Stanley, the Department of War's Chief Digital and AI Officer,
and he showed what a program called Maven Smart System, which is the Pentagon's core tech platform,
looks like. And I'm pretty sure it was called Target Workbench. This is where they select targets,
and, it seems like they use the word "action" for them,
my understanding is they end up sending the attacks to these targets through the system.
So the way he described it is it's this single unified visualization that allows you to look at live images and then be able to select targets.
Well, that and then imagine the context around that.
Where are my assets?
Where are my planes?
My boats?
What might happen if you took that action?
What might be the reaction?
Subsuming all that information, but still having a human make the decision at the end, means that you're increasing the human context window, is one way to think about it, right?
When you talk about context windows and AI,
well, think about a human that's trying to absorb all this information
to make the best decision they can.
If you could synthesize that information so they can make that decision,
and you're using more sources by definition almost,
the data and choices are going to be better choices.
Yeah, and he showed it in action.
You're seeing this digital imagery overlaid on this map,
and then he says, when you find something that you want to target,
you'll see the information. He says, it's very interesting: left click, right click, left click, and then it ends up in a targeting
workflow. What's happening in those clicks?
I mean, you know, what's happening in those clicks, without knowing exactly... it could be everything from, if it's an air-oriented thing, there's an F-35, it could be what's the weather,
what's the drag, what do I have on board the airplane, what am I going after, where are the potential collateral effects.
I mean, it's, again, it's less giving you one example of how that might work and more just
imagining how much input you could have into that decision when you're, when you have a
computer basically able to gather that information, help you synthesize so that you can make
the right choice.
Yeah, it's interesting.
He shows that there are toggles that the military can select, whether it's an optimization
for how much fuel you want to burn, what munitions you want to use, the distance that you need to travel to hit the target.
Yeah.
And then you can optimize.
And weather, you know, where are other assets that the adversary might have, and where
and how might they react.
Just the amount of information that you could absorb is almost infinite.
So the idea of taking one person and giving them the power of 10 people makes them better
at what they do by, potentially, an order of magnitude.
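To make the "toggles" idea concrete, here's a minimal sketch of weighted multi-criteria scoring, the generic technique his description suggests. The strike options, attribute scores, and weights are all invented for illustration; none of this reflects the actual Maven Smart System internals:

```python
# Hypothetical weighted scoring across strike options.
# Every name and number here is illustrative, not from any real system.
options = [
    {"name": "option_a", "fuel": 0.7, "munition_fit": 0.9, "distance": 0.6},
    {"name": "option_b", "fuel": 0.9, "munition_fit": 0.6, "distance": 0.8},
]

# "Toggles": how much the planner weights each factor (sums to 1.0).
weights = {"fuel": 0.2, "munition_fit": 0.5, "distance": 0.3}

def score(option):
    # Weighted sum of the factors the planner chose to optimize for.
    return sum(weights[k] * option[k] for k in weights)

best = max(options, key=score)
print(best["name"], round(score(best), 2))
```

Flipping a toggle just means changing a weight, which can reorder which option the planner sees ranked first.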
Yeah, and then once that's in the workflow, the last step is whoever's looking at it, assuming they have the permissions, can action on that target, which means send the assets.
Which they're doing anyway. So you have people whose job it is to do this. Right, already, without any computers, it could be with paper and pen, it could be with whiteboards, it could be with PowerPoints, and now you're accelerating that and giving this person the power of more tools so that when they do the left click, right click, left click...
Some of those clicks, yeah.
They're way more informed.
And then you're going to lead to better outcomes.
Right. And it is interesting to see what's happened with this digitalization.
Whereas before... this is from Pirate Wires.
They say by the start of the conflict with Iran this year,
targeting processes were connected with PowerPoint, email, and Excel files.
I'm paraphrasing: target lists were relayed in spreadsheets, sequenced maneuvers set in Gantt charts and PowerPoint.
I mean, that's probably the case historically,
because think about when the AI models started to become generally available.
First to consumers, right, the ChatGPT moment in '22.
And then you say, when is it available to enterprises?
And then, when is it available to government, on the networks that government uses for war fighting?
And you're talking about a fairly recent phenomenon
where these tools are even available.
And then we went through protocols, safety, testing,
the modeling and simulation for how you would use this in a conflict.
There's a lot that leads up to actually using it
in a way that we feel responsible for.
What's interesting is I don't see an LLM in there.
Are large language models,
today's generative AI layers, baked into that system?
Yeah, I mean, I think the genesis of what Palantir does is
there's an orchestration layer on top of data streams, data that we put in it and
say, here's the data we would normally use for any battlefield operation, plus an AI to help
you synthesize it.
So all those things are combined, and they provide the visualization.
But there's not like a chatbot on the side window, which is like lay out a list of
targets that I want to hit.
My objective is to win this war.
What are my targets?
It's not like, it's not a SkyNet thing.
Like, no.
It is a tool like, like any other tool that you might have on your computer or in your war room or with your team, except it's on your computer visualized.
But you still have checks and balances.
You still have to get all the authorities you need to do anything.
It just surfaces the choices in a way that's more consumable, if that makes sense.
Right.
And it's good to have this discussion because I think as this is a fast moving technology,
it's good to be able to talk about it so everybody understands how this works.
And I think that this is, again, going through some of the what's actually happening
versus misconceptions.
There's been some talk, and we're going to get into the Anthropic situation
in a bit, but just to talk specifically about what an LLM can do in this process.
There's been talk that, like, the LLM was involved in the kill chain, but that is not
exactly what the LLM has been doing.
So people have... I think, like, let's talk about the extremes, and I'll talk about this in
the way we're deploying AI in the department.
There's the enterprise, corporate level. Like, tons of PowerPoints are generated in this building,
memos you couldn't imagine, like nothing you've seen in the corporate world,
and that could all be made more efficient.
And that's sort of the mundane work
that people would prefer to do less of
so they get some more interesting work.
Then there's the intelligence layer,
which is imagine all the intelligence we gather
from satellite imagery all over the world.
How do you synthesize that?
So right now you have to have a human analyst look at everything and make a judgment.
Imagine you have the historical data
of all satellite imagery.
Then it can look at it and say, this is an anomaly.
And it can learn what it was, so it could tell you what the anomaly might be,
which is a totally different paradigm for intel analysis, if you will.
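The anomaly-detection paradigm he's describing can be sketched in a few lines. This is a hypothetical illustration, not the department's actual pipeline: the site names, activity scores, and z-score threshold are all invented for the example:

```python
import statistics

def flag_anomalies(history, current, threshold=3.0):
    """Flag sites whose latest activity score deviates sharply
    from their own historical baseline (simple z-score test)."""
    flagged = []
    for site, scores in history.items():
        mean = statistics.mean(scores)
        stdev = statistics.stdev(scores)
        if stdev == 0:
            continue  # no historical variation on record; skip
        z = (current[site] - mean) / stdev
        if abs(z) > threshold:
            flagged.append(site)
    return flagged

# Invented activity scores derived from imagery over six past passes.
history = {"site_a": [10, 12, 11, 9, 10, 11], "site_b": [5, 6, 5, 5, 6, 5]}
current = {"site_a": 11, "site_b": 40}  # site_b spikes on the latest pass
print(flag_anomalies(history, current))  # only site_b is flagged
```

The point of the paradigm shift is in the loop: the historical baseline does the screening, so the human analyst only looks at what gets flagged instead of looking at everything.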
And then third is for war fighting,
where it could take all the paperwork and modeling and simulation,
all those things, not only be able to have you react faster,
but react in a more precise way.
And those are kind of some more tangible uses of AI.
And that's why I think if people understood that better,
particularly in Silicon Valley, they're like, oh, that makes sense.
Like any big company or any big organization would do: efficiency,
how do you be strategic about what you're doing and allow more analysis,
and then how do you use it to execute on whatever operation you have in front of you.
Yeah, and this is, I mean, a big reason why we're here is I wanted to speak with you
because I read so many stories and they didn't comport with what I was hearing from people
close to what was happening.
And I thought, let's clear the air.
That's the way.
That's what I say here in the department.
That's right.
So just to confirm, the LLMs, what they're doing is they're summarizing different reports.
Synthesizing, interpreting, you know, taking in different forms of data and giving you alternatives.
Okay.
And most of these are very mundane, because, again, you have to imagine that every single thing the military does has to be audited, has to have the right command and control structure, like who's authorized this, and that it's been checked through the legal system, you know, has to comply with all our internal memos about ethics and sort of the laws that we follow in conflict. And that doesn't change. It's just that the tools to do that make it better and easier, if that makes sense.
Now, there's an argument among those who watch this tech in action that sometimes a little friction is better, right? Like, that was the one thing that made me feel somewhat uneasy when I looked at this Maven Smart System demo. It's like, you know, maybe we want the Excel spreadsheets and the Word docs and the PowerPoints when it comes to something as serious as making a decision to attack a target. Like, maybe you don't want to make it that easy, because the easier you make it, the easier it is just to hit action and send it away.
The friction's there regardless. That, again, is a key point. You have the same rules of engagement, the same approval system. What you now have is better aggregation and synthesis of the data that you would already use
to make that decision. So it's partially about speed, but it's partially, it's more about more
data points, right? So if you think about it as we're taking as many data points as we can
to make a better decision, yes, it's going to be faster than if you were going to go hunt and peck for
all those data points. But, you know, there's no military in the world that doesn't
believe in speed. So that's sort of, you know, speed wins the game. Look what happened in Venezuela.
The speed at which that operation was executed meant that we didn't have any
casualties on our side. That's amazing. If you had to spend way more time, if you weren't able to
synthesize information as well as one could, maybe you had to be there for 48 hours instead
of three hours, right? So you think about that: speed has to be one of our prerogatives,
but better information is the goal so that the decisions are more precise and more consistent
with the operational objective we've got. Is there a limit to what this can do for you? I mean,
I'm thinking in the context of the war with Iran. Obviously, there have been many airstrikes,
lots of them quite precise.
The entire echelon of Iranian leadership taken out, but the IRGC is still in control.
There's a new Ayatollah with the same last name.
So isn't there a limit?
Yeah, there's a limit.
I mean, no one, I don't believe that there is some all-seeing, all-knowing answer to human conflict,
which has been happening since humans existed.
Right. I think that ultimately what you want is clear objectives. You need the manpower and machinery to do it. And you want to do it with the least cost, with the least amount of damage and the quickest time. Right. That's the goal. And I don't think, you know, AI or really any technology is sort of the, you know, becomes the answer. It's just one of the tools.
Yeah. And that's sort of one of the fundamental questions.
here is, does AI just become something that is a speed up, is a friction remover, or can it
fundamentally change more? I mean, you know, I don't think, you know, I don't worry about that
from our side because I believe the way the United States has structured our command and control
is you have a commander-in-chief in the Constitution. He appoints a secretary of war who's confirmed
by the Senate, and you have the generals and all their ranks.
So all the procedures to make sure that decisions we're making are the result of a democratically elected leader and a Congress that finances these things.
I worry about other countries who don't have that using AI to take humans out of the decision-making process.
They don't trust their generals because of graft, because they don't have the expertise.
And they start to use machines in place of humans, as opposed to using machines to augment humans. So that's more of a worry for me. And I think that one of the things
I've tried to explain to some of these companies is think about the alternative. What would an
adversary want to do with AI that we wouldn't because it's not consistent with our values?
And we have a chain of command, a constitutional government. If another government doesn't and
wants to use AI to eliminate risk, human risk, we're looking to augment human capability.
It's a totally different way of thinking.
Which governments are you referencing?
I mean, I think if you think about the biggest military buildup in world history in China,
and you think of what you've seen the purge of the generals and sort of the military hierarchy there,
you start to wonder, well, how do you replace all these people?
You know, what is the command and control?
What would your AI strategy be if you're running that country, relative to ours?
It's just a different mindset.
And so the uses that we've talked about right now are largely, when we talk about LLMs in this world, largely chatbot uses.
Or I put them in the chatbot bucket, right?
You have information, you synthesize the information, you get something, you know, that saves you time to make a decision.
But now the AI industry is moving towards agents, right?
Which is like the word connotes letting the AI take some action for you.
Do you have a plan for agents here?
Is that where this goes?
I think that not for things that require human judgment.
No, I mean, again, you have to have an endpoint where it ends with human oversight and human discretion on the most consequential decisions, right?
But you could imagine scenarios like I described, with a drone storm coming in at a military base at night, and how do you deal with that.
But again, it's not an agent use case per se.
That's like a visual discrimination or discernment use case.
And maybe you have a directed energy laser that could take them down.
And it's a lot cheaper than the alternative, a lot safer, a lot less collateral damage.
But in terms of agents, we've had some agent pilots at our enterprise level.
Remember we're talking about the enterprise corporate level?
Right.
Just to do the mundane things we have to do every day.
but those things are not sort of where we're at at the war fighting level.
Okay, so if I'm hearing you right, basically the plan here is not to automate warfare.
No.
But the question is here, if you have your adversary who's doing that, let's say you're in a direct conflict.
I mean, maybe it won't be China.
Can you really afford to sit still and do it by the book?
Because that's the worry, right, is that these capabilities are out there.
They're integrated, and it becomes tempting to, like, go into, let's say, Maven Smart System
and say, the LLM is getting me 99% of the way there. Just finish it off.
No, and I'll tell you why. It's...
Not that I'm advocating for it. I understand. I'll tell you why. It's like, number one,
that's the reason that the U.S. has to be AI dominant. So we're never facing a position
where the counterforce AI is better than our AI, and therefore we have to face those
choices at all. Secondarily, people confuse automation with some sort of
automated army, right?
And automation is just what I described to you in the drone example.
What about an automated mine sweeping
or mine detection operation?
There's no human underwater; you want it to find the mines.
There's no human involved at all,
but there's an action you wanna take to do that.
I mean, it's like, well, we don't want mines on our shores, so that sounds like a good idea.
Or there's a missile coming at you
and you wanna take it down from space.
Like Golden Dome, like we talked about,
How do you do that?
Right?
You have to do that in 90 seconds from when it's launched.
So those kinds of things in the most extreme circumstances, you want humans to be able to rely on some automation capabilities.
But in terms of mobilizing a whole army or a whole fleet of jets or a fleet of ships, that's not in anyone's mind.
And we've written a 35-page directive at the DoD that talks about human oversight and how we manage these systems.
And we're constantly updating that and making sure we have the right controls on it.
Yeah, one more thing about LLMs.
One thing that I heard is that they could be useful, potentially, in being another layer of data on top of strikes before they happen.
So, for instance, the school in Minab, Iran, where there were playground markings and hopscotch outside: maybe an LLM in the future, if something like that becomes a target, can basically flag it and say, hey, maybe don't shoot here.
Yeah, this is the point I was trying to make with the driverless cars,
which is, if a driverless car ends up detecting a jaywalker better than a human, isn't that a better option?
So when I say it's there to augment human decision, it could be on the front end or on the back end,
which is, check and make sure this is something that we want to go after, or, here are warning signs.
It works both ways.
But ultimately, humans have to make the decision.
That's the end state; what changes is how that decision is informed.
And LLMs, especially the ones that are trained on visual data, you know, Google has your Nest cams, it has YouTube, it has a lot of human movement.
All these companies have different data sets that they're trained with, to some degree proprietary, that could be very valuable.
So that's why LLM, I think, is going to go away as a term, because they're not large language models only.
They're visual.
They're going to be used for robotics.
They're going to be used for a lot of things.
Yeah, and that's the general side of the whole AI part.
That's right.
Let's talk about drones briefly.
You brought them up a few times.
I feel like it's worth discussing since it's part of your remit.
Very interesting uses of drones in Ukraine right now.
And we saw, I think, unprecedented uses of drones in the Iran war.
Different use cases.
One is an air war.
One is a ground war.
What are the main things that you've learned watching this in action?
And how do you think it changes, again, the way fighting might happen?
Yeah, two different – you're right to point out two different scenarios.
So in Russia, Ukraine, you have a battle over territory.
And so that battle over territory, where the lines are drawn,
means that with the drone warfare, the robots are the front line,
and the humans are back.
And the idea is, well, why risk a human going front
if you could send a machine first and see if you could fight it that way?
There's still a lot of destruction and death there
that's obviously sad and unnecessary,
but I don't know how much more there would be if you had a Civil War-style thing where you have, you know, humans on humans.
In Iran, with the drones, I think the lesson from that is the imbalance of costs, right?
You have a cheap drone going against very expensive targets.
Right. And also millions of dollars to shoot one of those things down.
And to protect your exquisite targets on your side against a very cheap drone, you have to use expensive countermeasures. And so the lesson there is
how do you turn the dial toward, maybe we should have more mass-attritable weapons, like drones or
counter-drones, that are affordable, so that the cost ratios are similar, as opposed to a country
that can afford cheaper stuff being able to threaten, you know, expensive assets on our side.
And for me, that's been a big push in this department, which is how do I bring in what we call mass-attritable weapons: not exquisite, can be delivered quickly, designed for manufacturability, cheap enough that you can afford to lose them, as opposed to the big stuff that we build
that takes 10 years and costs billions of dollars.
Yeah.
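The cost imbalance he's describing is easy to quantify with back-of-the-envelope arithmetic. The $30,000 figure echoes the per-drone price mentioned later in the conversation; the interceptor cost is a round hypothetical, not an official number:

```python
# Back-of-the-envelope cost-exchange arithmetic for drone defense.
# Both dollar figures are illustrative assumptions, not official numbers.
attack_drone_cost = 30_000    # cheap one-way attack drone
interceptor_cost = 2_000_000  # hypothetical exquisite interceptor

# Cost-exchange ratio: defender dollars spent per attacker dollar.
ratio = interceptor_cost / attack_drone_cost
print(f"defender pays ~{ratio:.0f}x per intercept")

# Against a 100-drone swarm, the asymmetry compounds.
swarm = 100
attacker_spend = swarm * attack_drone_cost  # $3,000,000
defender_spend = swarm * interceptor_cost   # $200,000,000
print(attacker_spend, defender_spend)
```

This is the dial he's talking about turning: cheaper, attritable interceptors push that ratio back toward parity.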
So let's tackle both of the ways that the U.S. is working to head off these threats.
Let's start with this, the bigger drones.
shall we say?
Yeah. So there's a program here, LUCAS, right? Low cost, $30,000 a pop, you can send them out. And do they crash into the other drones? I mean, what's the point? Or do they do the same thing that the drones do?
The idea is that the Shahed drones that the Iranians had, we call it a one-way attack drone: long distance, can go fast, but cheap to manufacture. These are sort of that, and they could do a lot of things. They could be sent to take out other drones, or they could be offensive.
And they're designed to be cheap to manufacture.
If you lose a couple, you're okay, right, just from a financial standpoint.
And, you know, they're used the same way in theory.
Are we working with the Ukrainians on this project?
I mean, there were, like, some headlines that the Ukrainians offered to help and we turned them down. What's the story there?
You know, there's two levels of this. There's sort of the grand United States-Ukraine relationship.
But we just launched our drone dominance program,
and I think there were two Ukrainian companies in it.
They're going to be, you know, onshoring manufacturing here and bringing some of their learnings here to help us with our kind of smaller drone scenario.
And so we're sort of agnostic to that, but we want to divest our supply chains from adversaries.
So one of the requirements is that the drones
we use in the drone dominance program
don't have a dependency on adversaries.
Okay, and that touches on the smaller drones,
the ones that are being used in the land war,
the DJI-style drones that...
Yeah, first person view.
Right, exactly.
Yeah.
You know, China has been putting on these displays,
epic displays of the drone art in the sky.
Or swarms.
Drone swarms, right, to call it what they are.
And at first look, it's like, man,
like China's really innovating on fireworks.
But then you realize this is completely
a military simulation.
Could be.
I mean, I think that's the scenario that, you know, I've tried to explain.
And I do think it's something that these AI companies
understand when it's explained to them.
You could say, like, that drone art display that you saw: imagine those were armed drones, and imagine that they were communicating
with each other, and they could therefore form and re-form in ways against your defenses.
How do you defend against those? And depending on where you are, you may have a fully
defendable garrison, let's say, but let's say it's a small military base. Let's say it's over
the border. How do you deal with these things? And that's like something, it's a new challenge
that wasn't present or we weren't, at least we weren't thinking about before the Ukraine-Russia
war.
So what is the answer?
Like, is the U.S. working on the defense side of that and on the offensive side?
Both.
Okay.
All the time, right?
Drone dominance has both elements to it.
We have a counter-UAS, counter-unmanned systems, task force that's looking at everything from lasers,
directed energy, which is one of my critical priority areas, to how do you do electronic warfare on these
things to take them down. They're all run in some way. So there's lots of different
measures and countermeasures. That's what makes this technology, this time in the department
so interesting. Yeah. The danger of warfare is changing. Technology is getting more capable.
The actual ability to access technology is becoming cheaper. The need to have these systems
interoperate has never been greater, because what is the drone storm? It's a set of interoperable
drones that work together, and you could see them in the sky like you're talking about.
And you can imagine what their military utility might be.
So the tech problems are super interesting, right?
They're hard, but they're interesting.
Now, briefly on the cyber warfare side of things, I imagine AI could really impact that side of warfare.
It seems so.
It seems that models that are trained on code can learn vulnerabilities in code, is what these companies are saying.
That presents risk and opportunity.
But yeah, I mean, obviously, you know,
what you've heard in the news yourself,
and what's been released about the cyber capabilities
that are almost here:
those are certainly going to come from every frontier model company.
At some point they're certainly going to be,
you know, attempted to be distilled by the adversaries.
They're going to be, you know,
the next wave of innovation from these companies.
Okay. So it's clear, I think from the beginning of our conversation, that AI is becoming critical in what the Pentagon does.
It's helping synthesize information in some areas. It's helping with targeting. Clearly, you need it for drones, and you also need it if this is going to be a new cybersecurity front.
Yeah.
So I want to talk about how you pick the AI vendors. So we're going to talk about the situation with Anthropic and then a few other topics.
And we come back right after this.
Today's show is brought to you by Narwal.
Look, the common robot vacuum experience can be rough.
Robot vacuums can struggle with stubborn stains and spread dirt while mopping.
They can't navigate wires, they bump into stuff all the time, and they struggle to navigate
corners.
But the Narwal Flow 2 is my favorite new product because it just gets this stuff right.
Its flow wash mopping system uses 16 angled nozzles to spray fresh water continuously.
Its built-in maintenance-free scraper removes dirt in real time, and its wastewater
extraction and storage system prevents residue and odors.
When it finished cleaning my apartment, it felt like we just had an expensive cleaner come in,
and boy, did we need it.
It also has unlimited object recognition, so it doesn't spend all day slamming into your walls
and furniture.
If you want to treat your home to spotless, fresh floors, visit us.narwal.com/Alex.
That's us dot n-a-r-w-a-l dot com slash Alex.
I've interviewed a lot of great tech founders on this show, and one surprisingly universal
challenge comes up again and again, finding the right domain name.
It's something I ran into myself when launching big technology.
The names you want are often taken, and it's tempting to just settle and move on.
But the founders I respect most don't settle on fundamentals, and your name is one of them.
It should immediately signal what you actually build.
That's what I appreciate about dot-tech domain names.
It just makes sense.
It tells the world your customers, your investors, anyone Googling you, that you're building
in technology.
Clean, direct, no qualifiers.
And I'm seeing more serious startups lean into it.
Nothing.tech, Onex.tech, Aurora.tech, CES.tech, Ultra.tech, Alice.tech, Neuron.tech, Blaze.tech, and so many more.
If you're building something tech first, don't settle.
Secure your dot-tech domain from any registrar of your choice and make your positioning
obvious from day one.
Starting something new isn't just hard.
It's terrifying.
So much work goes into this thing that you're not entirely sure will work out,
and it can be hard to make that leap of faith.
When I started this podcast, I wasn't sure if anybody would listen.
Now I know it was the right choice.
It also helps when you have a partner like Shopify on your side to help.
Shopify is the commerce platform behind millions of businesses around the world
and 10% of all e-commerce in the U.S.
From household names like Allbirds and Kodopaxi to brands just getting started.
With hundreds of ready-to-use templates, Shopify helps you build a beautiful online store that matches your brand style.
Get the word out like you have a marketing team behind you.
You can easily create email and social media campaigns wherever your customers are scrolling or strolling.
It's time to turn those what-ifs into reality with Shopify today.
Sign up for your $1 per month trial at Shopify.com slash big tech.
Go to Shopify.com slash big tech.
That's Shopify.com slash big tech.
And we're back here on Big Technology Podcast with Emil Michael, the Under Secretary of War for Research and Engineering.
Emil, appreciate you being here with us.
Let's talk about Anthropic.
So I just want to hear it from your perspective.
Describe the culture of Anthropic versus the administration.
Well, I would say this, which is they were the first to aggressively try to provide service to the government after the Biden administration's executive order about AI.
Because they were, and again, you see this in the marketplace, too.
OpenAI was more focused on the consumer, with ChatGPT and the subscriptions.
Actually, that sort of hadn't started, really, until 18 to 24 months ago.
And then Google was also focused more on the consumer.
They were focused more on enterprise.
When I say enterprise, I mean enterprise writ large, an enterprise like the Department of War or an enterprise like a big company.
And so they were naturally started sooner here.
And I think there's a certain portion of people at all these companies, which all now have a government division, that are all going to start, you know, understanding the vernacular a little bit.
And we can have conversations like you and I are having about what some of these advancing capabilities mean for the world.
But from a culture standpoint, I think, we live in the bureaucracy of what we have to do every day to innovate and to reform.
And I think the image that they might have, and this is not unique to them, of the Department of War or the administration, is that we don't have the safeguards that we do, that we're not paying attention to the risks.
We are, perhaps more so than most Americans would understand, because of the procedures that have built up over decades and decades of having to be careful and smart about what we do.
And so that culture clash, to the extent there is one, is, you know, what I'd call a lack of understanding, a lack of confidence, a lack of trust in us and our ability to do things in a way that's consistent with our values as a country and the laws that are passed.
And that's, I guess, how I would describe the difference.
Okay, and just to recap what's happened between the Pentagon and Anthropic recently: they were in Maven Smart System, like we discussed.
There were all these provisions in the contract that the team here didn't like, so there was a renegotiation.
It almost came together, but there were two things that Anthropic wanted to include in the contract, a provision against mass surveillance and a provision against autonomous warfare, and ultimately there was not an agreement there.
Right. Although I would say the following, which is, where we could: the provisions that were in the contracts that ultimately serve the Department of War said you can't use it for planning kinetic actions, you can't use it to develop weapon systems.
So all the science, engineering, aerodynamics, all of that.
Those were the original stipulations. They agreed to throw that out.
But it took three months of examples, hand-holding examples, to say, well, what about this example?
You can't run a department of 3 million people by exception.
You have to have, especially if you think about AI as an intelligence layer that's applied to many things, from aerodynamics and physics and math to synthesizing information to anomaly detection, whatever.
And we run hospitals, we run schools, we run weapons systems, we run defensive systems to protect against all these kinds of things.
So to go by exception and try to say, well, how about this scenario? How about this scenario?
It became not tenable and took a long time to get there.
And that's where you started to say, like, are they aligned with our mission here?
And then the idea that autonomous weapons were an issue was sort of, I think, more marketing than anything, because we had our own policy before they showed up that talks about that.
And we affirmed that we will have human oversight on all military decisions that are made using their AI.
So what else can you do?
You're like, we affirm human oversight. We have these directives already. We have the laws.
And eventually they agreed that there was no problem there, but they marketed it as an issue that we were disputing at the end, which is odd.
On domestic affairs, we are not a domestic law enforcement agency.
We do not have authorities to do domestic mass surveillance.
So it was sort of like, you have Congress that passes laws, the National Security Act of 1947, the FISA Act, all the civil liberties that are enshrined in law and in the Constitution.
And we said, affirmed, we will follow all those laws and all future laws, within all the authorities we're granted and not granted.
We're not the FBI. We're not Homeland Security.
But again, that wasn't enough.
They wanted us to rewrite the law because they thought Congress was just behind, that it wasn't understanding that new tech allowed new capabilities.
But again, it's not our mission. We don't have the authority to do it.
Okay, but here's the thing, right? And so eventually the contract was ripped up.
They called it off, yeah.
Right. And I think deciding not to work together makes complete sense if you have a value misalignment, but then the Pentagon took it a step further and deemed Anthropic a supply chain risk.
And that one I'm a little bit puzzled by, because if you were so close to being willing to work with them, if they agreed to all lawful uses of the technology by the Pentagon, then how could they end up being a supply chain risk?
Which basically means that the Pentagon won't work with them, any government contractors can't work with them, and the administration took it a step further and said no government agency should work with them.
Well, I'll speak to what the Department of War cares about in our supply chain.
When Lockheed Martin builds a weapon for me, and they're using a technology to help them do some of these science-oriented things, physics, aerodynamics, and so on, and the vendor has expressed an unwillingness for that to be part of the use case, well, then what am I getting in that system that's eventually going to come upstream to our warfighters?
I don't know. What if they decide to change their red lines? What if the model hallucinates because its values are, like, we don't want this to be used in a kinetic way?
Those were the things currently in the contract. So you worry about the downstream implications of that on everything that leads to protecting the warfighter and defending the country.
And so it is a legit worry if their alignment with our mission is not real.
But then you also limit yourself, in a way, to some of the capabilities they might have.
I mean, if you think about Mythos, we talked about cyber warfare. Mythos is their new model. It's in preview.
There's a project called Glass Wing that has a bunch of entities that have come together, and they're trying it.
And one of the things about Glass Wing and about this Mythos model is that it's convincingly good at cybersecurity and cyber attacks.
This is from the AI Security Institute: this week, we conducted cyber evaluations of Claude Mythos preview and found that it is the first model to complete an AISI cyber range end-to-end, meaning a 32-step corporate network attack, from initial reconnaissance to full network takeover.
We estimated it would take human experts 20 hours to complete.
They're saying it's an automated AI cyber weapon?
An autonomous cyber weapon.
Here's the thing. I'm encouraging the use, but I'm saying that, like, you talked about the drones that are meant to hit other drones.
Wouldn't you want this tool at your disposal?
I mean, there's an argument to be made, and I'm curious to hear what you think about it, that you sort of put yourself in a corner when you're not taking these capabilities and using the ones that you want.
The original sin was in the past administration choosing one AI provider and having no options, because it is a gargantuan effort to get these software things onto classified networks.
There's a lot of complexity to do that, because they're secure networks, right? This isn't like AWS cloud for consumers.
So the original problem, the original sin, was not having more than one provider, so that you had more options.
But I also believe, if you talk to every other one of these frontier AI companies, they're going to have similar capabilities.
But they don't yet.
But, you know, if you were to use that... yeah, but they will soon.
Like, if you look at the distillation attacks that our adversaries are using, based on our models, how long do they take to show up in DeepSeek or any of these other things?
That's a couple months.
Yes.
So if you think about those timelines, you're just thinking about the timelines.
And we'll never sacrifice capability for national security, you know, or anything.
So I think we're cognizant of what's happening, and we're working with every model company, and we feel good about our posture there.
The other thing that people say about this, and I'd be curious to get your thoughts on it, is that you can look at the history of companies that have been deemed a supply chain risk to the Pentagon.
It's very rare, if not unprecedented, for a company like Anthropic to be banned that way.
So why do you think it rose to that level, and do you think it merits this fairly unprecedented action?
Well, I mean, on one hand, you can't say that they have this, the cyber nuclear bomb, and yet we shouldn't be worried about how those capabilities enter and remain in our supply chain.
Those two things are inconsistent, right? And I'm not blaming you, I'm just saying that if you believe they're going to cause 40% unemployment, if you believe that these things have a capability such that you put 50,000 geniuses in a data center and they're going to coerce the world, they could create bio and chem weapons.
Of course the Department of War is going to want to understand and constrain those things so that they don't do something unintended on our side.
So these companies are talking about their things in apocalyptic terms, which makes it necessary for us to judge the management teams, judge their actions, look at the terms of service, understand how they fit in our supply chain.
This technology is like nothing we've ever seen, so you can't compare it to, you know, a chip from a foreign chip manufacturer that gets put in the supply chain.
This is a whole different thing, because of just what you said: the power of what they're saying it's going to do, the disruption it might cause in American life.
And if someone developed a nuclear bomb in their garage, you don't think we'd have anything to say about it?
Yeah, well, of course we would, right?
Or a biological weapon or any of these things.
So I think those are things that heighten the awareness that we have of what these models can do and where they're going.
Okay.
I just want to talk about this at one more level, which is a practical level.
And you've mentioned this in interviews before, that Anthropic's models were hosted on Amazon's cloud, their government cloud.
And so they upload the model weights, and then you use it through Amazon.
So let's just take the Lockheed example.
If Lockheed is designing some systems and Claude is baked in there, Anthropic wouldn't have the capability to turn that off if it's hosted somewhere else, maybe upgrade it.
You could swap them out, but to turn it off, they don't really have that capability.
No, I mean, you understand how this technology works better than most.
The upgrade cycles for these things are now compressing to three-ish months.
Right.
So every three months, you have a new set of model weights, a new set of guardrails, a new set of bugs, the way the model behaves, the way it hallucinates or doesn't hallucinate, the way it does refusals, where it refuses to answer certain questions.
And there's an important anecdote which was written about, which is that Anthropic is also serving the Centers for Disease Control.
And so you have some scientists going there and learning about pathogens.
Right.
And the model assumed that was a bad actor. And they refused to undo that refusal.
But that was the off-the-shelf model.
Yeah, sure, the off-the-shelf model, but what's to stop that from making it into the next model? We don't know.
So the point is, to have a reliable partner, you have to have alignment on these issues, which is: we have a national security mission, and we want to use it for all lawful use cases.
In HHS's case, it would be all lawful use cases, and it's lawful for HHS to be doing pathogen research.
Totally.
Right? So if that's... I would hope that that's what they're doing.
We hope that's what they're doing.
So for someone to have made the judgment to turn that off, and they're like, oh, well, is it an old model, this, that.
That's not how it has to work in the future.
If you are truly, you know, an American company that's trying to protect Americans and do good things for Americans, the government has to be able to use this powerful tool to succeed in its mission.
Yeah, but if I'm hamstrung by their choices, that's what gets in the way of the command and control structure.
Right. But haven't you solved this to a degree?
Because if you just have Claude, then I totally see it.
But if you have Claude and Grok and OpenAI in there, then maybe if Claude makes an update you don't like, you let OpenAI run with it, run with the next iteration.
I mean, if we hadn't made the original sin, I think you'd have had them competing for the government business.
Had they been competing for the government business, like in any non-monopolistic scenario, power would be balanced between customer and vendor.
And eventually we'll have that. But we didn't have that, so then they could make those
choices on their own.
So I asked about the culture side of things in the beginning. And there's also this perception, well, I mean, I have a decent read on the government and a decent read on Anthropic. They're definitely different cultures.
And the other read on this is: okay, maybe there were some things that the government was uncomfortable about, but this really just came down to a culture clash.
I think it was Pete Hegseth, when he tweeted about Anthropic, who said, you know, we're not going to let any woke company tell us what to do.
Is it possible that this is just a culture clash versus the bigger thing that it turned into?
No, because, I mean, I would tell the Anthropic guys that came to me, this is independent of politics.
I just care about having the best system for our warfighters.
Why would I spend three months on it if it was a culture clash?
Andrew Ross Sorkin asked me the same thing.
He's like, you know, are you buddies with them? I've never met any of them. I don't know these guys.
I know the culture of Silicon Valley, so I did take a lot of time to try to explain, as a transplant to the government, here's why this matters.
Right.
Here's some scenarios.
And eventually we got to a point where it was just, they wanted control.
And you can't have control of the Department of War's actions and activities so long as they're legal and consistent with our guidelines.
And so those on the outside who look at this, they say: okay, supply chain risk designation, no government agency can work with them.
This is effectively the federal government attempting to destroy Anthropic because of a procurement dispute.
I mean, destroy? Anthropic has tripled in revenue in three months, or tripled in valuation.
They're doing okay.
They're doing okay.
That's silly.
Right.
It's silly because the percentage of revenue that we represent for any of these AI companies is infinitesimal.
It's just we don't want them in our supply chain. We don't want to use them.
Yeah.
The President decided that he just didn't want the government to use them.
There are great alternatives, and we're going to have to fix past mistakes by ensuring that alternatives are available.
And I have high confidence, if not more confidence, that these other models will be the same or better over time.
Okay, last one on this, and thanks for answering all these. It's good to get your perspective.
The judge in the case, or one of the judges in this case, because Anthropic is suing to have that designation removed, Judge Rita Lynn, said the Department of War's records show that it designated Anthropic as a supply chain risk because of its hostile manner through the press, and that punishing Anthropic for bringing public scrutiny to the government's contracting position is classic illegal First Amendment retaliation.
Did it have anything to do with the press, the press strategy?
I mean, I shouldn't comment on a legal case, but I think the notion that a First Amendment claim is going to hold up would be shocking, because that means that the government has no choice to make, right?
If a vendor, any vendor, says, I don't agree with your terms, and we're like, well, that's why we're not going to hire you to do, you know, whatever kind of work we do, translation work at the Department of War, and that becomes a First Amendment claim, then it would be so overreaching that it would be not workable.
So I feel like that was a throwaway.
But I will say that, you know, the thing that makes the Department of War different than most other agencies, and I don't mean this to be dramatic, is that we really do have lives on the line.
And when people talk about government bureaucrats and them not caring, the people here, the career people, they care. They really care.
They care about the warfighter. They care about the country.
It's a truly patriotic place, and it is very nonpartisan in the middle of the Pentagon. We have three million employees.
And so that mission is very sensitive.
So we are sensitive about the relationships we have with these companies, because there's a lot of unpredictability in our business.
Something happens in Iran, and we need companies to move fast.
You have to have some trust with them.
You have to have some shared values.
You have to understand they have economic interests.
And then we have to understand that our needs are going to change based on the threat environment.
And so that kind of matters.
So they can litigate in public all they want. That's fine.
But do we have alignment for real?
When we get in the room and we are facing a conflict, are we aligned?
So I was pretty impervious to that stuff.
There's a website, genai.mil, that's available to the people in the military here.
And interestingly, Google is in there, Gemini's in there.
And Google went through something similar, even somewhat more explosive, where the employees protested, and yet here they are.
They're working with the Pentagon again. They're forgiven to a degree.
Could that happen with Anthropic?
I think so.
I mean, I believe that, you know, if you fast forward from 2018, when the Google Maven thing happened, to 2026, and you talk to people from Google who weren't involved in that, I think they regret it.
And they regret it for probably the same reason: they didn't understand what we do here.
And what we do here in this administration is going to carry forward to other administrations, because we're at a crucible moment for AI.
And that could be an administration of either party.
So whatever decisions we make, for us it's nonpartisan, and it's for the future.
And I think, and I hope, that companies that went through that moment in '18, like Google, kind of, as they get more mature and get more of an understanding of what it means to work with the government, will understand us better and get to a good spot.
Hopefully sooner than eight years.
Yeah, that did take a while.
But I will say Google's been an excellent partner before this genai.mil.
They shifted in a big way.
And I mean, the whole tech industry, when I was there at Uber in 2016, '17, wouldn't touch this stuff.
Huh?
It was just the employees, and sort of a little bit of mob mentality, where employees had a lot of say over what their products and services were doing.
And senior leaders and founders were very sensitive to that.
I think that sensitivity has gotten a little more balanced right now.
You don't want to go work at Palantir? Don't go work at Palantir. There's a ton of other places.
You don't want to work at Uber? Don't work at Uber.
I think the balance is in a better place.
And I think, because of the fact that we're doing more outreach to Silicon Valley, more California companies, both southern and northern, are going to succeed here.
Hopefully that knowledge transfer will happen faster.
Let me bring up one more headline.
There's a story this week that also says that you had some xAI stock.
Do you have any SpaceX? Is that a potential conflict?
No, I sold all my SpaceX.
Okay.
And I recused. So what happens when you take one of these jobs is you show your whole sort of list to the Office of Government Ethics, nonpartisan.
They go through it and they say, we think these things are red lines.
You sell your defense company stocks, which I didn't have much of.
SpaceX was on the list, so you have to sell that.
And then, depending on your role, here's the things you have to sell that might be specific to your role.
And then, based on the kind of connection to that, you could recuse yourself from dealings with the company.
So I just recused myself from dealing with xAI until I could sell.
And I was pretty active about it, because I didn't have the AI portfolio until the fall.
So when I got the AI portfolio, I was like, hey, I'd like to be involved in this.
I'd like to not recuse myself.
They said, well, you have to sell.
Great, give me permission.
Got permission, sold.
Was recused in the meantime.
Okay.
Two more things I want to speak with you about.
I'll be quick as we come to a close.
First of all, I think every time we have a conversation like this, we have to talk about procurement.
And, like, I know half the audience is ready to go to sleep now.
But it's really important, the way that these services are bought, because the Pentagon budget, for instance, has been, I'll say, inflated, because some of the vendors have charged more than an arm and a leg for services.
So talk a little bit about how you're working to reform the procurement process and why that's going to be good for people.
Yeah. So in the '80s, during the height of the Cold War, we had about 50 defense contractors, 5-0, and they consolidated down to five.
So that was one sort of dramatic reduction in the number of competitors for anything.
And then we outsourced a lot of the core capabilities to other countries, so the supply chains got brittle.
And China didn't have a military buildup to this level until 2010.
So you put these two things together, and what was happening is we had a small number of competitors, they were taking less risk, so we were paying them for time, you know, cost-plus.
Now, some of this, and it's important for me to say this, some things are so speculative that no company can economically do them unless you're financing some of their R&D.
So there are things that are 10 years out, 15 years out, 20 years out, where you have to do that.
But because the nature of warfare is changing, and because there's the greatest VC boom in defense tech in the country's history, and because you have founders like Palmer Luckey and all these folks who are willing to go into this business, it's made us much more able to do business deals.
Right.
So for our audience who's bored with procurement...
We're talking about business deals.
It's important.
We can now do deals where if you deliver a weapon and it works on time, you get paid.
And if you don't, you don't.
Imagine that.
Imagine that.
And guess what?
If you do it cheaper, so you make a little bit more profit, I'm okay with that.
Right.
So there's a little bit more risk sharing there.
And I think ultimately, especially for things that are easier and quicker to produce, where they're not taking a huge R&D risk, unlike inventing the next, you know, space shuttle that can land on the moon and be there for three years and build a base, all those very speculative, hard things, I think you'll see us moving a lot more toward business-oriented contracts, which is good for them and good for us.
Definitely.
And better for the taxpayer.
Yeah, most importantly.
I think we pay enough taxes that we should know where it's going and hopefully it's not wasted.
All right.
I don't want to leave without asking you about the Pentagon Pizza Index.
Are you aware that there are people tracking how much pizza is ordered near this building?
We're at the Pentagon, and they've used it to predict military action.
I've seen that on X.
Honestly, I would have no idea where you get a pizza delivered into the Pentagon.
There's a specific Papa John's.
No, I'm not doubting that, but I actually don't know.
If I went back to my office right now, it's like, how would I order a pizza from outside to be delivered in?
I'd have no idea.
So you're not a believer in the Pentagon Pizza Index?
I'm not a believer in the Pentagon Pizza Index.
We shouldn't take it seriously.
Huh?
We shouldn't take it seriously.
I'm not a believer in it because I literally don't know how you get any food delivered from the outside.
This is the Pentagon.
You're telling me the Pentagon can go to war with countries thousands of miles away, but it can't get pizzas in the building.
I'm sure there's a way someone could walk out to the edge of the Pentagon, receive a pizza and bring it in.
This place is the best logistics operation in the world.
There's, there's, look, I don't know.
What if someone's messing with it to mess with the prediction markets?
I wouldn't put it past anybody.
So therefore, it's inherently an unreliable measure in my view because it's easy to corrupt it.
So the pizza around here?
I think there's, is there a pizza place here too, inside the building, that closes at five?
That's why they look at the late-night Papa John's.
I'll leave it at that.
Mr. Undersecretary.
That was a hell of a last question.
I never would have guessed that.
My pleasure.
All right.
Thanks for coming all the way to D.C.
My pleasure.
Thanks for having us in person.
Thanks everybody for listening and watching.
You now know the secret to the Pentagon Pizza Index.
We'll see you next time on Big Technology Podcast.
