Your Undivided Attention - War is a Laboratory for AI with Paul Scharre
Episode Date: May 23, 2024
Right now, militaries around the globe are investing heavily in the use of AI weapons and drones. From Ukraine to Gaza, weapons systems with increasing levels of autonomy are being used to kill people and destroy infrastructure, and the development of fully autonomous weapons shows little sign of slowing down. What does this mean for the future of warfare? What safeguards can we put up around these systems? And is this runaway trend toward autonomous warfare inevitable, or will nations come together and choose a different path? In this episode, Tristan and Daniel sit down with Paul Scharre to try to answer some of these questions. Paul is a former Army Ranger, the author of two books on autonomous weapons, and he helped the Department of Defense write a lot of its policy on the use of AI in weaponry.

RECOMMENDED MEDIA
Four Battlegrounds: Power in the Age of Artificial Intelligence: Paul's book on the future of AI in war, which came out in 2023.
Army of None: Autonomous Weapons and the Future of War: Paul's 2018 book documenting and predicting the rise of autonomous and semi-autonomous weapons as part of modern warfare.
The Perilous Coming Age of AI Warfare: How to Limit the Threat of Autonomous Warfare: Paul's article in Foreign Affairs based on his recent trip to the battlefield in Ukraine.
The night the world almost almost ended: A BBC documentary about Stanislav Petrov's decision not to start nuclear war.
AlphaDogfight Trials Final Event: The full simulated dogfight between an AI and human pilot. The AI pilot swept, 5-0.
'Lavender': The AI machine directing Israel's bombing spree in Gaza: An investigation into the use of AI targeting systems by the IDF.

RECOMMENDED YUA EPISODES
The AI 'Race': China vs. the US with Jeffrey Ding and Karen Hao
Can We Govern AI? with Marietje Schaake
Big Food, Big Tech and Big AI with Michael Moss
The Invisible Cyber-War with Nicole Perlroth

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_
Transcript
Hey everyone, it's Tristan.
And this is Daniel.
On September 26, 1983,
Lieutenant Colonel Stanislav Petrov was in charge of the USSR's nuclear strike detection command center.
And this was at the height of the Cold War,
where the tensions between the Soviet Union and the U.S. were running high.
Just a few weeks earlier, the Soviets had shot down a South Korean passenger jet
that had strayed into its airspace.
And then one day in September, the computers in Petrov's
bunker went on high alert. Five intercontinental ballistic missiles were headed straight towards targets
in the Soviet Union. The official procedure was clear. At the first indication of a U.S. nuclear
strike, the USSR would launch a counterstrike on U.S. cities. The computers were unequivocal.
The missiles were coming. But Petrov waited. He didn't inform his superiors. He knew tensions
were high between the two countries, but it just didn't make sense to him that the U.S.
would strike in that particular way.
So he waited.
The missiles didn't arrive.
And when the incident was investigated,
the Soviets discovered that a rare alignment of sunlight
on high-altitude clouds and the satellites
had triggered a false alarm.
Petrov's hesitance saved the world from nuclear war.
But would an AI-based system have made the same call?
Right now, militaries around the globe
are investing heavily in the use of AI weapons and drones.
From Ukraine to Gaza, weapons systems with increasing levels of autonomous behavior
are being used to kill people and destroy infrastructure.
And the development of fully autonomous weapons is showing little sign of slowing down.
So what does that mean for the future of warfare?
What safeguards can we put up around these systems?
And is this runaway trend towards autonomous warfare inevitable?
Or will nations come together and choose a different path?
Today we're going to sit down with Paul Scharre to try to answer some of these questions.
Paul is the author of two books on autonomous weapons.
He's a former Army Ranger,
and he helped the Department of Defense
write a lot of its policy on the use of AI in weaponry.
So this is a critical conversation,
and we're pleased to have an expert like Paul
help us get a sense for how AI will change the way wars are fought.
Thank you, Paul, for coming on the show.
Thank you. Thanks for having me.
So I want to start by talking about a recent trip that you made to Ukraine,
which has become something of a laboratory for AI weapons.
What did you see there?
I was in Ukraine a few
weeks ago and met with government officials and people from the Ukrainian defense industry,
anywhere from large state-owned defense companies, all the way to just a whole host of small
startups. And the level of innovation in the technology and tactics in Ukraine is really
unprecedented. The big development right now in Ukraine is autonomous terminal guidance.
A lot of these drones are remotely controlled. They're piloted by a person,
but because there's a lot of drones,
there's also a lot of jamming that's going on
where people are jamming the communications link.
Because if the drone is remotely piloted,
well, once you jam that communications link,
then the drone's pretty useless.
And so people are adding in more autonomy,
particularly for the last mile
once a person has chosen a target to complete that attack.
But that is a stepping stone towards, you know,
in the future, more autonomous weapons.
So all of a sudden, we're seeing
this rapid advancement
in Ukraine, this proliferation of innovation around autonomous weaponry. Why do you think
we're seeing that right now? What's special about right now? Is it about the conflict or is it
about where we are in the technological development? Yeah, I mean, it's a great question. I think
it's both. So 10 years ago, we just couldn't see the types of things that we're seeing in Ukraine
now, but also war is a real accelerant of innovation. We're now over two years into this
war. It's settled down into a long, grinding war of attrition between Russia and Ukraine. And Ukraine
doesn't have the people or the industrial production to go toe to toe with Russia and trade
person for person in this kind of war. So they've got to find ways to be clever, be innovative,
and that's driving it as well. And I think it's just worth remembering that, you know,
war is thankfully rare. And so most of the time in peacetime, militaries are coming up with things
that they think will be valuable, but they don't get immediate feedback on what is going to work
or is not going to work. They don't find out oftentimes until they fight a war. And so now when you
have a longer war, as we're seeing in Ukraine, on both sides, you can get really rapid feedback loops
that can accelerate innovation very quickly. So let's pause here and define some terms for our audience.
We have not covered this topic before. What exactly do we mean when we talk about autonomous weaponry?
What's the difference between, you know, remotely operated, semi-autonomous, fully autonomous, human-in-the-loop,
human on the loop, give us a little bit of a tour of categories so we know what we're dealing
with here. Yeah, no, great question. And these terms get thrown around a lot, and, you know,
they're not always used the same way. So most drones today are remotely piloted or remotely
operated. And a lot of times that looks like a person actually just manually maneuvering the drone
and piloting it the way they would pilot an aircraft if they were on board the aircraft.
Sometimes that remote operation is a little more removed. For some of the more advanced
drones, like a Global Hawk, for example, it's a very large, very expensive U.S. military drone that
flies up at 60,000 feet, very high altitude. Some of those more advanced drones are flown
with a keyboard and a mouse, but they're still directed by humans where to go and what to do.
We're starting to move towards more autonomy in different functions for drones, whether it's
navigation or automated takeoff and landing, for example. I would say this is analogous to what we're
seeing in cars, where a lot of new cars today have a lot of autonomous features for specific
types of driving functions. Automatic braking, self-parking, intelligent cruise control,
automatic lane keeping, and you're sort of bit by bit starting to incrementally take over
some different functions of driving. Now, for cars, there's a clear vision on the horizon
of a point in time in the future, a fully autonomous car that won't even have a steering wheel.
Now, for militaries, at least for weapons, that vision of the future is one of an autonomous weapon
that would still be built by humans and launched by humans to go out into the battle space to perform some task.
But then once launched, would be on its own in using its programming or some onboard AI,
some machine learning algorithm that it's been trained on,
would identify targets all on its own and then attack them
and complete that attack and carry out the attack all by itself.
We're not quite there yet.
There have been a couple one-off examples historically,
but certainly we don't see autonomous weapons in widespread use.
But there's a lot of advancements that are taking us in that direction,
and that seems to be the arc of the technology right now.
And there are, you know, some of the terms you used: semi-autonomous, for example,
would be a weapon that has many of those functions, but a human is still choosing the target.
And sometimes people use the term supervised autonomous or a human on the loop to mean a
circumstance where the weapon could complete that engagement on its own, but a human could
supervise it and could intervene if things go wrong, just like if you had somebody maybe
sitting in a driver's seat of a Tesla on autopilot, and, you know, they're supposed to be,
at least in theory, hands on the wheel, being attentive, so
they could jump in if something goes wrong.
Those are all different possibilities for autonomy as well in weapon systems.
Adding to that, there seem to be two different kinds of narratives around weapon autonomy.
One narrative is this precision narrative that says,
if these things are guided by a kind of a calculus,
then you don't have some of the things that make war awful,
people who decompensate emotionally and start targeting the wrong people,
you know, long nights of low sleep and sleep-induced poor performance that result in people dying.
On the other hand, the narrative on the other side is that this kind of cold, calculating human not being in the loop
might lead to the normalization of casualties and the normalization of violence in a way that is unchecked by some of the human instincts to avoid it.
What do you think about these two narratives?
Well, I think you captured very well the two arguments
that are out there sort of in favor of autonomous weapons or opposed to them.
And, of course, there's been a movement of a number of different humanitarian groups and
several countries opposed to autonomous weapons and calling for a preemptive ban on them
before we see them sort of built and used in a widespread fashion.
But others have said that they could be more precise and more humane over time.
I think there's actually validity to both of those, and it's possible to envision a future
where both of those things become true. There could be
some conflicts, when you have militaries that care a lot about the rule of law
and avoiding civilian casualties, where in some settings, autonomous weapons might be more beneficial.
They might be not just more militarily effective, but also more effective in avoiding civilian casualties.
There might be other settings where they become a slippery slope towards people broadening the aperture
of who's targeted, leading to civilian casualties,
or they could lead to accidents, to situations
where maybe it's not an intentional use
of an autonomous weapon in a bad way,
it's not a war crime,
but the weapon makes a mistake,
and certainly, depending on the weapon and how it's used,
there could be a lot of civilian casualties.
So I think that actually we could end up in a future
where both of those visions become true.
And then the question is,
how do we approach this technology
in a way that's thoughtful in terms of how
we use it and how we govern it and regulate it to avoid some of the worst harms.
And it may be worth taking a beat here just to back up and say that the incentives,
the reason why militaries are engaging in development of autonomous weaponry isn't even
related necessarily to the precision of the strike. The incentives for militaries seem like
they're multifactorial from easier logistics to faster counter-response. Can you talk for a second
about why it is that militaries are rushing headlong into this technology?
Sure. So maybe it's worth unpacking here the difference between, for example, an autonomous weapon,
so a weapon system that itself would go out and attack targets on its own, versus the use of autonomy or automation or AI more broadly across the military space.
And we're seeing that militaries around the world are very interested in AI and automation for a whole wide variety of tasks,
for maintenance and logistics and personnel management, for the same reasons that
a lot of industries are: because it could improve efficiencies and save money and reduce
personnel requirements and make things more effective. And, you know, most of what militaries
do is not actually the fighting. Militaries talk about tooth to tail and sort of the tooth
being the fighting component of the military and the tail being everything else. And usually
it's, you know, seven or nine or ten times as many people and dollars spent on all of the support
functions. And there's enormous opportunity for AI and those things. And some of those, you know,
things like personnel applications raise the same kinds of concerns about bias and hiring and
promotion that you might get in other fields as well. The value for militaries in the autonomous
weapons space is really like, there's a lot of value for adding AI. Could you make more precise
decisions if you have image classifiers that help identify objects? Sure, there's value in that. I guess
one question is like, what's the value of taking away the human? Because we know that despite
all of the amazing things that AI could do, there's still lots of ways in which humans add a lot
of value, particularly in terms of understanding context and in novel situations where the AI
system may not perform as well. And there are two big reasons. One would be speed if there's
a circumstance where you need immediate reaction time. Just like the value of automation and
automatic braking in a car, for example, there are going to be these places in war
where split-second reaction times are really valuable.
And the other one is if the communications link is lost with some controller, with a drone,
as we started talking about.
But I do think there's a lot of value in humans, and militaries are going to find that
they're going to want to keep humans in the loop whenever they can, whenever that's feasible
for them.
Paul, this is a great moment to bring in a story from when you were an Army Ranger in Afghanistan.
You mentioned in your first book there was an incident involving a shepherd girl, which
shaped a lot of your thinking about human decision-making and the importance of context.
Could you tell us that story?
Sure, absolutely.
So there was an incident when I was an Army Ranger, I was on a sniper team, and we were up
on the Afghanistan-Pakistan border, and we'd infiltrated at night, and we were setting up a
hide site where we were going to watch for insurgents coming across the border.
This turns out to be, as an aside, like an insane task because the Afghanistan-Pakistan border
is massive and unmarked and mountainous.
So it's very much like sort of a drop in an ocean.
But in any case, that was the mission.
So we hiked up this mountain at night.
And when the sun came up, we were very exposed.
And there was not a lot of vegetation in the area.
It was about eight of us piled behind a couple rocks.
So very quickly, this farmer came out into his fields and he spotted us.
So we knew, like, people were coming to get us.
So we hunkered down.
And what we did not expect is what they did next: they
sent a little girl to scout out our position.
She was maybe five or six.
She had a couple goats in tow, I think as a cover; she was ostensibly herding goats.
But it was pretty clear that she was there to watch us.
She was not super sneaky, to be honest.
So she walked this long, slow circle around us, and she stared at us, and we stared back at her.
And we heard what we later realized was the chirping of a radio that she had on her.
And she was reporting back information about us.
So we watched her for a while, and she left, and after that, some Taliban fighters did come to attack us.
So we took care of them, and then the gunfight that ensued sort of brought out the whole valley, so we had to leave.
But afterwards, we were talking about how we would deal with a similar situation if we came across
somebody we didn't know, if they looked like maybe a goat herder, but we didn't know if maybe
they had a radio or something. Well, nobody suggested the idea of shooting this little girl.
Like, that wasn't a topic that was raised.
And that certainly would not have been consistent with my values that I was raised with
or what we were taught in the Army.
But what's interesting is under the law of war, that would have been legal.
The law of war doesn't set an age for combatants.
So by scouting for the enemy, she was participating in hostilities the same way as if she'd
been an 18-year-old male doing that same task.
So if you programmed a robot to perfectly comply with the law of war,
it would have shot this little girl.
Now, I think that would be morally wrong,
even if, you know, there could be some legal justification for it.
But it does beg the question,
how would you program a robot to know the difference between what is legal and what is right?
And how would you even write down those rules?
And how would it know to understand the context of what it's seeing?
And it just drives home for me, I guess, the just the significance of these kinds of decisions.
You know, in this instance, the fate of the war did not turn on that moment,
but it certainly meant a lot to that little girl and to us
to make sure that you were doing the right thing.
And machines may not always know what that is.
I really want to take a moment to zoom in here,
because what you're pointing at is right on,
which is the ambivalence that we all carry
around taking some of these decisions and making them procedural.
And, you know, for people outside the military,
forget about autonomy for a second,
even talking about concepts like acceptable levels of collateral damage
or acceptable levels of people killed who you don't want to kill.
In a way, this isn't specific to the military.
Like doctors will talk about what is the acceptable level of death in patients from a specific intervention.
And so it's hard to talk about war without talking about how difficult it is
to take some of these incredibly deep human intuitions
and human moral moments
and encode them into our society.
And just as one quick aside,
when I was in college,
the trolley problem was this philosophy experiment
around trying to decide
the boundaries of different meta-ethical theories.
It was useless.
And now, fast forward 20 years later,
we're having to program this into our autonomous vehicles
to decide who to kill
in cases where an accident is unavoidable.
And so this isn't just about war.
It's about how much do we cede
control to things that are programmed, and how capacious can those programs be around our morals
and our ethics?
And also, can we come up with the vocabulary of philosophical distinctions as fast as we need them?
I mean, one of the things you write about Paul is, you know, our previous laws of war don't
account for, you know, drones and new kinds of automated submarines.
And basically the laws and the categories that we've been guided by thus far are constantly
getting outdated and undermined by technology inventing millions of new categories
underneath that.
And so what that forces us to do
is look at the spirit of those laws
and then reinterpret them,
but the kind of the meta-challenge
that we talk about in our work
at Center for Humane Technology
is that our 18th century institutions
aren't able to articulate
the new distinctions as fast
as the technology requires that of them.
To put it in Nick Bostrom's terms,
AI is like philosophy on a deadline.
We have these urgent philosophical questions
and now we have a deadline
to actually answer them
because we are instrumenting our society
with more AI.
So with all of that in place,
I want to ask you, what are some of the actual philosophical questions around war that, as an
expert in the automation of warfare and the automation of violence, we need to be figuring out?
Yeah, I mean, it's a great question. There's several, and I think some of the challenges look very
similar to what people are facing in other industries and professions. Certainly there's a
class of problems that is sort of, okay, we're having to task a machine to perform something that
humans used to do. And now the rules that were implicit for humans, we have to write down,
as you explained for vehicles, for example. In some cases, maybe those rules weren't written down
just because we trusted human judgment to figure it out. In some cases, maybe in the case of, like,
drivers, maybe human reflexes aren't even good enough to be making a really deliberate,
conscious decision in the middle of the crash. But now we're going to have to write down
those rules. So that's one set of challenges. And that exists certainly in the military space as well.
There's a sort of, you know, additional problem of just trying to figure out what are the tasks that we should be automating, and in what context.
And, of course, one of the challenges there is that line's going to continue moving over time as the machines keep improving.
And I think, you know, some of the things that you were saying about, humans are much better at understanding context for decisions.
And so, you know, one of the ways that I think about this, at least in the military space, is if there's a type of task where there is a
clear metric for better performance, and either we have good data on what good performance
looks like, or we can generate that data, that's probably the kind of task where we can train
a machine to do it. Whether it's landing an aircraft on an aircraft carrier or some other
kind of skill, aiming a rifle properly is a great example of this. Like, if you choose a target,
we want the bullet to hit the target. And missing the target is bad, whether the machine is
doing it or the person. Now, those are the kinds of things where we probably want to lean into
automation once it's reliable enough, and we can get there. But there's a lot of things
where there isn't a clear right answer, and it depends a lot on context, what we want to think of
as judgment. So, for example, if you have an image in, you know, an infrared camera or a video
at night of a person, and we don't know what that person's holding, are they holding a rifle
in their hands or a rake in their hands, there's a right and wrong answer to that. And we could
probably train image classifiers to do a better job than humans if we have enough data. But
to then go to the next question, which is, is that person an enemy combatant?
Well, that's actually a lot trickier, and that might depend a lot on context of like,
well, what were they doing a minute ago, or 10 minutes ago, or what's their historical network,
or what's the sort of circumstances that they're in?
It could be that they're holding an innocuous object like a shovel,
but they're digging a hole for a roadside bomb, and they are participating in hostilities.
And that's really hard for machines right now.
And so those are the kinds of things that I think we're going to need human judgment
for the foreseeable future,
and those are the things that we want to kind of hold on to.
So I think those are the kinds of problems
that are going to be challenging
as we try to figure out
where are we comfortable using this technology.
A lot of this hinges on your sort of innate view
on the reliability of a system like this, right?
Because on one hand, if you treat the machine
as kind of a clunky thing that has okay friend or foe recognition,
but it doesn't have the subtlety that a human has on the battlefield
to make these split-second decisions.
Like, that's a very pro-humanistic view, and of course, we should wait.
On the other hand, you have the sort of dystopian version of this, where you have poorly trained
18-year-olds who are underslept, who have been emotionally decompensated in the field, making
decisions that perhaps a machine should have overseen.
And so I at least have this profound ambivalence over this question, and it relies not only
on the precision and how good we think these machines are at making these decisions, but also
on what ethical principles they get created under and how those get eroded.
Well, I think you make a great point that's really important, particularly in the wartime context when we think about things like autonomous weapons, to put in perspective, what is the baseline for human performance?
It's not always great, right? And so, like, humans commit war crimes. Humans make mistakes. Humans do terrible, terrible things.
And so sometimes when I'll hear discussions about autonomous weapons, I'll hear people sort of putting what humans do up on some pedestal, as
though it's like this pristine way of people fighting where, you know, back in the day,
people would look each other in the eye and appreciate their humanity before, you know,
killing each other with battle axes or swords or something.
It's like, that's sort of a really unrealistic depiction of what's going on.
And so we need to, I think, be realistic about what that baseline is, so then we can ask,
okay, as the technology is coming along, will it be improving things?
The flip side of that is I will often hear, sometimes in autonomous weapons
debates, people sort of painting this vision of people using technology in the most perfect way,
where everyone's careful and thoughtful. And the reality is, like, we look around the world, we do see
a lot of atrocities and civilian casualties. In some cases, if countries aren't trying to be
careful, technology is not going to help. It actually might make things worse.
These are big, complex questions. You've had experience in the military and worked at the Pentagon.
Are these kinds of conversations happening inside the U.S. defense establishment?
So, look, I'll admit that I'm biased here.
I've been out of the Pentagon for a decade now,
but I helped lead the working group that drafted the Pentagon's first policy
on the role of autonomy in weapons way back in 2012.
So they were fairly ahead of the curve on a lot of these issues
in terms of thinking through these challenges.
The current policy that's in place, and it was updated last year,
is fairly flexible in that it lays out some categories of things
that the military has done in the past and has good familiarity with that are fine to do.
And for anything that's sort of new, it creates a process for bringing together people from
different parts of the military community, lawyers, policy professionals, military leaders,
engineers to think through some of these challenges that we're discussing when they have
like a practical weapon system.
And people are saying, well, can we build this?
Can we build this thing?
Can we deploy this thing?
Is it safe?
Is it going to be appropriate?
And it's mostly what the Pentagon policy does
is kind of create a process for doing that.
I think it is something that they're being really thoughtful about.
I think that the robust debate that we have publicly
about military AI really helps press the Pentagon
to be thoughtful.
And one of the best things, ironically enough,
and the military wasn't happy about this at the time,
was Google's decision not to continue to work on Project Maven,
which wasn't about autonomous weapons.
It was just about sort of being involved in AI
to support the military overall.
Just a note here for listeners,
Project Maven is the name of the Pentagon's umbrella initiative
to bring Google's AI and machine learning
into their targeting systems.
It started back in 2017,
and it sparked a lot of internal discontent within Google,
as well as a very public staff protest letter.
It forced the military, I think for the first time,
to think about, like, oh, we really need to be able to articulate
to the broader scientific and technical community in America
how are we going to approach this technology?
And that led to then the DOD's AI ethics principles,
which they developed in partnership
with the broader sort of civilian tech community,
getting feedback from them.
The Defense Department has continued to refine
their policies on AI since then
and they're getting more granular, right?
And this is a challenge with all of these things,
is how do you go from these, like, lofty ethics principles
to something practical that actually shapes what you're doing?
But I think they're doing that.
Lately, a lot of these AI concepts are just becoming more real.
The Secretary of the Air Force, Frank Kendall, recently, in a bit of a stunt, flew in an F-16 fighter jet
that was being piloted by an AI agent doing simulated dogfighting, but not in a simulator,
out in the real world, flying a jet around.
So that's the state of the technology now.
It's coming along pretty quickly.
I'm comfortable with where the U.S. military is.
I'm a lot less comfortable with where competitors are, like China and Russia,
where we don't have the same degree of transparency.
AI technology is very global,
and we don't really know what those countries are doing.
And I don't have certainly the same level of confidence
in their ethical approach to this technology.
So looking out at especially the recent conflicts
and Russia's use of autonomous weapons in Ukraine,
which is increasingly the sort of laboratory
for innovating and iterating all these different techniques and strategies,
what worries you about how
other states, China and Russia, or non-state actors, are going to be using these
autonomous weapons?
Yeah, so I think, I mean, look, AI is a very global technology.
It's very democratized, very widely available.
And we're seeing that in a lot of the innovation in not just the Russia-Ukraine war,
but also in other conflicts in Nagorno-Karabakh and Libya and ISIS had a small drone army a few
years ago that they were using in Syria and Iraq.
So there's no question that we're going to see lots of countries, not just advanced
militaries, and also non-state groups using this technology. I think what worries me is that not
everyone is going to be thoughtful about avoiding civilian casualties, about complying with the
law of war. There have just been some recent reports about Russia using chemical weapons in Ukraine.
That's not, you know, it's not like up for debate whether chemical weapons are legal in war.
There's a global ban on chemical weapons, and still there are some occasional uses
by rogue dictators: Saddam Hussein used chemical weapons, and Bashar al-Assad in Syria.
So, you know, that doesn't give me a lot of comfort that they're going to approach this technology
in a way that's compliant with the law of war.
And, you know, similarly with China, there's not a lot of clarity about how China is approaching the technology.
Now, in the conversations that I have with Chinese scholars on this issue, there's a notable difference
in that with the U.S. military, I hear a lot of discussions
about the law and ethics and morals, and people maybe aren't sure about what to do in the future
and what right looks like.
But that's very much the frame with which they're approaching this technology, that we need to
ensure that we're being legal and moral and ethical about it.
I don't hear any of those things.
When I talk to Chinese counterparts, they are worried about control, and they are worried
about keeping humans in control.
So it's not as simple as they're going to just automate everything.
They're very intensely worried about political control and making sure
that their political leadership has tight control over military operations.
But the law doesn't have the same salience within the Chinese military.
And so that does concern me in terms of where we see the technology going forward.
Building on that, so far we've sort of been talking about one-sided ethics of,
is it okay to shoot the little girl?
Is it okay to do that?
But one of the scary parts of this is when both sides begin using automation,
and the tempo begins to outpace humans' ability
to control it intrinsically, because the decisions are made so quickly.
Can you talk a little bit about what you call hyperwar
or this sort of scaling of warfare into these inhumane timescales?
Yeah, I mean, this is, I think, the big worry in the long run.
It's that you're right, these are not simply decisions that one military makes in a vacuum.
It's a competitive environment.
And ultimately, militaries want to field forces that are going to win on the battlefield.
And if they lose a war, the consequences can be catastrophic.
for that nation.
We certainly see in Ukraine, for example, that that country is fighting for its existence
against Russia.
And so, you know, one of the concerns is that you begin to see this compression of decision
cycles, of the targeting cycle where people are identifying targets and making a decision
what sometimes is called the OODA loop in warfare, the Observe, Orient, Decide, Act
loop where people are sort of understanding the battle space and then making a decision and then
acting on it. You know, for one person, AI can accelerate components of that and actually
buy a human more time to make decisions, right? So if you can compress parts of that loop
that are easy for automation to do, you can expand more space for humans, if you're the only one
doing this. But when your competitor's doing it, they're accelerating their time cycles, too,
and now you get into the dynamic where everyone's just having to make decisions in split seconds.
Now, we've seen this in stock trading. It's not a theoretical concept. We've seen this whole
domain of high-frequency trading emerge, where algorithms are making trades in milliseconds
at superhuman speeds, and humans could never try to be in the loop for those kinds of trades.
And then we've seen accidents like flash crashes as a result of that,
I mean, in part because of high-frequency trading and other factors, too,
just these sort of weird interactions among algorithms, because of course, you're not going to
share with your competitor exactly how your algorithm works, whether you're in finance or in
warfare. I think what's concerning to me is the way that financial regulators have dealt
with this problem is they've installed circuit breakers to take stocks offline if the price
moves too quickly, but that doesn't exist in warfare. There's no referee
to call time out in war if things start to get out of control. So how do you then
maintain human control over war when war is being fought at superhuman speeds? I think this is just
the heart of the conversation when you push this whole conversation to its extreme. I mean,
each military wants to tighten its OODA loop, its Observe, Orient, Decide, and Act loop. John Boyd from the
Air Force came up with this concept. And basically, you're only as good as you are accurate in being
able to update your OODA loop. And as militaries, you know, build in autonomy with the incentive
of tightening that decision-making chain, tightening their logistics chain, tightening their
targeting chain, tightening their execution chain, they have that incentive. And the more that they
do that, the more their competitors do that, and even if they believe or are paranoid that their
competitors might do that, that's why even though we say we don't want, you know, these weapons
to be built, we can't guarantee the other guy's not going to build them, and so we keep accelerating
and building them ourselves. And it struck me in thinking about this, the context of the
concept of mutually assured destruction in nuclear war was a critical concept to create
essentially something that would inhibit this runaway escalation, because we basically said
as soon as one nuke goes off, it's going to create an exchange, a nuclear exchange, that will
basically create this omni-lose-lose scenario. And what worries, I think, so many people about
drones and autonomous weapons is the idea that it's kind of unclear what would happen. And the
phrase that kind of came to my mind when reading your work, Paul, was mutually assured
loss of control: that were we to sort of hit go on, okay, we think we're being attacked by
China, hit go, and then all the autonomous systems just go. Well, then they're going to set
their systems to just fully go. And both parties are going to get into a runaway escalatory
loop, and there isn't going to be a control. And I'm just trying to think about what are the
concepts that we need to prevent what we all don't want to happen, which is this kind of runaway
omni-lose-lose scenario that's more ambiguous with smaller-scale weaponry that's autonomous
versus the large-scale nuclear situation.
Yeah, I mean, I think that's exactly the right question,
and that's the challenge that we face.
I don't think it's like a today problem,
but it's coming.
As we see militaries add more and more AI in automation,
some Chinese scholars have hypothesized this idea
of a singularity on the battlefield in the future,
where the pace of AI-driven action
exceeds humans' ability to respond,
and you effectively have this situation that you're describing
where militaries have to turn over the keys to machines
in order to remain effective.
But then how do you maintain control of the warfare?
How do you end wars if the war is being fought at superhuman speeds
and then if there are accidents
or if these systems begin to escalate in ways that maybe you don't want,
taking a limited war that begins to spiral out of control?
It seems like one of the paradoxes here,
you know, human judgment is both fallible,
and we talked about the 18-year-olds who haven't slept and who are on the battlefield and all the mistakes that are going to get made in that environment.
But then there's also the cases where human judgment is sort of the thing that, frankly, has saved us because we wouldn't be here, but for the fact that that human judgment happened.
And, you know, I think about autonomous weapons being used by totalitarian states or dictatorships, where if you think about police officers or National Guard who are ordered to fire on their own citizens, there's something about the native human moral intuition: these are
my fellow countrymen, I'm not going to fire
on my own fellow human beings. How
should we be thinking about that? Well, I think
it's a very real concern, and it's one that people
have raised in terms of thinking
about autonomous weapons. Now, we've thought,
we've been talking mostly about autonomous weapons in a
wartime context, but this domestic policing
context that you raise is also very significant
because we can see historical examples,
like the fall of the Eastern Bloc at the
end of the Cold War, where
that sort of ability for soldiers
to lay down their weapons and say, I'm not going to fire
on my fellow citizens, is this sort of last check often on a dictator's repressive power.
And when you take those humans away, and it's robots effectively, I mean, it may not look like humanoid robots.
It could be robotic vehicles or stationary guns that are controlled by autonomy.
Or even if they're just remotely controlled, but by virtue of technology, by a much smaller number of people, then you sort of take away that ability
for ordinary people to say,
I'm not going to do this
and concentrate ever more power
in the hands of a small number of people,
a dictator and those surrounding him.
So I think that's a big concern,
and that, you know, sort of suggests
that maybe some kind of regulations
about this technology
are going to be beneficial
so that we can avoid that kind of future.
Let's talk about nukes.
You talk a
lot about the parallels with nuclear weaponry and the creeping automation around decision-making
associated with nuclear systems, including like Russia has this dead-hand system to launch
counter-strikes. Can you talk a little bit about automation within the nuclear context?
Yeah, so one of the interesting things about this is when you start thinking about,
okay, like, where is it appropriate in the military to use AI in automation? And, you know,
I think the first place that people go to is like, well, we shouldn't use it for nuclear weapons.
That seems like an easy one.
We shouldn't do that.
Now, the crazy thing is we actually have a fair amount of automation in nuclear command and control.
Already, in many cases, we've had it for decades, both throughout the Cold War in the U.S. and the Soviet Union, to help speed up elements of processing.
So, for example, if the president were to make a decision to use nuclear weapons, there's elements of automation and sort of carrying those orders out to people to make sure that they're executed correctly.
Now, a lot of that's human-driven, but there are going to be places where militaries do start to use AI that touch on things like intelligence collection or early warning or parts of automation in executing decisions, for example.
But that's a place we want to get it right.
And if you can use the technology in ways that help to make sure that the information people are getting is more accurate, well, that's good, that's valuable.
We want to do that.
If we can reduce the number of false alarms that come in, for example,
that's valuable. But we don't want, I mean, this is a place where AI's unreliability is a real concern.
Now, the United States government, Defense Department, has said they have an explicit policy
that humans will always be in the loop for any decisions pertaining to the decision to use nuclear weapons
or executing a decision to use nuclear weapons by the president. And that's, I think, really foundational.
The UK, the United Kingdom, has a similar policy that they've come out with.
We've not seen that from all nuclear-armed states.
And we haven't heard anything from Russia or China, for example,
or other nuclear powers.
But that seems like a place where, as we're thinking about how do we approach this,
it seems like a low bar to set, that we can agree, okay,
humans should be in the loop here.
And I think it would be important to set that expectation internationally
that people are going to be responsible
about how they use this technology as it relates to nuclear weapons.
And those commitments from the U.S. and the U.K. were only made recently, right, in 2022?
That's right.
That came out in the U.S. nuclear posture review in 2022 and roughly the same time frame from the U.K. as well.
And it's been reported that Russia actually wants to automate the entire kill chain with nukes. Is that right?
So Russia has this, they've done a bunch of things that from a U.S. sort of defense analyst standpoint, generally seemed kind of crazy.
One of them is that during the Cold War, the Soviet Union had built, in the 80s, a semi-automated dead hand
system called Perimeter. And so the way this worked was it would have a series of sensors
across the Soviet Union, and they were detecting seismic activity, light flashes, other things
that were intended to detect a nuclear detonation on Soviet soil. Now, once the system was
activated, once someone had turned it on, if it detected these nuclear detonations, it would
wait a predetermined amount of time for some kind of signal from higher authorities.
If there was no signal, presumably because Soviet command had been wiped out,
it would transfer launch authority from Soviet High Command to a relatively junior officer in a bunker who had been protected.
Now, there was still a human in the loop, but it would basically bypass the normal chain of command.
Even sort of crazier, the Soviets never told the Americans about this.
It never came out until after the Cold War, sort of violating the Dr. Strangelove goal of,
if you make a doomsday device, tell the other people you've made a doomsday
device. And the wild thing about this is, this whole thing seems very risky, and why
would you do this? It had a certain logic to it. And the logic was that one of the challenges
in nuclear stability is that if you get warning that someone is launching missiles at you,
you can have this use or lose dilemma of having a very short time, you know, maybe 10, 15 minutes
to make a decision to launch your missiles before your missiles get wiped out or your command
gets wiped out. And they wanted to reduce that pressure on themselves. So, in theory,
they could turn on Perimeter and say, you know what, even if the Americans get us in a first
strike, perimeter will retaliate and we'll get them back. So there's a certain logic to it,
but like a lot of things in the nuclear world, like the logic is also a little bit nuts.
And so, you know, according to some reports, the Russian military said that the system is still
operational and has been upgraded since then. We don't know a lot of details about it, but it's
certainly an indication that the Russians are likely to think about risk in this space
in a very different way than, say, the U.S. military would.
So with nuclear weapons, we've entered a phase of an uneasy, perhaps, but seemingly stable,
detente. When we talk about moving towards autonomous weaponry, AI-enabled weaponry,
we at the Center for Humane Technology think a lot about the way that the incentives end up shaping
the outcomes you get. And you have a set of recommendations about how to shift those incentives
around autonomous weaponry to make sure that we arrive, or hopefully arrive, at a stable deterrence
regime. What are those recommendations, and how do they shift those incentives?
I think there's a couple things. One is, you know, we need to have rules. If we look at rules
historically that militaries have been able to agree to and then hold in practice in warfare,
which is challenging. Sometimes there are treaties that then you get to war and then nobody follows
the treaty anyway. So that's maybe not the best case study to build your example on. But the ones
that have been successful follow a couple of clear patterns. So one is that the rules are
very clear, and it's known sort of whether you're crossing the line or not. So rules that are
ambiguous or gray are not helpful and often are violated in war. Another is that militaries are able to
comply with these rules in practice. Political leaders have imperfect control over their military
forces. And so that's also important. So for example, in the early days of World War II,
Britain and Germany both refrained from bombing populated areas
when they conducted their aerial bombing campaigns.
And in fact, Hitler put out an order that the Luftwaffe was not to bomb populated areas in Britain,
only to bomb industrial targets for the war,
not because Hitler was a good person, but because he was afraid of the British Air Force.
And he was worried about retaliation.
This broke down when one night German bombers got lost over London
and bombed Central London by mistake.
and Churchill retaliated with the bombing of Berlin,
and then afterwards, Hitler declared that they would bomb London,
and that London Blitz was the result.
So, militaries have to be able to comply with these rules that they're trying to follow.
And the sort of cost-benefit calculus for militaries of, like,
what are they giving up, needs to be in their favor.
And that's part of the reason why we've been so successful,
with some exceptions, as I mentioned earlier,
but generally successful with countries walking away from chemical and biological weapons today,
in part because they're not that useful on the battlefield,
and we've seen this in practice.
As militaries have used them, particularly against troops that have chemical gear,
they somewhat slow down their movements,
but they're certainly not decisive in the way that nuclear weapons are, for example.
And it's been very hard for countries that are nuclear powers
to try to get them to give those up because of their value there.
So I think when we think about autonomous weapons or other forms of military AI,
trying to come up with rules that can meet these criteria is difficult.
And in part because a lot of these definitions of AI and autonomy themselves are slippery.
Does it cross the line? Is it autonomous enough? That actually can sometimes be really challenging.
And I think it's a hurdle here to coming up with rules that might be useful in practice.
So how do you, Paul, choose where to spend your political capital?
Because on one hand, you've got Pollyannish proposals, because people aren't willing to give up control and the possibility of an edge in warfare.
And on the other hand, you have the idea that we get a few milquetoast restraints that sort of gesture at the problem, but don't fundamentally make us arrive at a stable equilibrium point.
What do you think the most effective potential international agreements are?
Well, thanks. I have given this a lot of thought over the last, like, 15 years or so.
So I think, look, one of the things I think is challenging in the diplomatic discussion right now is it's a very binary discussion of either we somehow have a comprehensive, preemptive, legally
binding treaty that would ban autonomous weapons, or we do nothing.
And there is a lot of space in between, and I think we can see that there is not at the moment
political momentum for a comprehensive ban that would be effective, because if it doesn't
include the leading military powers, why do it?
But there's a lot of space in the interim.
So, for example, you could see a narrower ban on anti-personnel autonomous weapons.
That is to say that autonomous weapons that target people.
I think that's more doable for a couple of reasons.
One is that from a military standpoint, you're giving up something that's not quite as valuable.
So you could see the rationale why maybe you could imagine a future where you need autonomous weapons to fight against fighter jets that are also autonomous.
There's no way to keep a human in the loop in that kind of world.
Or you're attacking radar systems with automated fire responses.
Well, humans are not that fast.
Like outrunning a machine gun has not been an effective tactic since World War I.
And so, you know, at the speed at which humans move, you can keep a human in the loop.
And for militaries, for high-end warfare, a lot of that is against machines.
You're targeting artillery and radar and ships and submarines and aircraft.
People, the infantry, I mean, I was in the army, I was in the infantry, and we think very highly of ourselves, but they're not the centerpiece of major battles between militaries.
So I think we're giving up something less valuable there.
But I think also the need is higher in terms of the risk there, right?
Because if there's, let's say there's an accident and this autonomous weapon is targeting the wrong things,
you can always get out of a tank and run away from the tank, if it's targeting the tank.
If it's targeting you, you can't stop being a person.
And so the risks to combatants and to civilians are much more severe.
So that could be one approach.
The U.S. State Department recently led over 50 countries to come together
in a political declaration, so not a legally binding statement,
but nevertheless, an international statement surrounding responsible use of military AI.
And one of the things in that agreement was about just like test and evaluation
to make sure your things are reliable and they work, don't malfunction.
That's a place where I think we could press on and get some value of giving guidance to countries
to make sure that if they're going to use AI, they do it in a way that's responsible and ethical
and is safe,
and we don't see malfunctions.
So I think there's actually a lot of space to
explore here that could be really beneficial.
I think this mirrors (is it called permission access controls?) what the U.S.
sort of distributed to other allied partners that do have nuclear technology, basically
making sure that they're permissioned appropriately and wanting to make sure that we democratize
the safest and best permissioned sort of control systems so that the world is safer
because we've increased the baseline of all of our partners.
Am I getting that right?
Yeah, so the technical term is permissive action links, which, well, what you said actually
is more intuitive than what a permissive action link is.
But it's, so, yeah, exactly.
It's a safeguard on nuclear weapons to make sure that they are only used by an authorized
person, when authorized by whatever their national authority is.
And the U.S. has helped, as you said, spread that technology to other nuclear states
because it's not in our interest for their nuclear weapons to fall in the wrong hands.
So, Paul, one of the things you write about in
your book is around how AI changes the game of war. In the same way that, you know, when you
let humans play Go for thousands of years, they play it a certain way, they have certain
strategies, when they play chess, the same thing. And then when you suddenly introduce AI,
the AI discovers a new move that no human's ever done. In Go, it was move 37. You reference in
your work the recent examples of dogfighting simulations, where you have an AI F-16, I think.
What happens, what new moves sort of are discovered by AI systems that humans wouldn't do?
And how does that change the game of aerial dogfight?
Oh, absolutely right.
We see these same phenomena with military systems.
So a couple years ago, DARPA, the Defense Advanced Research Projects Agency,
the Defense Department's sort of Department of Mad Scientists,
the ones that do kind of crazy experiments.
They trained an AI agent to compete in a dogfighting competition.
So they started out with a whole set of different companies in a simulator.
The winner was a small startup called Heron Systems
that beat out defense giant Lockheed Martin in the finals,
and then they went head-to-head against a human-experienced fighter pilot.
And they absolutely crushed the human.
Now, some caveats are
important here: that was in a simulator, not in the real world, and
there were a couple of things that were simplified for the simulator.
But nevertheless, they are now actually flying subsequent iterations of AI agents
in real-world aircraft in F-16s and doing simulated dogfights in the real world.
So this technology is mature.
Now, but what's exciting is not just that it was better than the human, but that it fought differently, as we see in other areas.
So in particular, one of the things that the AI agent did was, as the aircraft are circling each other, there's a moment where the aircraft are nose to nose, and they're racing at each other hundreds of miles an hour, and there's a split-second opportunity to get off a gunshot to take out the enemy.
Now, humans don't make this shot.
It's almost impossible for humans.
And in fact, it's banned in training because it's dangerous to even try
because you risk a mid-air collision as you're racing head-to-head at this other aircraft
at hundreds of miles an hour.
Well, the AI system very much could make this shot.
It could do so with superhuman accuracy.
But even more interesting, it learned to do it entirely on its own.
It was not programmed to do that.
It used a reinforcement learning algorithm, and it got rewards,
and it sort of discovered this tactic.
Now, humans had heard of this before. They
just can't do it, but it highlights how, as in other areas, the value of AI is not just being
better than humans, but also fighting differently. And it opens up a new space of possibilities.
And in fact, when you look at gaming environments like StarCraft and Dota 2 and chess and Go and other
things, you see a lot of commonalities of ways in which AI systems play differently than humans.
Some of them are really obvious, better speed, precision,
but some of them are different in kind,
like one of the things that comes out in a lot of gaming environments,
the ability of AI systems to look holistically at the game space.
And this is something that chess grandmasters have talked about
with AlphaZero, for example,
that it's able to balance moves across the board better than humans often can.
We often see, with some of these AI agents,
very rapid shifts in tactics and aggression.
We see this in poker, for example,
where they're able to finely calibrate
the risk that they're taking,
which certainly would have tremendous advantages in warfare.
And so there's certainly a tremendous space
of opportunity of AI changing warfare
in very significant ways.
When I hear that story,
part of what's terrifying for me in that
is if you have one side that is able to put AI
in the cockpits of their fighter jets
and get off that shot in a particularly inhuman way.
Isn't the other side basically forced to do that
because otherwise you lose?
And isn't that part of that control problem in and of itself?
That both sides are de facto racing towards implementing these systems
that we can't control.
I mean, yeah, I think you've put your finger on exactly the problem, right?
Which is in the short term, sure, we integrate AI,
and it makes militaries more effective.
But sort of, you know, where's the endpoint here?
And the endpoint is one where a lot of these
functions are automated, and the combat is in the hands of AI, and humans are still being
killed.
Like, to be clear, I don't think there's a future that I would envision of
these bloodless wars of robots fighting robots.
I mean, that would be great.
But I think the unfortunate reality is that we'll see, in the future, humans still fighting
humans, but with robots and maybe autonomous weapons, the same way that humans fight humans
with missiles and aircraft today.
And the reality is that there will need to be real human cost to warfare for wars to end.
And so we'll still be on the receiving end of some of these technologies,
but if we continue to lose control over them,
I think that's a very terrifying future to imagine,
where we could see these potentially really destructive tools being used in ways
that might be hard for us to control or to stop.
How do you think war changes in the next five to ten years if we do nothing?
What are we on track for war to become?
And then what do you see war becoming if we are able to successfully intervene and limit these technologies?
Yeah, I think maybe to just talk a little bit about time frames.
I actually think that in the next five to ten years, there will be changes.
We'll see more autonomy.
We'll probably see the introduction of autonomous weapons, at least in a limited fashion.
But I think the changes will be modest.
Militaries are moving quite slowly on integrating AI, for better or worse, depending, I guess,
on your point of view, but they're pretty far behind the civilian sector in this space.
But over the long run, over maybe the next several decades, I think the changes are likely to be
quite profound, at least as significant as the changes that the Industrial Revolution brought
to warfare, where we saw that the introduction of industrial-age technology dramatically
increased the physical scale of warfare, the mechanization, what entire societies could bring
to bear in World War II, for example, mobilizing their industry for war, bringing enormous amounts
of firepower, destroying, even before nuclear weapons, really entire cities in Europe and Asia.
And then AI is likely to do something similar to the cognitive aspects of warfare,
accelerating the speed and tempo of war and of decision-making, slowly pushing humans out of the loop.
And I think the risk here is that if we do nothing, we end up in a situation where we have
militaries that are quite effective and then go out and fight wars that affect us and that have
real human consequences, but that humans are not in control of once they begin, that we
could see situations where machines escalate wars in ways that humans aren't prepared for,
even start wars or cause crises to spiral out of control, that it makes it more challenging
to limit conflicts and it makes it more challenging to end wars. It's frankly hard for humans
oftentimes to end wars because of political commitments and because leaders don't maybe recognize
when they're losing. But if you add in a layer where they don't have the ability to control
their military forces effectively, that gets much, much more challenging. And, you know, we've been
fortunate enough to live for almost a century now without a large scale global war. But the
consequences of such a war would be absolutely catastrophic to humanity, even if it never
went nuclear. I mean, the scale of destruction that we have already in our inventory,
if we really were to see great powers mobilized for war, would cause enormous human suffering.
And I don't think that we should take for granted the peace that we live in, and we want to be
mindful of how emerging technology is changing some of those dynamics. And if we do things right,
the goal would be to sort of find a way
to skate through these dangers.
I think they'll continue to hang over our head
just the same way they do with nuclear weapons.
Like we have a lot of nuclear weapons out there in the world
and we've been able to avoid a nuclear war.
We don't know what the future holds.
And we don't know that we'll be able to look, you know,
70 years from now and say that that remains true.
But we try to find a way to navigate through those kinds of threats
and have as stable as possible a situation.
And with autonomy and AI,
if we can come up with a set of rules that countries can agree upon, that are pragmatic,
that are realistic, that take into account the realities of warfare and how militaries fight,
and that are achievable, then maybe we can find ways to buy down some of that risk and reduce it
and avoid some of the most catastrophic harms.
Hear, hear. That was good. Thank you, Paul.
It's a bummer. There's so much more I want to talk to you about.
But this has been a great conversation, Paul. Super appreciative of your time,
and I hope that the policymakers in the audience of this podcast
really take what you have shared to heart.
Thank you. Thanks for having me.
Paul Scharre is Executive Vice President of the Center for a New American Security
and the author of Four Battlegrounds: Power in the Age of Artificial Intelligence.
Your Undivided Attention is produced by the Center for Humane Technology,
a nonprofit working to catalyze a humane future.
Our senior producer is Julia Scott.
Josh Lash is our researcher and producer.
Kirsten McMurray is our associate producer,
and our executive producer is Sasha Fegan.
Mixing on this episode by Jeff Sudaken.
Original music by Ryan and Hayes Holiday.
And a special thanks to the whole Center for Humane Technology team
for making this podcast possible.
You can find show notes, transcripts, and much more
at humanetech.com.
And if you like the podcast, we'd be grateful
if you could rate it on Apple Podcasts
because it helps other people find the show.
And if you made it all the way here,
let me give one more thank you to you for giving us your undivided attention.
