Angry Planet - The Horror of AI Generals Making Command Decisions
Episode Date: May 28, 2025
Listen to this episode commercial free at https://angryplanetpod.com
Palantir, Anduril and a suite of other Tolkien-inspired tech nightmares want to integrate artificial intelligence into every aspect of the U.S. military. Both companies have software suites they're pitching as agents that will help make command decisions during combat. An AI general, if you will. Yes, that's a terrible idea. On this episode of Angry Planet, Cameron Hunter and Bleddyn Bowen will tell us why. Hunter is a researcher at the University of Copenhagen and Bowen is a professor of Astropolitics at Durham University. They've just written a paper that skewers the idea that AI will ever be able to make command decisions.
The narrow definition of AI
The folly of the AI general
The games AI can't win
"Targeting things is a command decision"
The IDF's use of Microsoft's AI systems
"The enemy gets a vote"
Killing more doesn't mean winning more
American military as a "glass tank"
Matthew gets lost in a rant
"They don't even have an animal's intelligence"
The very real military uses of AI
We'll never have a model of an AI major-general: Artificial Intelligence, command decisions, and kitsch visions of war
Palantir's pitch
Palmer Luckey on 60 Minutes
Scientists Explain Why Trump's $175 Billion Golden Dome Is a Fantasy
OpenAI Employees Say Firm's Chief Scientist Has Been Making Strange Spiritual Claims
Eastern Europe Wants to Build a 'Drone Wall' to Keep Out Russia
How Palantir Is Using AI in Ukraine
Support this show http://supporter.acast.com/warcollege. Hosted on Acast. See acast.com/privacy for more information.
Transcript
Love this podcast.
Support this show through the ACAST supporter feature.
It's up to you how much you give, and there's no regular commitment.
Just click the link in the show description to support now.
Hey there, Angry Planet listeners.
Did you know that Angry Planet is almost entirely listener supported?
If you go to Angry PlanetPod.com, you can get instant access to all of the mainline episodes,
commercial free and early.
There's one up right now that is all about the birth of the Old West and why America
loves a gunfighter. Again, that's at angry planetpod.com, and we hope that you sign up.
Hello and welcome to another conversation about conflict on an angry planet. I am Matthew Gault.
Jason Fields is visiting Philadelphia for some reason, but that's all right. I've got two
wonderful guests with me here today. We're going to talk about artificial intelligence
and the big beautiful generals, folks, and why artificial intelligence perhaps can never be one.
Can you all introduce yourselves, Cameron, if you want to go first.
Hello, I'm Cameron Hunter.
I am a post-doctoral researcher at the University of Copenhagen, where I mainly
analyze the People's Liberation Army Rocket Force of China, but through a sociological
perspective.
And occasionally, I get together with Bleddyn and complain about artificial intelligence.
Hi, I'm Bleddyn Bowen.
I'm an associate professor.
of astropolitics at Durham University in the UK, where I also co-direct the Space Research Centre.
My background is in war studies and international relations, and I've had a career-long
specialism in war studies and outer space, and of course I keep an eye on modern military
technologies in general, and I keep an interest in military theory, hence this article came about
and my contributions to it from the military theory perspective.
So, yeah, I was cruising around online one day, and the algorithm was serving me things as it's wont to do.
And of course, I'm very interested in mid-century musicals.
And so I had a little Gilbert and Sullivan reference thrown at me, but it was about artificial intelligence and war.
And it caught my eye, and it's a great paper.
And I will link to it in the show notes.
but I wanted to have you on because I am, although I think artificial intelligence is probably here to stay, maybe not in its current form.
I do think it's being overhyped in a grand way.
And I think y'all's paper really kind of cuts to the heart of it.
So I want to get into that.
Can I first, I guess really when we talk about artificial intelligence, one thing we need to do at the very top is define terms.
So you guys are kind of talking about, I think, a very specific phenomenon.
And to be fair, it's a specific phenomenon that I've seen being pitched quite frequently lately by companies like Palantir and Anduril.
You know, I think after you all published, Mr. Palmer Luckey was on 60 Minutes, kind of showcasing a little bit of what you criticize in this work.
So when we say AI, what exactly are we talking about in this conversation?
I mean, that's a time, right?
Okay, well, I think for me and Bleddyn, the lucky thing for us on this question is that we don't have to define it for all applications.
We're just interested in it in a military context.
And the current paradigm really is predominantly taking large language models, the kind of generative AI, and trying to apply that to military affairs in some way.
there's also the side aspect of image recognition as well, but the underlying logic of that
is kind of the same from a philosophical perspective, which I guess we'll dig into in a bit.
In the paper, we use the phrase narrow AI, which is interesting because now I don't see
anyone using the phrase narrow AI to talk about these things, but when we were writing this
in the, well, in the pandemic years really, 21, something like that, narrow AI,
was the word that kept being used.
So, yeah, it's a pattern recognition and talking about the automation of decision-making around that.
And, yeah, in the military context, we don't really make a distinction between command in tactics and command in strategy.
They require the same kind of logic.
So we really get into the weeds of the logic of these systems, which apply regardless of their applications.
So when we're talking about AI here, when we say LLMs, I think most people would be familiar with, like, ChatGPT.
It is the thing that a lot of people are using and talking about when they say AI right now.
And what are kind of the pitched military applications of LLMs that y'all are maybe throwing a little bit of cold water on here?
The most important is probably Palantir's artificial intelligence platform, which they specifically
have designed for a defense application in the US and with US allies.
It's a kind of dashboard of sorts, according to their promotional materials.
You can run it on a laptop sized device, and it will do admin tasks for you, sort of writing
our orders up, things like this, but it will also suggest courses of action in the marketing
video they've supplied, including suggesting what you should blow up first and whether you
should take this path forward or that path. And that's really where Bleddyn and I are seeing that this is a sort of robot general of sorts, even if at this stage what they are pitching is a sort of advisor to your military officer.
Yes, and there's, as you said in your earlier comments,
we recognise that there are areas in NATO militaries in particular
that they are using automatic software
and things you can call pattern recognising algorithms or LLMs
or image recognition stuff already,
but in very limited and tightly bounded scenarios or situations.
so close-in air defense weapon systems,
that's an area where you can start automating things
when the authorization to start firing at things they pick up is given
because you're in that moment where anything coming in at a particular speed
to your ship is clearly not good news
and sometimes machines can actually detect
and start firing on those much faster than human operators could
and in those very tight areas
it's where automation makes sense
but it's a very clearly defined,
a mechanical situation
where it's just, yeah, find this target,
fire that high-powered machine gun at it.
The trouble we found then was
with things like pattern recognition software
and things that looked like that Palantir dashboard
was that people in international relations
and strategic studies or war studies
were then starting to extrapolate that
and personify them as commanders or generals.
or colonels in their own right
or making them advisors
to generals or commanders
and drawing a lot on
the experience of
various kinds of computers
whether they are expert
programmed systems
or neural network program systems
or LLMs playing games
and taking that
as evidence that
well,
Clausewitz said war is like a game, therefore AIs will be good at war because they can play games and beat humans.
And for us, that was a deeply, logically flawed and empirically flawed argument, because war is not a game in the real world.
That's just a metaphor that Clausewitz uses very briefly in the opening pages of his treatise.
So there were many sorts of extrapolations being made from some of these systems to AI commanders that we didn't like, and we wanted to pull apart the deeper logics, because people, when talking about the logics and the knowledge and the limits of these technologies, really...
There's a lot of slippage that goes on, you know, where they'll talk about one type of machine
system like Deep Blue, which is quite old and played chess if you've not heard of it,
and then compare it immediately to AlphaGo, which uses a completely different kind of
logical basis and then say, well, they're both good at games and war is like a
game. So, you know, but they're actually sort of two quite different technologies.
And also, that was being lost. And also two very different games with, uh, very different amounts of things to memorize, right? Like if you really want to get into chess, before your intuition kind of kicks in, you spend a lot of time memorizing all the plays and all the possible different combinations of things.
And it is, maybe not easy, but it is possible to translate that into machine code.
AlphaGo, you're going up a scale.
It's much more complicated.
And I believe the AlphaGo team then subsequently attempted to train a Dota 2 bot,
which is a far more complicated game.
And it was not going so hot, because there were way more variables.
And so kind of what in my understanding is looking at like what Palantir has pitched is that you have,
you feed all of the information from the battlefield and from your own like logistics network
into like a big database.
And then an LLM has access to that database and access to real time information about the battlefield.
field and then if you want can then make recommendations about what to do in the moment.
And people are talking about, although it has not been like, as far as we know, has not been set up yet,
people using this kind of thing to make like command decisions in the moment.
That is something that people are talking about, right?
Yeah, absolutely.
They're using them in war games, which are preparations for real practice. And they're also using them for targeting, right? Which is one of the things that Bleddyn and I really wanted to make clear in the article, and of course we do it in an academic way,
which means that it's not actually clear in any way at all, so sorry for that, is to say that
targeting things is a kind of command decision. It really matters how you go about targeting,
because behind that is a theory of how the violence you're using achieves your objective.
and ultimately will bring you closer to military victory.
So when we hear these fairly credible rumors of the Israeli military
using some kind of induction-based AI in the form of labelling.
Those aren't rumors, I don't think.
Like, you know, the Associated Press has reported that out.
We've had those reporters on the show before.
Like, I would say that that's happening.
the IDF is using Microsoft's AI systems to decide targets to strike in Gaza.
And, I think, southern Lebanon as well.
Right.
So those are command decisions being taken by a machine.
I believe reading the report, the suggestion was that there'd been some kind of human oversight,
but it said something like 20 seconds per target.
So, I mean, this is just, this is not meaningful human control by any definition.
And, you know, what you mentioned earlier about, you know, putting in all the data you gather into some sort of model.
A key problem with our way of building computers and doing logic is that so much about real war can't be quantified, can't be translated into binary code. It can't be ones or zeros.
So there's the immaterial stuff about war that can never be coded.
And that's on top of things you can observe, but then can't necessarily express in a way that a machine can really parse.
So in the paper we have the example of the Battle of 73 Easting, which was in the Gulf War, wasn't it, Cameron, 1991, the First Gulf War.
And the Americans had some of the most detailed and comprehensive technical data of like any sort of major conventional battle.
And they put it through their sort of big computer systems and simulations.
And they actually didn't find anything extra out, just a few extra footnotes, a few qualifications, with all the technical data that they had about the performance of each vehicle and platform and weapon system, relating them to command, etc.
And even then, there's a limit to how good your material knowledge is in terms of how useful it is for the other things: how people act on the day, what goes wrong, what Clausewitz refers to as friction, which you can never predict, and also what you don't know that you don't know as well, to start sounding like Donald Rumsfeld.
We're straying into the world of military history, and as any branch of history will tell you, you never have complete knowledge, and that history is always shifting and changing as new information comes to light generations later, or new interpretations of knowledge and debates and arguments come through as well.
So it's not like the past is a solid, unchanging data set that you can just pump into a machine, into ones and zeros. That's just not how the real world works and how we try to understand the real world.
So there are real limits on what you can pump into these systems. And what the Israelis are using is a very specific kind of information being given to a very specific algorithm to generate effects that ultimately the leaders are happy with, or they don't care about the negative effects of.
And maybe you all don't know, this is one of those things that's maybe a little bit more shadowy.
Something like this is maybe also being operated in Ukraine as well, right?
I've heard some of the tech bros talk about how so-and-so piece of technology that they happen to be selling has been road tested in Ukraine.
But this is one of those, I think, you know, as academics we can be, this is another area where we're annoying, in that our level for evidence can be quite high.
And so there's not anything I've come across so far that I could really get my teeth into and pull apart.
But yeah, I've had these same rumors.
Even so, it would be mostly around targeting uses, then I presume.
And a problem in a lot of the especially pro-AI literature,
but this is not restricted to the pro-military AI literature,
is that it reduces war to battles and targeting.
And, you know, if you do a course in war studies,
one of the first things you should learn is that strategy is not just about
winning battles. So even if you keep hitting the targets you think you want to hit with the help
of an LLM or something choosing for you, or speeding up your processes, or having a rapid OODA loop, which is another tech bro fantasy that made its big way into British politics in the Boris Johnson and Dominic Cummings Conservative government, it doesn't mean you're making better decisions. It doesn't mean you will get that strategic victory or win that your polity needs, or that is better for everyone, or better for the people you care about, or your interests, just because you can make certain decisions faster or you can start automating things. So a lot of the literature that we were critiquing, again, it just looks at targeting and very little else.
Yeah, the history is littered with militaries that won a bunch of battles and then ultimately lost the war, right?
Yeah.
I mean, I deal with the same problem.
Yeah, I deal with the same problem in, you know, my core area in academia, which is, you know,
military space, space power theory and space warfare.
It's still so much of the thinking is about this satellite versus this interceptor
or these ballistic missiles going on this flight path with that kind of decoy, etc.
There's no systems thinking about the large-scale campaigns, no operational art in the thinking.
There's no thinking of the strategy and the politics behind it.
And you just see thinking of we just need to keep winning these battles or we can't afford to have any kind of threat in this domain because then we will be useless.
Despite the fact that in the last 20 years, the Americans have totally dominated, say, outer space and yet have lost in Afghanistan and Iraq.
I think this is, if you'll go down a tangent with me here,
just because it's in the news this week,
that's the core problem, I would say,
with the proposed U.S. missile defense shield,
the Golden Dome, right?
He just made a face, audience.
Putting aside, like, even putting aside the physics of,
can you build enough stuff to shoot down a nuke
that's coming, like, from North Korea.
Setting that aside.
building something like that ignores the geopolitical implications of constructing it, right?
Yes.
Can you talk about like why building a system?
There's something that we call the fallacy of the last move.
Yeah.
In strategic studies, right?
So one of the first things you're trying to drum into students is that the enemy gets a vote.
They will, they will react.
and then you will have to react to the reaction.
And again, this comes back to this,
the same mindset that makes you think that an LLM
is going to be a great asset on the battlefield
is the same kind of mindset that makes you think
that you'll do this one masterstroke
and then, you know, you'll win everything from then on.
Yes, and there are plenty of countermeasures to any such system in this field.
And it's not, you know, it's not beyond the reach of any, you know, even the smallest nuclear power.
So, you know, even for North Korea,
the answer for dealing with a possible Golden Dome kind of system
is pretty straightforward.
And the ideas are really old as well.
So you can read the classic nuclear strategists who were talking about very advanced missile
defense systems, even back in the 50s and the 60s,
because they were thinking ahead about where the technology was going with ballistics and MIRVs.
And the big ideas haven't really changed that much.
and the ability to field those technologies has only gotten easier over the years
as better sensor technologies and high-tech systems keep on spreading.
So, you know, and that will make the geopolitical blowback for the Americans much worse
if they go ahead with putting space-based interceptors, you know, up there.
They'll be much better used as anti-satellite weapons, though.
They won't protect America from a nuclear war,
but they might be better placed to disrupt other satellite operations.
Which might be part of the gag, I think.
But we don't have
we don't have a lot of
concrete details about what they're actually going
to do. I mean,
they didn't even tell us which contractors
have won the bid. I have some guesses.
Yeah, you get no prizes
if you guess that correctly.
Elon Musk, audience.
The guy that's
already got a bunch of satellites in orbit, has proven that he can put satellites in orbit.
Anyway, why do you think, back to, back to AI,
why is this from a military perspective?
Why is this attractive?
Why is this gaining traction?
Like, I understand why, like, tech companies are pushing it,
because they can make money, right?
Like, that makes sense to me.
And it's another use case for the thing that they're pushing kind of on all
aspects of life at the moment. Why, if you're a military commander, would you want to use one of
these systems? Bleddyn used a great term in the article, talking about the primordial soup
from whence this horrible idea kind of emerged. And one of the things that we like to
complain about over a pint is the so-called revolutionary, revolution in military affairs,
which kind of began in the late 1970s and fed into all of the famous Reagan defense programs
that ended up being sort of the symbols of American military power
eventually used to invade Iraq and Afghanistan.
And the idea there was to crank up the quality
and dominate the opponent on a kind of pound-for-pound basis
to such a degree that it just wasn't even anything really resembling a fair fight. And it relied on seeing better than the enemy, being able to respond
fast to the enemy, and very precisely destroying the targets, as opposed to the form of war
that had preceded it, which was kind of this more area-orientated approach where you're probably
firing a lot of artillery shells to deny an area. You don't really know where the enemy is in that
area, but it doesn't matter because you're more trying to sort of pin them down than kill them.
And in a way, AI for the battlefield is just the next step in this dream.
And as Bleddyn will tell you, the reason that they keep having this dream is that it's never realized.
There's always some snag that comes out of the woodwork and ruins their dreams.
You can read the 1990s and early 2000s literature on net-centric warfare, information warfare, you know, the RMA.
The same problems are what, you know, AI or LLMs are running up against now, because you never have perfect information.
You'll never have machines that can really do what you need humans to do in, you know, anything that needs creativity and judgment.
So there's also the wider political context then where, you know, since the Cold War in the West, you know, you've had a downsizing of military forces.
and NATO countries, including the United States, their way of compensating for the reduction in the mass of troops and personnel,
was to have, you know, more capable individual platforms and systems and network systems that can be smaller and more dispersed,
but far more coordinated and effective in what they do.
So you don't need, you know, to destroy an entire block of a city to hit one government building. You just have a couple of Tomahawks and you've had your best shot at killing Saddam for the day. You don't need to waste so much resources and kill many other innocent people
in the process. So that trend has always been there. Technology has the answer to downsizing.
We'll see if that changes, given the situation now in Europe and how Europe is now actually
spending a lot more on defence. Personnel recruitment is still going to be a big issue even if we
throw a lot of money at things. And even the United States has had trouble with recruitment, perhaps even more so given the policies coming through with Hegseth as SecDef now.
So, you know, and what's really weird as well with Hegseth is that he's obsessed with the word lethality, and yet you look at the way the US military and NATO forces have gone since the end of the Cold War, the way it's been expressed has been increased lethality, which is probably where he's gotten that from really, because that literature keeps talking about more lethality per platform or missile or trooper or something like that.
So that's been the direction.
So I just don't really know what he means by saying we're not lethal enough.
Like, hang on, pound for pound.
You've never been so lethal as a military force.
But there's no depth.
It feels like NATO militaries are a bit of a glass tank, you know, huge firepower capability.
but if you know, you just hit it in the right way
or enough in the right way, the whole thing will shatter
because there's no depth to it,
especially thinking of the British as well.
And, well, maybe just ammunition as well.
So those are the wider political and social forces
and the peace dividend of the Cold War lent themselves to that.
There's also, and we're seeing this now in managerial
and corporate culture, which also, you know,
infects political leadership.
Technological solutions for everything
or what's your answer for getting rid of staff?
Oh, there'll be some software package
we can get that does things now.
Automated checkouts.
Oh, you don't need as many people now.
So the constant drive for automation
that's been in an industrial world,
again, now trying to come through in the managerial world.
So, oh, we can get rid of these people
and there are fewer people to manage.
I just need to deal with one company
that gives me that software now as an outsourced capability.
So that is coming through in government.
And here with the Keir Starmer government,
the Labour government in the UK,
there's a real obsession in getting efficiency gains in government
by deploying AI everywhere.
What that really means, I don't know.
But there is that techno-fetishism around LLMs in particular now,
which I fear is infesting every sector of public life.
Matthew, you asked what would make this sort of attractive to a military officer.
Anecdotally, obviously it's a self-selecting group,
but staff officers from the kind of captain up to the lieutenant colonel kind of range
have come up to me at conferences and things
and said how much they appreciated the argument we made in the article
and that they just, they hoped that this kind of line of reasoning would actually land
with the defense bureaucrats and some of the senior generals
who don't appreciate the kind of the craft
and the skill that's involved in the staff officer kind of profession
and that it can't just be automated away as a sort of annoying expense.
All right, I've got so many thoughts, so many, like, lines I want to chase.
One would be, it seems like this is something,
and correct me if I'm wrong here,
that that kind of began in the modern context
during World War II with bombing campaigns,
that there was this idea that if we dropped enough bombs,
which was a pretty new form of warfare,
and we just knocked out the right places,
we would maybe wouldn't,
the wars would be over faster,
we maybe wouldn't have to occupy as much territory.
And it seemed the answer when that didn't work out was just to drop more bombs and bigger bombs. And here we are.
And now, now we're talking about kind of doing a similar thing, but with artificial intelligence,
I think. And on like a large cultural level, like set that aside. Just thinking about that.
is there also an aspect of this?
And I think that this was also part of bombing campaigns too.
And this recurred when America started using drone forces,
that there is a remove, there's now a greater distance from the violence.
The person who is doing the violence, who's dropping the bomb,
who is okaying the target through the AI system,
who is firing the Hellfire missile from their storage container in Nevada,
is removed from the, like, from actually seeing the physical violence
and experiencing kind of the direct consequences of it.
Do you think that that makes some of this attractive, too?
Like, putting machines a little bit more in charge alleviates us of the responsibility
of having conducted the war?
I think we'd both agree with you.
We'd also point to a longer tradition, even than that.
And that not all kinds of warfare, even in the 20th century,
involve getting up close and personal,
including from the air, you know,
the artilleryman's work is done at a distance.
It has, you know, been out of the line of sight of the target
potentially for quite a long time.
And other kinds of, you know,
the fire and forget missile also seems to be a sort of relatively recent thing,
but you also have landmines and booby traps and things like that,
where the act of violence is sort of split into two from the initial setting of the trap.
And this is a kind of automation as well, right?
This is the other thing about automation not being new.
You're ceding the control of that violence to a technical device.
It's not smart, but in a way it still makes a decision.
You know, am I a landmine? Am I being stepped on or am I not being stepped on?
And those are just my two sort of decision states.
But it's grown over time, right?
The drone is qualitatively different.
An experience of war to the experience of the World War I
artillerymen.
And I think possibly, again, not for the actual operator,
but for the commanders, it looks more desirable.
Because for the operators, so many of them have terrible, you know,
PTSD issues after doing this job,
because they are physically removed,
but often they have extremely high fidelity views of the violence that they're inflicting.
And then this very strange lived experience where they then walk out of, you know,
the box in Nevada and they drive home to their kids,
which is super strange.
And the AI sort of advocates are pushing war further in that direction if they get their way.
And if the tech actually works, of course.
Yeah.
I mean, you know, you can look at, you know, automation, you know, World War II with the V-1, where, you know, they just sent it off, it flies for a predetermined amount of time till the fuel runs out and then drops somewhere over London, southeast England, you know, similarly with the V-2s.
But, you know, aerial bombers, you know, the bomber crew would just see the city, but they wouldn't see the individual buildings they destroyed unless they were particularly low-altitude bombers.
So there's always that kind of disconnect really.
Of course, it's greater now when you're seeing it on a screen in Nevada
when you're bombing somewhere on the other side of the world.
But then weirdly you are more connected because you have that, you know,
first person view of what you're doing in many ways.
So, yeah, so that's a paradox about how it's operated rather than it being, I don't know,
remote control, I guess, because there are different kinds of remote control or automation.
and automation versus AI is something I struggle with still in terms of, you know,
I tend to think of LLMs as software-defined automation, because it's not like a V-1, which is totally mechanical, where you just have mechanical parameters on a machine designed to work in a certain way, whereas with the narrow AIs we're talking about in this article,
it's, well, you can create anything to govern any kind of system that's hooked up to the right software.
and if it understands the software and the inputs,
you have software-defined automation.
So like software-defined autopiloting,
software-defined target acquisition,
because it's just doing the same thing over and over again
to set parameters,
and in many cases it seems without much human intervention.
So automation has that much longer trend, really,
if you interpret it that way.
Well, and also you hit on another problem there.
oftentimes, like I mean, I report on AI all the time, and even I struggle to understand, like, exactly how a generative adversarial network works and, like, you know, what exactly is going on underneath the hood of any of these LLMs.
And I would, you know, if you believe even half of the reporting on them, a lot of the people that are working on the systems also struggle to understand exactly how they work.
and, like, what the decision-making process is under the hood, so much to the point where, like, it makes some people who work on them very odd. And there is, like, beyond just... anthro... I always can't say the word... attributing human characteristics to the machine intelligences that are not there,
maybe even ascribing, like, religious stuff to these machine intelligences that are not there.
And, like, I know that sounds wackadoo, but, you know, one of the lead scientists who was on the board for, uh, OpenAI before he got ousted, like, believed that.
Um, and was conducting religious ceremonies in the office.
Um, so does the fact that, like,
Like, we don't know how this stuff works.
Also, kind of, I kind of went on a rant, and I'm sorry.
I had a point at the beginning of it, and I think I maybe have lost it.
So let me ask this.
Well, I think on personification, yeah, you raise a real danger, and we've seen it with ChatGPT or Gemini,
and you know all the other ones, people are having conversations with these things,
because LLMs in particular, as we know them, as they are dominating the market now and public life and private lives as well, disturbingly, are just designed to give you plausible answers and responses, not necessarily the right ones or the best ones or what you need.
So it's plausibility,
and it's meant to convince us.
They're biased towards making you happy too.
They're biased towards giving you an answer at all.
Yeah.
And so then we start personifying it and treating it as an entity with
sort of, well,
they don't even have like an animal's
intelligence. So like, you know,
you know, there's a cat. The cat
has a personality. The cat
makes decisions. You have a relationship with a cat.
You know,
you cuddle them, they have their own quirks
and they do their own thing and
you know, and you get a meaningful
relationship out of a pet
of any kind. With an LLM,
it seems people are starting to talk to them
as real people, but it's just putting
mathematically plausible things
together for you, and, as we know, increasingly getting things wrong. And this is another problem, and this is the problem with automation that is not restricted to LLMs, in that once you know how the automation works, or is supposed to work, you then find ways of breaking that automation or working around it. So once the RAF started learning how they could properly intercept and detect
the V1s, their effectiveness really went down. And it was difficult to alter the parameters of the
automation of the V1s. What we're seeing with LLMs, similarly, is people, now that they have the code or they can see how it's being used, using it to do things it was never designed to do, or to do things against the interests of others. So already we see people using LLMs to reveal corporate secrets of the people who built them, because they didn't think to tell the LLMs, don't tell anyone how you really work or what we told you to do, because it might give our ulterior motives away.
and we've seen that with some of these LLM programs
which is really quite interesting
And, you know, fraud is using these systems as well, and we see it in the voice fakes, the deepfakes, as well, and you can start just manipulating the code.
also then there's the unintentional problems
from, well, I don't know how many people are calling it this,
but it's mad cow disease for LLMs.
So it's eating its own stuff
and therefore develops more bullshit tendencies.
I mean, generous people call it AI hallucinations.
It's bullshit.
Like, it's, you know, the academic definition of bullshit
by a Hoffman, if I remember correctly,
it just puts out things in the moment that sound plausible, that answer the question, with no care as to whether it's true or not.
And the more the AI is trained on AI generated data,
the worse the outcomes of the AI get.
And that's been the long-established problem.
And as our internet and our media landscape,
our words and literature now has been poisoned by LLMs,
LLMs are reading it and they're getting worse.
And I just keep seeing more stuff come out all the time that just confirms that, because we were writing this three years ago, and we feel quite vindicated by what's happened since.
I think maybe you've been too nice to us so far, Matthew.
So I'll start criticising.
Please.
Ourselves.
And it's something that has been raised to me actually by military practitioners about the article, is that if you read it, you might think that Bleddyn and I think that generals never make bad decisions because they're humans, and they use this magnificent military genius that Clausewitz talks about.
And when Bleddyn's talking about bullshit,
I politically came of age during the war on terror.
It was my political awakening to try and understand what on earth was going on
and why this was supposed to be a good idea.
And, yeah, there was a lot of bullshit in all sorts of places,
coming from generals or retired generals, or the bosses of generals.
It's just, I think what we're trying to say is that if you cede this to some kind of LLM,
then what you will get is just constant bullshit.
And any hope that we have of actually being able to produce some kind of peace, justice,
security is just going to be totally hobbled by them.
And a key claim in not all of the sort of pro-military AI literature,
but in some, and a lot in the media,
is that they sort of move towards the argument
that these AI generals or LLMs,
if not now, one day will be as good as if not better than people.
That's the claim we don't like.
Because, yeah, people are inferring that we think humans are flawless.
No, we never say that.
Humans are deeply flawed.
But the problem is, is if the LLMs are never going to be as good as people,
then why have them?
Unless they provide very specific useful services in certain areas, which we say they do.
It's just that they're not going to supplant or improve on what humans can do.
As flawed as we are, as many mistakes as we make, we're going to make even more mistakes or have even more problems
if we believe that these machines can do a business of command better than humans or as good as,
which there is no evidence or even logic that supports that.
Give me some of the narrow uses where AI is good in war.
Like, what are the things that it would be used for and be good at?
Can we just clarify that by good?
Effective.
Effective, not morally upright.
Noble, pure, beautiful.
Yeah.
Yeah, so, yeah, Bledon and I are not trying to say that AI doesn't work ever.
it's just it's a matter of application right so yeah this is an important area to to consider
image recognition in and of itself works very well and given the right kind of institutional context
you know if you've got a battlefield and you want to find Russian tanks to then shoot
anti-tank guided missiles at later on, then yeah, your induction-based inference, statistical analysis of reams and reams of satellite photos you have, is going to be helpful for that.
And they're going to do it faster and probably more effectively than a human being.
But you want to check the results so that it doesn't, you know, due to some quirk in its training that nobody's seen up until then,
it actually also thinks that school buses look like tanks.
And the only way you find that out is after you've blown up a load of school buses.
And on the clerical work as well, we're also hearing from serving officers that there's a lot of paperwork to do in these kinds of jobs.
And AI, in terms of its ability to sort of draft emails quickly or, you know, summarize things if used responsibly.
And again, being checked carefully could be something useful.
With that latter one, to me that sounds like there should just be administrative reforms.
You shouldn't need to have something summarizing doctrinal documents.
It should have been written well enough in the first place that you can rapidly draw on it.
Given that supposedly you're supposed to be using this stuff while potentially being shot at.
So some of this is a fix for problems that could be fixed, as Bleddyn was hinting at,
rather than a technological fix, actually a political fix, a social fix.
and we just, we don't have the appetite collectively to do that
and for some reason find the technology easier.
But I think that the target recognition,
as opposed to target decision,
is actually somewhere that, you know,
we do concede is going to be helpful to military.
Yeah, and so it's where you need perhaps just far more complicated,
automated systems that can be safely automated to be effective.
So I mentioned, you know, air defence systems.
You also do missile defence where you just have to have something that can make much more sense of complicated sensor data, including optical and radar imagery.
So that's where you need a computing power and where you can attach things to clear patterns or not.
So, you know, we're now in an era where there's far more satellite imagery being generated than any single human-powered spy agency or military intelligence department can go through.
So the only way you're going to make use of these huge constellations of hundreds of satellites
taking pictures now is to have better search functions.
So, you know, in many ways, people already are using these LLMs the way they used to
use Google.
We, like, we grew up with Google search, you know, in the 90s and 2000s.
That's how people are now using LLMs. I'm not convinced it's better than the old
Google, but they're using it
as a more sophisticated search system
for the amount of data that's out there.
So the trouble
is that with a lot of these technologies that
come about in the military sphere,
there's a lot of hype,
transformative claims are made
and the bubble bursts,
they go away, but then forms of
these technologies will carry on
and will find uses, but
they're not going to be in
the show stopping stuff.
They're not going to undermine
existing force structures completely.
Again, it's that evolutionary incremental stuff.
So, you know, I mentioned information warfare
or the revolution in military affairs earlier.
Those technologies that were the hype back then,
they're still there and they have had impacts,
but they haven't made the battlefield transparent.
But...
If we bring it back to your friend Palmer Luckey, Matthew,
you know, he talks almost exclusively
in these terms of, you know,
victory through one massive glorious attack.
You know, we'll AI-automate loads of undersea drones and we'll fire off, you know,
two and a half thousand all at once, and then that will win the war.
And that's not going to happen.
But what will happen is the US Navy will end up with a load of, you know, fairly mediocre
autonomous underwater drones that Palmer Luckey made, you know.
So this is the kind of cycle that Bleddyn and I observe in our day jobs.
Um, he's a great salesman, Mr. Palmer Luckey.
Uh, and I think that that is his primary motivation, as you said.
Uh, he's definitely trying, I always feel like he's trying to sell me something when I see him talk.
Um, so it, again, you can't get around the problem in war of needing to take and hold the territory.
to ensure the victory,
no matter how many autonomous drones
you fire at something.
Another use case,
kind of on this pattern recognition
in like missile defense systems,
two things that I've heard
kind of floated from people
who are smarter than me.
One, I'm talking to some people in Estonia
who are looking at building
what they were calling,
similar to what Palmer Luckey's helped
construct on the U.S. southern border,
a digital wall
is the way they describe it,
along Estonia's border with Russia.
The point being that
preferably you would have a human being on a physical wall
in the old days, like watching every inch
or every square inch of the place.
And if the Russians come, you shout it down the wire.
Instead, you put up cameras that are hooked up to some sort of AI system
that has pattern recognition.
And if it sees something it thinks is a Russian tank,
it alerts a human being down the chain.
Similarly, I've heard about this being, this has been floated as being part of deterrence systems.
As you said, missile defense.
Similarly, cameras watching the skies, watching a known area of ICBM launching platforms in other countries.
If something, you know, shifts in the right way, a human being is alerted.
So what you described there is a militarized border.
It'll use a lot of things that are old-fashioned.
So the Baltic states and Poland are withdrawing from the Ottawa Treaty.
They're going to be bringing landmines back.
So they'll be part of any such defense.
So, you know, that digital wall thing just sounds like a marketing pitch where you just want to add some new things onto well-established ideas of what a militarized border is,
and it's going to have various kinds of early
warning systems and comm systems
and like any
if it's a good system it'll be difficult
for the Russians to find
weaknesses or you know
single point of failure in such a system
they might want to look at what the
South Koreans are doing with the North Koreans
where they have automated gun emplacements along their borders
so yeah so again
it's like as part of a system
and if some automation of camera feeds can help
so that it triggers human analysts
to look at this camera,
say, hey, I found something that looks like a tank moving
or some special forces vehicle or something.
Great, it can work as an early warning system.
But again, it's like, how was it done?
Do the people who deal with the automated system
know the limits of that automated system
and know all its quirks and where it might get fooled and things like that.
And do the Russians know the limits?
Because, you know, there are some wonderful videos of people trying to fool image recognition bots, you know, hiding in cardboard boxes.
Doing a Wile E. Coyote.
Or putting a label on their head.
Have you seen the Wile E. Coyote one where, like, they put, it's like when Wile E. Coyote tried to kill the Road Runner by having, like, a painted-on road,
but it's actually a cliff edge?
And the self-driving car's computer thinks there's a road,
but there isn't because somebody just painted it on.
But it's either like a solid wall or something or a cliff edge.
So it's like, you know, it's a cartoon setup.
But it's like, yeah, these pattern recognition systems can be fooled
if people understand how they work
and people don't pay attention to the action-reaction of these systems.
One of the bits of wisdom you get from reading strategic theory,
properly, we should add,
is that
nothing in war
is ever truly unprecedented,
but nor is it ever really truly
a repetition.
And it's the kind of thing,
if you have this sort of very sensitive,
super digital data fusion border,
the thing that you have to worry about
is not the way that the Russians
attacked or scared you the last 25 times.
It's the one unique thing
that nobody's ever seen
in military history before. And they'll never
need to do it again. So, you know, training your AI on that then doesn't help you next time
because they're going to come up with some other scheme. And this is just the story of war.
And, you know, if you have that fancy digital border, but you actually don't have enough
people there to shoot guns or fire off those javelin anti-tank missiles, you're just going to
have a really good view of the Russians coming at you in all sorts of real-time high resolution,
but you haven't got anyone to do anything about it.
or you've run out of ammunition.
So with that digital border stuff,
I would hope that they're not neglecting
the material need of borders and walls.
So, you know,
so this would not go,
you know, in isolation from a landmine stuff.
But it's like you need those concrete borders.
You need units ready to respond eventually.
Because to me,
it just sounds like the automated stuff
will just be better as an early warning system.
Not much else.
Bleddyn mentioned that Clausewitz makes this metaphor of war is like a game of cards because he's saying it's got this element of chance.
But the guy was a fan of dialectics, so you always have to look for the other side of the dialectic in Clausewitz's writing.
And the flip side of that is that war is nothing but a duel on a larger scale.
And he goes on to talk about it's more like a wrestling match or a boxing match.
So you always have these two sides of war: the clean, the rational, the somewhat predictable.
And then you also have pure primordial hatred, blood and guts, you know, basically, you know,
that we could have the most technologically advanced war in human history,
and you can guarantee that at some point two humans are going to kill each other with their
bare hands or rocks or something like this.
And it's that just the deep depths of human violence can't be abolished simply by
building a fancier and fancier machine.
We're ending a little bit early, but I think that that's probably the capstone quote.
I think that's kind of the note to go out on if that works for everybody else.
Thank you both so much for coming on to Angry Planet and walking us through this.
Where can people find your work?
So people can find it in the Journal of Strategic Studies.
And it is open access.
So you can download the PDF with no paywalls.
And the research for that was in part funded by the European Research Council Third Nuclear Age Project,
which Cameron was a postdoc researcher on as well.
So they can find and download it there.
I believe the article is now on the Journal of Strategic Studies' most read all-time list,
which we're very thrilled about.
So yeah, and yeah, thanks very much for having us.
That's all for this episode of Angry Planet. As always, Angry Planet is me, Matthew Gault, Jason Fields, and Kevin Knodell. If you like the show, go to angryplanetpod.com; we've got another project launching there soon. I think it's a good time to sign up. Give us a
follow. As always, we will be back again soon with another conversation about conflict on
an Angry Planet. Stay safe until then.
