Angry Planet - How Israel Is Using Microsoft AI to Pick Targets in Gaza
Episode Date: March 10, 2025

Listen to this episode commercial-free at https://angryplanetpod.com

The Israeli military is using AI products from Microsoft to conduct its war in Gaza. Off-the-shelf AI products powered by the tech company's Azure cloud computing system and OpenAI are helping the IDF sort through data, translate Arabic, and even pick targets. But AI translations aren't perfect, and these systems often make mistakes. What happens when the consequences are life and death?

Associated Press global investigative reporters Garance Burke and Michael Biesecker are part of a team that broke the story about Israel's use of Microsoft's commercial AI in a war. They're on the show today to help us sort through it all.

In this episode:
Defining "AI"
Off-the-shelf solutions for war
Microsoft Azure as war's translator and search engine
"The Israeli military is one of the leading militaries in the world adopting the use of AI to assist in its war efforts. They've been doing this for years."
AI is picking targets
Students to militants
AI's mistranslations
Blaming AI for human sins
Matthew's vision of a hellish automated future
Employees push back
The money is just too good

Further reading:
How US tech giants supplied Israel with AI models, raising questions about tech's role in warfare
As Israel uses US-made AI models in war, concerns arise about tech's role in who lives and who dies
Microsoft workers protest sale of AI and cloud services to Israeli military

Support this show: http://supporter.acast.com/warcollege
Transcript
Love this podcast. Support this show through the ACAST supporter feature. It's up to you how much you give, and there's no regular commitment. Just click the link in the show description to support now.
Hello and welcome to another conversation about conflict on an Angry Planet. I am Matthew Gault. Jason Fields is off doing God knows what on the streets of New York City today. And we are jumping right into it. I'm here with Garance Burke and Michael Biesecker of the Associated Press. They've done some incredible reporting on how Israel is using AI in its war on Gaza and against Hamas.
Thank you both so much for joining me today.
Thank you for having us.
So I want to be kind of specific up at the top.
I think when we talk about AI, there's a tendency to kind of lump a whole bunch of stuff together under that banner.
So when we say AI here, what exactly do we mean?
Sure.
So, I mean, people may know that for years, militaries have been hiring companies to build custom autonomous weapons, drones, etc. But what we reported on is this leading
instance in which commercial AI models, so think the kinds of models made in Silicon Valley that can
do everything from translate or transcribe what people say to find patterns in big troves of data
are being used by the Israeli military. So it's not something purpose-built for the military. It's something kind of commercial, what we would say is, like, off the shelf
that's been picked up by them. That's right. Yeah. And I think that's one of the things that really
interested us here is talking with some of the Israeli military officials who we were able to
reach about their use of these general purpose AI models, which hasn't really happened much
in an active war before now. So which commercial models are being used?
Well, the data that we obtained showing usage after October 7 shows that they've been using an array of AI models and bots
that are available through Microsoft's Azure platform,
which is a cloud computing platform that comes with an AI toolbox.
So I think of both cloud storage as being an element of this, as well as processing power, allowing a customer to basically run complex tasks on Microsoft servers, and then a toolbox of AI products.
And some of those tools are for indexing video, for example, or searching through vast
amounts of documents and pulling out specific threads of information.
You know, as Garance mentioned, it's for translating text, transcribing audio,
translating between languages.
And there's also OpenAI, which through Azure provides products to Microsoft customers as part of the relationship between those two companies.
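To make the off-the-shelf point concrete, here is a minimal sketch of what calling one of these general-purpose Azure AI tools can look like, using Microsoft's publicly documented Translator REST API. The key and region values are hypothetical placeholders; this illustrates ordinary commercial API usage, not a reconstruction of the Israeli military's actual setup.

```python
# A minimal sketch of calling a general-purpose Azure AI service: the publicly
# documented Translator REST API (v3.0). Purely illustrative of "off-the-shelf" use;
# the subscription key and region below are hypothetical placeholders.
import requests

AZURE_TRANSLATOR_KEY = "<your-subscription-key>"  # hypothetical placeholder
AZURE_TRANSLATOR_REGION = "westeurope"            # hypothetical placeholder

def translate_arabic_to_english(text: str) -> str:
    """Send one string of Arabic text to the Azure Translator service and return English."""
    response = requests.post(
        "https://api.cognitive.microsofttranslator.com/translate",
        params={"api-version": "3.0", "from": "ar", "to": "en"},
        headers={
            "Ocp-Apim-Subscription-Key": AZURE_TRANSLATOR_KEY,
            "Ocp-Apim-Subscription-Region": AZURE_TRANSLATOR_REGION,
            "Content-Type": "application/json",
        },
        json=[{"Text": text}],
        timeout=30,
    )
    response.raise_for_status()
    # The API returns one result per input string, each with a list of translations.
    return response.json()[0]["translations"][0]["text"]
```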
So what exactly, I know you kind of went into it there, but can you kind of drill down a little bit more on what exactly this looks like?
Is there like a front end that someone pulls up and they say like, will you translate this document for me or do we have any idea?
Well, I mean, there's not a lot of transparency here because you're talking about classified uses in a military
setting, but as we understand it, based on the sources that we were able to interview,
there are two target rooms, one dealing with the Northern Theater of the War and one dealing
with Gaza, which is the Southern Theater of the War, the Northern Theater being Lebanon.
And in each of those target rooms, there are computer terminals; they are Microsoft-based machines.
Through those, there's an intranet system where, you know, through a web browser, they can
access Azure products. One intelligence officer who works in these rooms told us that he would use
Azure, for example, to send a query through intelligence documents to come back with specific sorts of information. We know based on the usage data that we're seeing that translating and transcribing is a big usage category, the majority of the Israeli military's usage. And, you know, we can infer some from that. We know it's being used to process intelligence and to aid in intelligence gathering that
is then used to help pick target packages and strike specific targets in Gaza and Lebanon. So if you're
getting a lot of signals intelligence, for example, you're intercepting telephone calls, you are
intercepting text messages, you have databases of where people live in a specific country, or maybe
you're tracking specific cell phones, you have human intelligence. All of that data goes
into a vast pool that then becomes very difficult for an individual to try to search through
and get specific strands of information about an individual or to spot a trend. And that's what
these AIs are helping the Israelis do. They developed their own internal AIs, one called Gospel, another called Lavender, in 2021 to help select targets. And, you know, the Israeli military
has been one of the leading militaries in the world
in adopting the use of AI to assist in its war efforts.
They've been doing this for years.
As we mentioned earlier, what's new here
is that off-the-shelf commercial AIs
are also being incorporated into this process
of reviewing intelligence and translating intercepts
and then trying to ferret out
how that can be used to make individual targets
on the battlefield.
Yeah, it's very interesting because I remember, you know, in the early days of the war on terror, the establishment of this vast signals intelligence apparatus that America and other countries built.
The criticism was always, well, you could collect all this data, but you can't sort through it.
You know, you don't have enough human beings to go through all of this stuff and pick out the information you need.
Well, that seems to be a solved problem now.
Well, you know, one of the things that we really wanted to look at here is, does AI solve this problem that you're identifying?
I mean, I think that especially if, you know, as Israeli sources told our colleague Sam Mednick, they are using these off-the-shelf commercial models that were purpose-built for, you know, general tasks in society, not necessarily for helping to determine
who should be a target of a bomb. I think it's a real open question as to how well these models are
performing. Now, the Israeli military says that its analysts always use AI-enabled systems to help
identify targets, but then they independently look at those recommendations with high-ranking officers,
you know, actual human beings to make sure that they meet international law. So the Israeli military says
that even when AI plays a role, there's always several layers of humans in the loop.
That said, you know, there are questions about the relatively high civilian death toll in the wars in Gaza and Lebanon that I think are, you know, animating some of these discussions about how well AI models work in active warfare.
And, you know, there are inherent limitations in how AI works that can lead to some reliability issues.
AI can't think logically, for example. So, you know, here are some specific examples that Israeli intelligence officials related to us.
One of them was they had a vast spreadsheet included in their data that was a list of Gazan students, high school students, who had taken an exam.
The list was called finals.
But, you know, it was put in the system.
It was sorted through as intelligence, and suddenly those names that were on that list were being generated as suspected militants.
And it took a human being to spot that, no, this doesn't mean that they're a militant.
But to an artificial intelligence model, it was included information, a list of names.
It didn't necessarily think through, like, what it means that they took an exam on a specific date and they're high school students in Gaza. This was the information it had, so it included that in a list of potential militants.
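The "finals" spreadsheet story is essentially a garbage-in, garbage-out failure: a list of names ingested without context gets matched like any other signal. Here is an entirely hypothetical sketch of how naive name matching over pooled data produces exactly that kind of false positive; the data and logic are invented for illustration and are not a reconstruction of any real system.

```python
# An entirely hypothetical sketch of the garbage-in, garbage-out failure described above:
# a list of names ingested without context is matched like any other intelligence "signal,"
# so innocent people surface as hits. All names and data here are invented.
ingested_documents = [
    {"source": "finals_exam_list.xlsx", "names": ["Student A", "Student B"]},  # a school spreadsheet
    {"source": "intercepted_chat.txt",  "names": ["Person X", "Student B"]},   # an unrelated intercept
]

def naive_suspect_scores(documents):
    """Count how many ingested documents mention each name, ignoring what those documents are."""
    scores = {}
    for doc in documents:
        for name in doc["names"]:
            scores[name] = scores.get(name, 0) + 1
    return scores

# Because the pipeline never asks *why* a name appears, a high school student who shows up
# in an exam spreadsheet and a group chat scores the same as someone who appears in two
# genuinely suspicious intercepts. It takes a human to spot the difference.
print(naive_suspect_scores(ingested_documents))
# {'Student A': 1, 'Student B': 2, 'Person X': 1}
```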
Yeah, this is an excellent point, actually, and thank you for bringing it up.
As a tech reporter, I should know a little bit better, that these systems are kind of only as good as their inputs, and even then they tend to hallucinate a whole lot, right?
Can you kind of tell me what the vibe was, for lack of a better word, when you talked to Israelis?
Is this a thing that they liked and thought was good?
How worried are they about it?
I mean, I think one thing that stood out to us is just the way in which these models are being implemented is pretty untested.
It's pretty new.
The Israeli military has been using some of these systems since 2021, but since October 7th, really, their usage of AI has ramped up very significantly.
And so some of the younger soldiers who were working with these systems told our colleague that,
you know, sometimes they felt pressure to approve more targets more quickly. I think, as you
mentioned, there's also some concern about hallucinations with the models that translate
and transcribe from Arabic into, say, English or Hebrew.
So I think it's a series of real, you know, uncharted territories, right, that people are grappling with,
both in the Israeli military, obviously, you know, in Gaza and Lebanon, where these systems,
you know, have killed many civilians.
And I think generally amongst militaries around the world,
people are really looking at this example
and trying to sort out, you know,
what this means for the future of warfare.
Matthew, you spoke to the American experience
when we built a terror watch list based on, you know,
vast amounts of intelligence we were gathering in the post-9/11 world,
and there was a lot of difficulty having enough manpower,
for lack of a better term,
to look through all this intelligence.
So, you know, one thing that caused a lot of problems with the terror watch list is Arabic is a language with multiple dialects, multiple ways to spell specific names, and lots of words that sound alike, where one word sounds like another but can mean something completely different.
And if you aren't familiar with the dialect or the context of the speaker,
it is easy for a mistranslation to occur.
And one of the examples that one of the Israeli intelligence officers cited to us was that the word for payment in Arabic sounds exactly the same as the word for grip, as in the grip on a rocket-propelled grenade launcher.
And so a conversation in Arabic between two individuals could easily be misidentified as being about weapons instead of about a financial payment, if you didn't understand the context of the conversation.
And another issue, as Garance brought up, you know,
a lot of these soldiers are very young, relatively untrained, 21, 22 years old.
They felt that there was a certain amount of pressure to approve strikes.
By the Israeli military's own accounting, it used to take maybe 20 individuals working all day to approve one target in the 2020-2021 time frame.
And with the addition of the automation from these AI bots to help sift through the intelligence, while humans are still reviewing each target package, they were producing hundreds a day. They were carrying out four or five hundred strikes a day. And that is just an exponential growth in the number of strikes that they're carrying out. And experts we talked with raised concerns that there would be confirmation bias, that essentially in the rush to approve strikes, if the machine is saying it's a good target, then the human is more
likely to believe it's a good target and kind of set aside any doubts they might have.
Yeah, this is a question I have, actually, is there a sense, and this is a speculative question,
so I apologize in advance, but kind of is there a sense that you are deferring part of the
decision-making process here to a machine? And not only the thinking that like the machine is
better at it than I am, but also that the machine is taking a certain part of the moral or ethical consideration away from me. They are taking part of that burden away.
I'm not sure we have direct reporting on that. I mean, you know, the Israeli military affirmed again and
again that it is human beings approving each and every strike, that human beings review this
material. They would say that if there's machine translation involved, then an Arabic linguist
will then be reviewing those translations for accuracy.
When we spoke to people who actually work in the target rooms, they raised concerns that that might not always happen, just due to time constraints, that someone is going to listen back to the original audio and confirm that the translation is accurate.
You know, what we do know is that there's been a huge increase in the number of strikes under this system and that there's been also just a phenomenal amount of civilian casualties in Gaza.
70% of the buildings in Gaza are damaged to the point that they're not habitable right now,
according to various reviews of the battle damage over more than a year of war.
And, you know, indiscriminate bombing of civilian targets is supposed to be illegal in war.
And the Israelis would argue that this is not indiscriminate, that each and every target is chosen, selected, and reviewed by a human for every bomb dropped.
But at the same time, if the end result is the same, you have to ask whether, you know, it's really an improvement.
And I would just say also, you know, it's extremely difficult to identify exactly when AI systems enable errors because they're combined with so many other forms of intelligence, including human intel, right?
But together, they can lead to wrongful deaths.
And so we took a deeper look at the deaths of a family in
Lebanon in November 2023, a mother who was fleeing with her young daughters and her mother from
clashes between Israel and Hezbollah on the Lebanese border when their car was bombed.
And we did ascertain that AI, you know, has been used to help pinpoint all of the targets in the
past three years.
And in that particular case, AI likely pinpointed a residence and then other intel
could have placed people there.
And then at some point, you know, this car is struck by a bomb, right?
So maybe humans in the target room would have decided to strike.
But, you know, there was confirmation that this was a mistake.
The Israeli military actually told us that they expressed sorrow for the outcome.
They didn't answer whether AI helped select the target exactly,
or whether it was wrong, but I think that it's, you know, it's important to take a look at how AI is being used in conjunction with human decision-making, because these models have such high consequences, you know, when they're used in active war.
Literally life and death. The life and death of innocent people.
Well, you know, there's the greater ethical question of are we crossing the line
when machines are starting to decide who lives and who dies. Now, the Israeli military
would say machines aren't deciding. Machines are making suggestions and then humans decide. But it's a very
short jump, you know, to machines making those decisions, especially as the pace of war increases.
You cited the Ukraine war as an example. We're seeing the rise of drone warfare there, suicide drones,
attacking individual vehicles or individual soldiers even. A soldier in a trench
could be targeted by a drone.
And we have technologies right now to make drones very autonomous.
We have facial recognition software.
We have various forms of AI that can be incorporated into images being selected from the battlefield.
And it's, you know, a combination of existing technologies right now could make a very
autonomous weapon, like a suicide drone.
That's essentially a flying killer robot that could recognize a specific uniform
as opposed to another uniform. You know, in Ukraine, for example, soldiers tend to wear
armbands indicating which side they're on because their uniforms are so similar.
But it's not very difficult to imagine with current technology that you could build a suicide
drone that would attack one side's soldiers and not the other's.
And we know that Ukraine is using similar systems.
Like we said earlier in the conversation, it is different because, as far as we know, as far as what's been reported out in places like Time and other places, Ukraine is using purpose-built systems.
That's Palantir going in there and saying, we can help you do war with this specifically.
It's not an off-the-shelf solution like what Microsoft is providing.
Oh, I have this fear, this kind of picture in my head of the future
where a lot of this stuff becomes automated
and also becomes on the back end
like propaganda tools
because we see in Ukraine
there are lots of websites on the internet
and lots of social media platforms that are filled with
FPV camera footage set to music
mixed with memes
coming from both sides of the conflict
I hate to imagine this future where automated drones are picking targets, killing someone,
and then automatically uploading the footage onto Telegram over the Benny Hill theme.
And I just fear that that is where we're going.
Sorry, that's a tangent.
Can you talk about the amount of data that Israel is processing here?
Do we have any idea like what that looks like, how much Microsoft is handling?
Well, I mean, part of the issue here is the lack of transparency.
You know, the Silicon Valley companies have not been very transparent about what they're doing with the Israeli military,
what they're selling to the Israeli military or other militaries, including the U.S. military.
What we do know from the data that we were able to get a glimpse of is that from the week before the October 7th attack
and then looking at weekly usage data for the weeks after,
we were able to see that by last March,
the use of Microsoft and OpenAI artificial intelligence models
by the Israeli military had grown 200 times
from what it was the week before October 7th,
and that's by the end of March when it was at its highest.
Now, there were a lot of peaks as specific tasks
appear to have been run one week and maybe not as many the next week.
But overall, the usage increased
dramatically over the opening months of the war.
And we were able to see that, you know, the amount of data that the Israeli military was storing through Azure's cloud storage service
doubled between October 7th and July 2024.
So it grew to 13.6 petabytes.
That's roughly 350 times the digital memory you'd need to store every book in the Library of Congress.
It's just an absolute huge amount of data that the Israelis were storing on Microsoft servers.
We could also see that the amount of compute that they were using,
which is basically using Microsoft systems to run complex computer tasks on Microsoft servers,
also rose by more than two-thirds in the first two months of the war alone.
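As a quick back-of-the-envelope check of those storage figures, the only inputs below are the two numbers from the reporting (13.6 petabytes and the roughly 350-times comparison); the implied size of a digitized Library of Congress book collection is derived from them, not a reported figure.

```python
# Back-of-the-envelope arithmetic on the storage figures cited above. The only inputs are
# the two reported numbers (13.6 petabytes, "roughly 350 times" the Library of Congress);
# the implied per-Library-of-Congress size is derived, not reported.
PETABYTE_IN_TERABYTES = 1_000  # decimal (SI) units

storage_pb = 13.6
storage_tb = storage_pb * PETABYTE_IN_TERABYTES
print(f"13.6 PB is {storage_tb:,.0f} TB")  # 13,600 TB

loc_multiple = 350
implied_loc_tb = storage_tb / loc_multiple
print(f"Implied Library of Congress book collection: roughly {implied_loc_tb:.0f} TB")  # ~39 TB
```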
Are there any other companies involved here, or is it just Microsoft?
No, I mean, we should be clear.
Microsoft and OpenAI are really,
you know, part of a legion of U.S. tech firms that have supported Israel's wars in recent years.
You know, you might recall Google and Amazon provide some cloud computing and AI services to the Israeli military under Project Nimbus, which is a $1.2 billion contract that was signed in 2021.
The Israeli military has also used Cisco and Dell server farms or data centers.
Red Hat, which is a subsidiary of IBM, has also provided
cloud computing tech. And Palantir too, which is also a Microsoft partner in U.S. defense contracts,
has a strategic partnership to provide AI systems to help Israel's war efforts. So I think it's a
situation in which this is really a growing market, and we're seeing that in the United States
with the Department of Defense as well, which, you know, has recently signed a new deal with
Scale AI. So I think this is really a space to watch where lots of tech companies and,
you know, now their officials are really seeking to gain more government contracts around
the world in the current context. Do we have any idea how much Microsoft is making off of these
contracts, or like what the nature of them is? Like what are the contracts to provide and what is
Israel paying? One of the documents that we received was a summary of a 2021 contract. It was a three-year
contract between Microsoft and Israel's Ministry of Defense. And the base amount of that contract was
$133 million. Now, the way Azure works is it's a use fee. So essentially, the more of it you use,
the more of it you pay. But the base level of the contract, the guaranteed minimum, was $133 million for that
three-year period. So it's like a cell phone plan almost. You pay a minimum and get a certain
amount of usage, and then you pay as you go afterwards. But Microsoft has not said publicly
what its, you know, business with the Israeli military is worth. A document that we saw said that
the Israeli military is the second largest defense customer for Microsoft after the U.S. military.
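The contract structure described here, a guaranteed minimum with pay-as-you-go fees on top, is a simple metered-billing arrangement. Here is a minimal sketch of that structure, using only the reported $133 million base figure; the example usage amounts are hypothetical.

```python
# A minimal sketch of the billing structure described above: a guaranteed contract minimum
# plus metered, pay-as-you-go usage. The base amount is the reported figure from the 2021
# contract summary; the example usage totals below are entirely hypothetical.
GUARANTEED_MINIMUM_USD = 133_000_000  # reported three-year base amount

def amount_owed(metered_usage_usd: float) -> float:
    """Pay for what you use, but never less than the guaranteed minimum."""
    return max(GUARANTEED_MINIMUM_USD, metered_usage_usd)

# Hypothetical examples: heavy usage is billed as-is; light usage still pays the minimum.
print(amount_owed(180_000_000))  # 180000000
print(amount_owed(90_000_000))   # 133000000
```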
So for a long time, big tech companies at least paid lip service to the idea that their products were not going to be used for war,
and were certainly not going to be used against civilian populations.
That has changed a lot.
What do you think has shifted over the last few years that they have gone all in on this kind of thing?
Like, as you said, Microsoft is certainly not the only company, right?
Yeah, I mean, we reached out to Microsoft with a whole raft of detailed questions about the cloud and AI services that they provided to the Israeli military.
They declined to comment for this story.
They do say on their website that they're committed to, quote, champion the positive role of technology across the globe.
And, you know, they have a transparency report from last year where they say they want to reduce the risk of harm.
But they don't talk about their lucrative military contracts in that transparency report.
So, you know, some of what we really wanted to bring forward here is just the new reporting about how these models are being used by the Israeli military.
And we know that, you know, other militaries around the globe are contemplating this sort of thing as well.
And numerous tech companies have sort of quietly changed the terms of use
for their AI models over the last few years.
OpenAI told us in response to our reporting that they didn't have a partnership with Israel's military
and that its usage policies say its customers shouldn't use its products to make weapons or harm people.
But about a year ago, OpenAI also changed the terms of use for its products,
you know, how customers should use GPT and other models that OpenAI makes, to allow for, quote,
certain national security uses that align with our mission. So I think we're just seeing a real
change in the industry here where many AI companies are, you know, signaling a real openness
to collaborating with military agencies, which is a change from even a couple years ago.
And increasingly, it's causing some schisms between the people who run the tech companies
in Silicon Valley and their employees.
At Microsoft, for example, there have been some employee protests,
including one that we covered last week
during a town hall held by the CEO,
with employees wearing T-shirts questioning whether their code was killing children.
You know, there's a lot of money at stake here.
While Silicon Valley, in its public-facing statements,
including, like, on its website of AI principles,
which Microsoft has, for example,
talks about how they try to monitor the usage of their products through their entire
lifecycle to make sure they don't cause harm.
And if they're being used in the defense industry, obviously the intent is to cause harm.
The question is against what people or what targets.
I'm not as familiar with the Israeli national budget, but I know that last year,
the U.S. defense budget was more than $800 billion.
It's pretty much the largest single sector of discretionary spending.
in the federal budget. So, you know, while we can talk about ideals and using AI to create some
sort of utopian future for humanity, really there's billions and billions and billions of dollars
in defense contracts here. And I don't think these companies, any one of them, wants to be uncompetitive
or left out of fighting for that pool of money. You know, it's a few minutes early, but I think
that's kind of the perfect end note for the conversation. Thank you both so much for coming on to
Angry Planet and walking us through this. Where can people find your work? APNews.com, the website for
the Associated Press or download our mobile app. Our stories are available there if you search
either of our names or you search, you know, any of the search terms that are likely to be used in
this story, Microsoft or Israel. Garance and Michael, thank you so much for coming on. And I hope to
have you all on again. Thank you so much, Matthew. Thank you so much.
That's all for this week, Angry Planet listeners. As always, Angry Planet is me, Matthew Gault, Jason Fields, and Kevin O'Dow. It's created by myself and Jason Fields. Go to angryplanetpod.com if you would like to get an early and commercial-free version of the show. We will be back very soon with another conversation about conflict on an Angry Planet. Stay safe until then.
