Fresh Air - America's first AI-fueled war is unfolding. How'd we get here?
Episode Date: March 26, 2026. 'Project Maven' is the story of how the U.S. spent a decade building an AI warfare system that's now being used in the war in Iran. Author and Bloomberg journalist Katrina Manson reveals the people behind that mission, and their belief that AI could make war more precise and save lives. She spoke with Tonya Mosley about the ethics of this technology. A troubling research study found AI models placed in simulated nuclear crisis scenarios chose the nuclear option 95% of the time. Also, Carolina Miranda reviews a Los Angeles art installation that harkens to the old days of cinema.
Transcript
This is Fresh Air. I'm Tonya Mosley.
America's first AI-fueled war is unfolding right now.
Over the last three weeks, the U.S. and Israel have launched strikes against Iran,
hitting 1,000 targets in the first 24 hours alone,
nearly double the scale of the 2003 shock and awe campaign in Iraq.
The system helping to enable much of this is called the Maven Smart System,
and running inside of it is Claude, from the company Anthropic,
an AI model that millions of people interact with every single day.
On the very first day of the war, a U.S. Tomahawk missile struck a girls' elementary school in southern Iran,
killing more than 165 people, most of them schoolgirls.
A preliminary military investigation found the strike likely resulted from outdated intelligence.
And while the role of AI has not been confirmed, the Pentagon is still investigating whether Maven played any part.
At the center of this story is a little-known Marine colonel named Drew Cukor,
who spent decades fighting to bring AI to the battlefield and whose obsession has quietly changed
the future of war. My guest today has been reporting on Cukor for years and how we got here.
Katrina Manson is an award-winning Bloomberg reporter who covers cyber, emerging tech, and national
security. Her new book is Project Maven: A Marine Colonel, His Team,
and the Dawn of AI Warfare.
Katrina Manson, welcome to Fresh Air.
Thanks for having me.
You have been reporting on this Maven Smart System
for a couple of years now,
and now you are watching it used in a real-time war.
Take us a little bit into how the Maven Smart System
actually works,
and specifically what Claude's role is inside of it.
How do those two things work together?
If you imagine looking at something like Google Earth, you begin to have an idea of the display that US military operators will be looking at.
Some people have described this to me as Windows for War or an operating system for war.
It's essentially a digital map.
What makes that map special from the US military point of view is the number of intelligence feeds that are coming into it.
At one public event, it was made clear that it's more than 160 separate intelligence feeds.
Now, to crunch that data, they're using digital data analytics, but they are also using a few other tools that rely on AI.
There's computer vision to analyze some of the objects that are showing up on the maps that could be potential targets, also where US forces are.
And then Claude is doing something different, that is not computer vision.
That is an AI tool based on a large language model that can crunch data.
And what I've been told before is that Claude and LLMs inside Maven Smart System help speed up processes.
So there are all sorts of processes you need to go through to get sign-off on a target.
Everything short of sign-off, Claude can help with.
And it can also help plan courses of action, help pair weapons to targets.
It can assist everything that the U.S. military needs to do when it comes to making a decision short of actually making the decision.
On the very first day of the war, this missile struck the girls' school.
And there is some reporting about this case that the United States was likely responsible.
There is no indication yet that AI had a role to play here,
but the coordinates they used were more than a decade out of date.
What does that specific incident tell us about some of the lapses in data keeping
and potentially what could be a challenge for AI models
as they move through and are used more often in war?
Adherents of AI warfare regularly emphasised to me how important accountability is.
In every war, there are bad strikes.
Whether the US is prepared to investigate it and make public what has gone wrong in this case,
if the US is responsible, will be a real test for those claims of accountability.
AI is meant to make warfare more auditable.
Now, whether this is a case that the school was on a targeting list that predates AI and wasn't updated, and whether AI drew from that targeting list, all of that will be important to reveal.
Any system, particularly one that uses AI, will only ever be as good as the data that feeds it.
And if they are drawing on a database that is old, the AI, if it's set up that way, can't do anything about that.
And in numerous occasions, I've found examples of poor, weak, or flagrantly erroneous data that have fed systems.
If this is a US attack, it won't be the first one against a mistaken civilian target. In 1999, the
US struck the Chinese embassy in Belgrade. In that case, the CIA came out in public and said,
we had the map labeled wrong. And if a map is labeled wrong, which we don't yet know is the final
analysis of what happened here, but if that girls' school was in a database, no AI can beat that
unless you start using AI in other places.
If Google Maps, for example,
showed that it was a girls' school,
it would be quite simple to draw from that information potentially
if there were a way to analyze other location data
that might indicate there were children in the area.
And an additional factor will be,
where are the checks and balances on an old
database, and what role could AI play in checking work and in cross-referencing other data,
if indeed the girls' school is labeled on something as accessible as Google Maps?
I want to talk about some news this week that is coming to bear because of a court case.
The Pentagon blacklisted Anthropic for refusing to allow Claude to be used in autonomous weapons.
And within hours, OpenAI stepped in.
They then publicly announced the same exact restrictions.
Anthropic was punished for holding the line.
Is that an accurate way to describe this?
It's one way it's been described, but not in my reporting.
The OpenAI deal I've reported is slightly different.
It's not clear if it maintains exactly the same safeguards as Anthropic.
And Anthropic also, of course,
it's really important to frame, really leaned in to working on classified cloud for the
Pentagon. They were the first AI company to decide to offer AI on a classified platform. And from
my reporting, it is not possible for them to know every use case, every specific example of the
way their AI tool is used in classified operations. And the classified level is where America
fights its wars. So that decision to lean in to what the American military calls war fighting
was already a very significant decision. OpenAI had not taken that decision. It was not on
classified cloud. It now will be. It does seem to have allowed a more open acceptance of how its
tool could be used. But I think we'll have to see because it's a very politicized divide when you
have the president calling Anthropic left-wing nut jobs, calling them a radical left company,
even though they were working on classified cloud, clearly there's a technical debate, there's
a policy debate, but there is also a political flavor to this falling out.
Can you explain maybe in layman's terms how a classified cloud actually works?
Almost. If you imagine the cloud that we all use,
for, let's say, our email or for documents that are loaded up into the cloud,
the same can be done for military data and it can be accessed and shared.
Now, for the US military or for the intelligence services,
they don't want that information to get hacked.
And so there are a number of safeguards that are introduced
that can uphold a higher classification, so information that the US system deems
secret or top secret or compartmentalized information that only a few people can access,
even at that top secret level.
And each has its own network that can, in theory, secure that information so that it can't be
hacked, penetrated, ruined in some other way.
Of course, multiple times in history that's gone wrong.
All the time those systems are under strain from hackers, potentially also from insider threats.
So the U.S. is constantly trying to safeguard its information.
I was reading about some researchers at King's College London,
who recently put Claude and ChatGPT and Gemini into simulated nuclear crisis scenarios.
And 95% of the time, the AI reached for tactical nuclear weapons as a strategic option.
You have spent years inside of this world of these people who are building these systems for war.
And I just am curious, what do you think when you hear a finding like that?
Well, I also reach for the word terrifying.
Clearly, that kind of tool is one that you really need to put safeguards around.
So the U.S. has said it doesn't want to put AI into the nuclear controls.
So that's one step.
But there will be pressure on that system, and decision-making is already speeding up.
But I've certainly spoken to US military advisors who've brought me similar information.
They emphasize that AI can be escalatory, as you just described,
and also, almost more problematically, sycophantic.
There is a tendency to agree with the person asking the question.
So, shall I go to war?
Would it be a good idea to launch this missile?
If the question is asked in that way, assuming an intent or an action,
there is a tendency within AI also to buttress that opinion.
So as a check on opinion forming, you need to consider AI in a really careful way.
Now, the US military knows this.
This was a very advanced computer scientist telling me this.
And he had been an advisor to US Central Command,
the very command that is now using these chatbots.
What he and others have told me at the National Geospatial-Intelligence Agency is that they're aware of these risks,
and they are trying to add in checks and safeguards, what they call underneath the hood.
So if a commander said, shall I strike this now, is it a good idea?
Even if they were to prompt the chatbot in that way, the claim was made to me that the chatbot runs through a very fast series of checks.
It red teams the question, which is to say it pretends it is an attacker.
It checks for escalation bias.
It checks for a number of different things.
And by the time it spits out the answer, all of those potential problems have been factored in.
Now, I haven't seen that happen in real life.
And I've certainly come across a lot of people who are very frustrated by the answers that chatbots give, even within the military, sometimes fabricating attacks that haven't even happened.
And if you can imagine the US needs to respond to attacks, if they're responding to an attack that was fabricated, there is constantly this risk for escalation.
And in that sense, it's always about that critical thinking, that framing.
What question are they asking of AI?
Can I win quickly if I start a war with Iran?
Or what are the risks that this could proliferate, that US service members will be harmed, that civilians will get hit?
What are the chances of achieving regime change if I seek a quick war?
How many quick wars become medium-term wars and long wars?
Is there still that human hubris?
Wherever AI is put, it will only ever be as good as the data and the question.
And there may still be a gap.
And all of this testing is happening during an active war right now.
A lot of this testing that I've reported on happened before then, but even at the time in February
2024, I was able to report that the US did use this system to narrow down some of the 85
targets that the US military struck in Iraq and Syria. This was in reprisal for the death of
three US military personnel. And that is the first large-scale case I know of, up until today's
operations, of U.S. Central Command using this system to try and bring speed and scale to war.
It had been used before to assist others.
It had been used on a piecemeal scale for U.S. Special Operations Command,
but they tend to be much smaller.
Getting into the big army, the big military formations, this really is war at a very joined-up and connected scale involving every service.
And as we speak today, CENTCOM has hit more than 9,000 targets.
And that certainly has relied on the system, Maven Smart System.
Katrina, there's a man at the center of your book in this story that most people have never heard of.
A Marine colonel named Drew Cukor.
Tell us who he is and why this moment basically exists because of him.
Drew Cukor is this very absorbing
retired Marine who I met, who was chief of this project called Project Maven.
He wasn't the director.
He was the doer, the leader of this effort to bring AI to the way that America makes war.
And it started publicly, at least, as a very narrow effort.
The idea was to bring AI to rifling through drone footage, copious video that the US was taking in various countries
around the world as part of what many military operators called the GWOT, the global war
on terror. Now, Drew Cukor had had a long and frustrating career inside the Marine Corps as an
intelligence officer, and he was repeatedly fed up with the tools that he had to go into battle
and to support other military operators. He was in Afghanistan in October 2001 after 9/11,
lugging around a large computer, and he felt that he couldn't support the US military operators
that intelligence was meant to keep safe. And they simply weren't able to get frontline troops
the kind of information they needed as these very rudimentary, unsophisticated, improvised
explosive devices started to maim and kill American service members. And so there was a
constant frustration that the US could bring to bear enormous firepower, precision firepower,
but couldn't put it in the right place. And you see, as in all wars, what's known as
friendly fire, allied fire, so the US mistakenly harming their own, harming partners and allies,
and also harming and killing civilians by mistake. And these problems, he began to feel, could be
solved with better intelligence, if there was a way to reduce that loss. When he was in
Afghanistan, there were Marines dying the whole time. When he was in Iraq, there were hundreds
of Marines dying. And he simply felt not that AI so much was the solution, but better information.
And in the modern world, better information has come to mean AI. And in 2011, he worked
on an effort to bring technology from a company named Palantir Technologies to Afghanistan,
to start to track where these improvised explosive devices had been before.
So we're 10 years into this 20-year project that Cukor envisions.
He has always said that he feels the Department of War, which during the time you're talking about
was the Defense Department, needed to function more like a software company than a weapons factory.
But looking at Iran right now, the scale and the speed, is this the war he envisioned?
There's no doubt that this is an AI-infused war.
And the other element, beyond safety, accuracy, scope, and scale, is that people are claiming AI makes war more efficient.
Often what happens when things are more efficient is you can simply do more of it.
And to hit a thousand targets in the first day, now 9,000 targets and not yet have finished the war.
The Iranians are still continuing.
The Strait of Hormuz is closed.
There is a question about overconfidence, about how much you can rely on these systems.
And whether expanding the pace of war gets you there.
And this is a long-term debate.
If you go back to 1899, there was a Polish banker, Ivan Bloch, who brought out a paper called Is War Now Impossible?
Because he looked at these claims for mass-produced rifles that now the ways of killing were so industrialized at such scale,
no one would dare declare war against someone else.
And instead, he argued long before World War I started, that actually the mass production of weaponry
would lead to stalemate, human harm, long wars.
And it raises this idea of, is there ever a way to deliver palatable killing?
Our guest today is Bloomberg journalist Katrina Manson.
We'll be right back after a short break.
I'm Tonya Mosley, and this is Fresh Air.
This is Fresh Air. I'm Tonya Mosley, and my guest today is Bloomberg journalist Katrina Manson.
She's written a new book titled Project Maven: A Marine Colonel, His Team, and the Dawn of AI Warfare.
The book traces how Marine Colonel Drew Cukor became instrumental in the decade-long creation of America's AI warfare capabilities, which are now being used in the active war in Iran.
I want to talk to you a little bit about Cukor's relationship with Palantir.
It seems to be one of the most complicated threads in your book.
And Palantir, for those who aren't familiar, is a data analytics company.
It helps organizations make sense of massive amounts of information.
And Cukor became one of the most powerful internal advocates for Palantir.
How did that relationship begin?
And why was it so controversial?
Cukor learned about Palantir in the late 2000s when it was really quite a young company.
And he was looking for this data analytics solution
that could bring data together and deliver him a picture of war.
As he said to me, it's just a very hard question to know where the enemy is and where your own people are.
And this for him became a tool that he really believed in.
And others in the defence tech world, who in their military service relied on Palantir,
have spoken favourably of it as a tool to me.
He continues this relationship and he flies over to see them.
And he explains his entire vision for what becomes Maven Smart System,
a digital map and operating system with white dots, with coordinates,
that ultimately can pair a target to a weapon and shoot it.
And at the time, Palantir doesn't really want to do this
because he's asking them to do two things they don't see themselves as.
One, AI, and two, to create a user interface.
And they didn't see themselves as creating pretty user
interfaces; they saw themselves as doing data analytics, the crunching of that aspect.
But they went along with it, and a very senior person at Palantir, Aki Jain, told me that it really
is Cukor himself who convinced Aki Jain to, as he said, revisit his priors. He had a bias
against AI, so did all of Palantir. And they begin to listen to Drew Cukor to understand how AI
might support their data analytics. In addition to that, Palantir was already controversial within
the Pentagon. They had actually sued the Army in 2016 to gain access to a contract. This is a time
where you really have young, hungry companies beginning to say, give us a contract. There's this
sense that contract awards in the Pentagon are very old-fashioned, things function too slowly. So Palantir
has succeeded in getting a foothold in the Pentagon,
but was seen as very arrogant by many because it had sued
and it continued to claim its tech was the best.
Whether that was true or not,
the manner in which they said this irked several people.
And Cukor himself guided them not only on AI and what he wanted,
but also on the manner in which they should conduct themselves.
He said, we think you're great, but you need to tone it down.
How would you characterize Palantir in this story?
Is it an honest actor in it?
I think it's really fair to see it as a very divisive company.
You have got people who cheerlead for it with great passion
who feel that Palantir's tech save their lives.
You also have people who think they are arrogant,
risk being monopolistic, charging too much,
and simply make tech that is good, but not as good as everyone makes out.
Even as late as 2023, a senior commander who was using Maven Smart System awarded it a grade of C-plus.
So right the way through, you have problems with Palantir, and multiple members of the military lined up to tell me,
okay, we're using Palantir, but if something else better comes along, we'll switch.
I want to talk a little bit about some other active wars, particularly the war in Ukraine.
It seems, the way that you've been writing about this, that that's where AI warfare kind of became real at scale.
When Russia invaded back in 2022, the U.S. deployed Maven in support of Ukrainian forces,
but it almost immediately fell apart.
What happened and how did they fix it?
The computer vision had been trained on the Middle East, think hot, think of sand,
and suddenly it was being asked to identify Russian tanks in the snow in Ukraine.
So it wasn't delivering the detections that the US wanted to rely on.
Secondly, the system wasn't loading.
I found out there were often eight-second delays, which in a war is a lifetime.
And that was because, after a lot of investigation, it turned out that the networking just wasn't up to
it. It was in fact crisscrossing the Atlantic sometimes as much as four times. So that created
delays and sometimes even packets of data could fall off the network and you might miss
crucial information. So they really needed to work on the networking, the sort of arteries of
information, and they also needed to very quickly gather up imagery of Russian equipment
and retrain the algorithms.
And that was going on at a very fast pace.
People complained about getting phone calls at 2 in the morning.
Others welcomed them, in order to be part of this effort to support Ukraine.
You know, in reading about that from you,
one of the maybe legal lines in warfare is kind of this difference between supporting an ally
and fighting their war for them.
And you report that the U.S., as you said, was passing targeting coordinates directly to Ukraine,
sometimes through Signal, sometimes literally printed on paper and walked across.
By that measure, how close was the United States to actually being in that war?
I suppose that becomes a diplomatic question, and certainly the U.S. wanted to frame itself as being a supporter,
but not a direct participant
and that knife edge is really in the eye of the beholder.
Does Russia choose to see it that way?
Or does Russia say you've gone too far?
And so the US was very, very, very sensitive to that.
And the actual Project Maven operators
and those in the army who were using this system
were even more sensitive
because some people among their group said,
we are going too far.
And others said, we have to help Ukraine with everything we have.
And at the time, that debate was not public.
There's also some elegant language, which is this term point of interest.
So rather than saying we're sharing targets, we're passing targets to Ukraine,
they settled on this language of we're passing points of interest to Ukraine.
Everything short of the decision to target, which was a Ukrainian-owned decision.
But as even some of the people I spoke to for the book framed it,
it was almost a sort of Pinocchio-like
relationship, the Americans potentially pulling the strings on Ukrainian decisions.
And it got tighter and tighter and tighter.
One reason the Pinocchio metaphor isn't fully fair is because also both sides have
emphasized to me in interviews that they really developed trust.
And so the Americans ultimately were finding pieces of military equipment that on Ukrainian
information just looked like a truck.
But on US information, they were able to say, trust us, hit it.
And it was, in fact, a transporter erector launcher, essentially a mobile missile launcher.
And that relationship got faster and faster and faster until at one point the US identified a target.
In one example I'm told about, 18 minutes later, the Ukrainians were able to hit it.
Let's take a short break.
If you're just joining us, I'm talking with Bloomberg journalist Katrina Manson,
whose new book Project Maven traces how the United States built its AI warfare capabilities
and how those capabilities are being used right now in an active war in Iran.
We'll be back after a break. This is Fresh Air.
This is Fresh Air. I'm Tonya Mosley, and my guest is Katrina Manson,
a Bloomberg journalist and author of Project Maven.
We've been talking about how the United States built its AI warfare
system. Let's talk about Gaza for a moment. Israel reportedly used AI targeting systems,
Gospel and Lavender, in its campaign there. What does Gaza tell us about where the guardrails
on AI warfare actually are? Some defend the AI, saying the way it is used is down solely to policy.
And others have suggested that the way the IDF was prepared to potentially accept collateral damage,
meaning civilian harm, and that speed would not be palatable to the US.
It just isn't the way that the US currently operates.
And I should say the IDF defends its action, saying they have not broken
the law of war. They have been proportionate and discriminate. That's their position.
There are also these very stark numbers of 70,000 dead.
For me, a key question was to understand, was this defence of AI? Was it fair to try and separate
AI from policy? So for those who've expressed concern at the way the IDF pursued targets and
civilian harm, they've blamed policy rather than AI. So while several of the experts I've
spoken to make the distinction that they're totally separate, the tech and the policy,
many others argue that the more you have an AI-infused killing machine, the more
you can use it. Which brings up something else for me. You report that the U.S. has already
built weapons that can fly and select their own targets and kill without a human making the
final call, so autonomous weapons. And you name these two classified programs in the book,
Goalkeeper and Whiplash. Can you tell me briefly what they are and what does it mean
that they already exist? These are efforts to bring drones in the air and on the sea to life.
And this is for a very different conflict scenario.
This is the U.S. thinking about the defense of Taiwan.
So if China were ever to attempt an invasion of Taiwan, and if, another big if, the U.S. were to decide to help defend Taiwan,
there could be a very different scenario from the one that Ukraine is facing in Russia because of jamming.
So the fear is that China would disrupt
U.S. satellite communications such that it couldn't control its own drones, and the drones that
would protect and defend Taiwan against a maritime onslaught would need to be able to function
autonomously without any internet connection. And so the U.S. has been developing these drones
in a pursuit of autonomy for several years. Whiplash is an effort to put weapons on a jet ski
that can move autonomously,
and Goalkeeper is an effort to weaponise drones
and have them fly about
and be able to select and hit a target under their own steam.
Exactly what campaigners from Human Rights Watch
argued against at the dawn of Project Maven
and what the UN Secretary-General has called the pursuit of something
morally repugnant and politically unacceptable. That is the pursuit of lethal autonomous weapon systems.
Well, I mean, what is standing in the way then of any meaningful international regulation?
Because what does it actually mean that we're already at war while these particular conversations are still happening?
That's such a fascinating tension. There have been discussions at the UN, a UN body, for more than 10 years now.
And they are still trying to define what is an autonomous weapon system.
And the U.S. position has been, let's make it first and then let's work out what we need to regulate.
That, of course, speaks to a fear that China might get there first.
The U.S. has wanted to dominate this technology and to be the ones who could deliver it in a way that they felt they could use it and win.
But there is a push now to turn some of that work into a treaty.
And a treaty would, by all accounts, not include the likes of the U.S. or China or Israel or Russia.
Katrina, tell me if I'm right in this.
I mean, everything that we've been discussing, Maven, the autonomous weapons, this arms race between tech companies to supply the Pentagon.
I mean, all of this exists in large part because the U.S. is preparing for a potential conflict with China over Taiwan.
So what does this moment tell us about whether we're actually ready for that?
The U.S. has assessed that China wants to be capable of taking Taiwan by 2027.
So next year.
So this date has become this sort of drumbeat for the U.S. to make sure that if it wanted to,
it could fend off a Chinese invasion of Taiwan as soon as next year, but any time after that.
And there's been an increasing focus since 2018 on the prospect
of China being a potential adversary, not just a competitor on the global stage, but also a military adversary.
And you see now senior U.S. military commanders saying quite clearly, China is rehearsing for an invasion of Taiwan.
And how the U.S. could prevent that or help partners and allies prevent that is a subject of some anguish within those quite tight military circles that look at this.
There's a group that has really pushed for autonomy to say there's no way we can defend Taiwan without it.
We need to do much more.
And I was told that often Pentagon officials reassure allies and say, look, there is nothing inevitable or imminent about a Chinese invasion of Taiwan.
And if there is, we'll make sure we're ready.
But then they drop their voice in the corridors of the Pentagon and whisper, we're not ready.
And so there is this constant concern that the U.S. needs to go faster in developing autonomy that could withstand the sort of onslaught that might be involved in an attempt to take Taiwan.
One of the other things we're all kind of asking is whether we are the best custodians of this technology.
And after everything that you've reported, what is your feeling?
What do you come down to?
I know you're a journalist, but you're also greatly informed.
You have all of these facts in front of you.
When you meet people whose business is the business of war, your perspective changes because
there is so much risk and there is such a long tail of experience of these forever wars.
Many of the people involved in Project Maven were involved in the forever wars in Afghanistan and
Iraq and saw their friends die.
And they put this trust and belief in AI, that it could save their friends, that could save
them, that could save America, and, if AI were big enough and bad enough, that it could prevent
China from ever daring to go to war with America. So there's this deep belief in AI as some kind of
panacea. I think for me it raises the question of what is this idea of a costless war?
If you can make killing more remote, is that more palatable? We know that drone
operators and drone screeners, drone analysts, also experience post-traumatic stress.
And AI won't have those same reactions to watching the gore.
So there is that argument that you can protect operators.
I question whether you also can protect civilians by pursuing that notion of remote war.
And the bigger question I have is does remote war make war more possible, more likely?
Does it mean that someone will press play on the war option, not understanding the long, deep impacts?
So for me, there is a lot more to be done by the people who advocate for AI, to use it in the way they claim it can be used,
to deliver a better outcome.
Katrina Manson, thank you so much for your reporting,
and thank you for this book.
Thank you.
Katrina Manson is a reporter for Bloomberg.
Her new book is Project Maven:
A Marine Colonel, His Team,
and the Dawn of AI Warfare.
This is Fresh Air.
The rise of AI has had seismic implications for Hollywood.
Movie scripts can be written by bots,
and one AI company has even created a computer-generated actor.
But amid this transformation, one director has created an art installation
that harkens to the old days of cinema.
In 2000, an unknown Mexican filmmaker made waves at Cannes with a film about a car crash
titled Amores Perros.
The director Alejandro Iñárritu has now turned the film's extra footage
into an art installation.
Contributor Carolina Miranda reviews the show.
Walk into the first floor gallery at the Los Angeles County Museum of Art, and you'll be forgiven for thinking that you've wandered into the building's machine room.
The clatter of industrial appliances makes normal conversation a challenge, and the room is hot, even a bit steamy.
But move deeper into the space, and you'll find that you're actually in the middle of a movie.
Large projectors display looped scenes on six screens staged around the room, all featuring snippets from director
Alejandro Iñárritu's first film, Amores Perros, which debuted to much acclaim in 2000.
On one screen, you catch a piece of one of the movie's brutal dogfights.
On another, a hand reaches up a woman's skirt.
A car chase ensues and a brutal crash.
Then that same crash plays again from another angle.
This is Sueño Perro, devised by Iñárritu with the help of a robust production team.
The installation takes the
unused scraps of his groundbreaking film and transforms them into an environment that not only plunges
the viewer right into the movie, but into the act of filmmaking. You see slates marking the beginning
of the action. You see takes and retakes. Occasionally, the strips of colored film at the end of a
reel come into view, casting an orange light on the room. Sueño Perro, in Spanish, translates to
dog dream. And Iñárritu's installation certainly feels
like a dream of the original movie.
Fragmented, chaotic, out of order.
At times you hear the convulsive explosion of the film's climactic car wreck.
Sometimes that same crash occurs in eerie silence.
Like an actual dream, it's then up to the viewer to make sense of what the bits might mean.
Like any movie, the images also function as a timestamp of the past.
The old sedans look dated.
one of the film's stars, Gael García Bernal, is still a teenager.
And the Mexico City of the film is one that has not yet been gentrified by the digital nomads of the 21st century.
As Iñárritu writes in a book about the project, a film is made of time and light.
But what makes Sueño Perro truly remarkable is its analog nature.
Amores Perros was made before digital cameras had completely transformed movie making.
Iñárritu, a storyteller who embraces excess, shot a million feet of film to make the movie.
But the final cut, which clocks in at about two hours and 30 minutes, used only about 13,000 feet of that footage.
That left about 187 miles of film on the cutting room floor.
In an era when the word movie has come to mean a video you can shoot and edit on your phone,
Sueño Perro is a reminder that films once carried
physical weight. A 35-millimeter reel weighs about five pounds, and the average film was about
two reels long. The use of celluloid film also involves photochemical processing, and displaying the
work requires large projectors that generate heat and noise. Making a movie is a creative process.
It used to be an industrial one, too. Sueño Perro makes this industrial nature visible and visceral. In the gallery,
massive reels rotate on the large format projectors typically used in old movie houses.
Long strips of 35 millimeter film travel through elaborate looping systems that reach a height of more than 6 feet.
In addition, the designers have pumped a small amount of fog into the gallery,
making visible the beams of light projected onto each screen.
To enter the space isn't simply to be surrounded by the images of Iñárritu's movie,
but by the mechanics that make it possible.
It's a reminder of all the physical things that have been lost to the immaterial pixel.
Vinyl records have given way to streaming, newspapers to websites and apps.
Directors used to haul around heavy reels to display at film festivals.
Now at most they carry a small hard drive.
And as acts of creation have been turned over to artificial intelligence,
Sueño Perro stands as a reminder of what could go missing when you take out the human touch.
The physical world, full of love and pain, can be a really enthralling place.
Carolina Miranda reviewed Sueño Perro, on view at the Los Angeles County Museum of Art through July 26th.
If you'd like to catch up on interviews you've missed, like our conversation with Riz Ahmed on starring in the new series Bait as a British Pakistani actor whose audition to play James Bond sends his life into a spiral, or
with human rights lawyer Brian Stevenson about reflecting on the harsh truths of our nation's history,
check out our podcast. You'll find lots of Fresh Air interviews. And to find out what's happening
behind the scenes of our show and get our producer's recommendations on what to watch, read,
and listen to, subscribe to our free newsletter at whyy.org/freshair.
Fresh Air's executive producer is Sam Briger.
Our technical director and engineer is Audrey Bentham.
Our engineer today is Adam Staniszewski.
Our interviews and reviews are produced and edited by Phyllis Myers, Roberta Shorrock, Ann Marie Baldonado, Lauren Krenzel, Therese Madden, Monique Nazareth, Susan Nyakundi, Anna Bauman, and Nico Gonzalez-Wistler.
Our digital media producer is Molly Seavy-Nesper.
Thea Chaloner directed today's show.
With Terry Gross, I'm Tonya Mosley.
