The Daily - The Era of Killer Robots Is Here

Episode Date: July 9, 2024

Outmanned and outgunned in what has become a war of attrition against Russia, Ukraine has looked for any way to overcome its vulnerabilities on the battlefield. That search has led to the emergence of... killer robots. Paul Mozur, the global technology correspondent for The Times, explains how Ukraine has become a Silicon Valley for autonomous weapons and how artificial intelligence is reshaping warfare.

Guest: Paul Mozur, the global technology correspondent for The New York Times.

Background reading: In the Ukraine war, A.I. has begun ushering in an age of killer robots.

For more information on today’s episode, visit nytimes.com/thedaily. Transcripts of each episode will be made available by the next workday.

Transcript
Starting point is 00:00:00 From The New York Times, I'm Natalie Kitroeff, and this is The Daily. Outmanned and outgunned in what has become a war of attrition against Russia, Ukraine has looked for any way to overcome its vulnerabilities on the battlefield. That search has led to the emergence of killer robots. Today, my colleague Paul Mozur on how Ukraine has become a Silicon Valley for autonomous weapons and how artificial intelligence is reshaping modern warfare. It's Tuesday, July 9th. Paul, when we've talked on the show about the applications of advanced artificial intelligence,
Starting point is 00:01:05 one of the scarier ideas has been that militaries around the world could use it to make autonomous killing machines, i.e. killer robots. Your reporting shows that this may already be happening. Tell me about it. Yeah. So, you know, when I first got started on my reporting, I thought this was the stuff of sci-fi. Like, you think about AI hunting and killing somebody, and you think of the Terminator, of Arnold Schwarzenegger hunting people as a robot. Or you think of HAL in 2001: A Space Odyssey, an all-knowing robot that can kill people on a spaceship. But the thing is, the early versions of the technology that will get us there are already being developed. And they're being developed in Ukraine. And in some ways, Ukraine has become a nexus for the development of this type of autonomous military technology writ large. They're taking basically artificial intelligence and finding all kinds of new military
Starting point is 00:01:56 applications for it. Why has Ukraine become that nexus? So perhaps most importantly, they're outgunned. Weapons don't necessarily come quickly or predictably from the United States or Europe. They have to reach for anything they can use to fight this war. And so they turn to consumer technology and emerging technologies like AI to build new, effective weapons. The second point is that they're outmanned. And so as you're facing the prospect of defending all these trenches and you just don't have as many people as the Russians have, you need to come up with things that solve that problem. And what better than something like an automated machine gun or a drone?
Starting point is 00:02:35 And then perhaps something people don't realize is that Ukraine has been a bit of a back office for the global technology industry for a long time. Many of the apps you use every day were probably in some part coded by engineers in Ukraine. And so you have a lot of coders and a lot of skilled experts taking their abilities and saying, well, now we need to turn from building a dating app to figuring out how to stop the Russians. And that means building these new weapons.
Starting point is 00:03:01 And then finally, extremely importantly, this is a war of attrition. And so every day there's fighting going on. And that means you have the ability to test these weapons each day and, you know, to use the Silicon Valley term, iterate on them, tweak them and make them better. And so having what is effectively a sort of laboratory to experiment and find out ways to make AI ever more deadly really helps. Yeah, it sounds like you kind of have all of these conditions that line up to make Ukraine a perfect incubator to build this type of technology. I'm wondering, Paul, what this actually looks like on the ground.
Starting point is 00:03:36 What kind of weapons are we talking about here? Yeah, so I went to Ukraine in May. I met with all kinds of different tech startups and developers, and troops who use this technology. And perhaps the most startling moment wasn't near the front lines or anything. It was actually in a park just outside of Kyiv. And a couple of guys in their 20s and one in their teens who started this company that makes autonomous drones pulled up on motorcycles. They take me into a field, and they unbox a tiny little drone,
Starting point is 00:04:14 four rotors on it, kind of a smaller version of a typical drone you'd use to take pictures of your vacation or something like that. Maybe we can put the screen, like, somewhere where there will be not so much sun. Then they flip open this briefcase with a screen on it. And what they explained they'd done is they took a tiny little mini circuit board, a little mini hobbyist computer. Here we have a Raspberry Pi plus a thermal camera. And put software on it that allows that tiny drone to follow a tank, a piece of artillery, or even a human,
Starting point is 00:04:49 and eventually smash into it. And so the idea is if you have a shell on the bottom of it, it becomes something of a missile. I had heard of this before, but I hadn't seen it. And so they said, well, you know, we're going to show you. And so the CEO then flips on his motorcycle helmet, revs the engine a few times, and rips off down this dirt road as a target. And one of the teammates launches the drone, and it's hovering above. And then what you kind of see is that he centers the crosshairs on the motorcycle.
Starting point is 00:05:29 And at that point, the machine takes over from the human. And so the drone starts sort of stalking the motorcycle, and it's getting closer and closer. And on the screen, you can see it. You know, it's lining up to swoop in, and his friends are crying out, Go faster, go faster, you're screwed, oh my god, you're done. It's basically not more than a couple of feet away from him. Okay, it was really close. And they hit a button and turn off the autonomy, and the drone flies back up into the air and they're laughing.
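To make concrete what that demo involved, here is a minimal sketch of the kind of lock-on tracking loop a hobbyist computer like that Raspberry Pi could run. This is an illustration under stated assumptions, not the startup's actual software: it assumes OpenCV with its contrib trackers installed, and the camera source and the steer_toward flight-control hook are hypothetical stand-ins.

```python
# Illustrative sketch only -- a generic lock-on loop, not the startup's code.
# Assumes OpenCV with contrib trackers (pip install opencv-contrib-python).
import cv2

def steer_toward(cx, cy, frame_shape):
    # Hypothetical flight-control hook: a real system would turn this pixel
    # offset into motor commands. Here it only reports the offset.
    h, w = frame_shape[:2]
    print(f"target offset from center: x={cx - w // 2}, y={cy - h // 2}")

cap = cv2.VideoCapture(0)              # stand-in for the drone's camera feed
ok, frame = cap.read()
if not ok:
    raise RuntimeError("no video source")

box = cv2.selectROI("feed", frame)     # the operator centers the crosshairs once
tracker = cv2.TrackerCSRT_create()     # cv2.legacy.TrackerCSRT_create() on some OpenCV versions
tracker.init(frame, box)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    found, box = tracker.update(frame) # runs onboard every frame, no radio link required
    if found:
        x, y, w, h = map(int, box)
        steer_toward(x + w // 2, y + h // 2, frame.shape)
```

The detail that matters for the story is in the loop: after the tracker is initialized, every later frame is processed onboard, which is why cutting the radio link between pilot and drone, the jamming defense Paul explains a moment later, doesn't stop it.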
Starting point is 00:06:06 And it's a funny moment because they were able to run down their CEO. But the darker reality of it is that if this was an armed drone with a shell and they hadn't hit that button at the end, their CEO would be a goner. And this is the technology that is already being used on the Ukrainian front lines to hit Russian targets. It's kind of amazing. What you're seeing is the computer take over for the young startup guy in tracking down and targeting his CEO as he's swerving on his motorcycle. And that's why this technology is so powerful, right? This is not a remote control drone. This is a drone powered by artificial intelligence that's making judgments on its own. Yeah. And I think what's important to realize about these guys is, you know,
Starting point is 00:06:50 they're not doing something that miraculous. What they're doing is taking basic code that's already out there, combining it with some new data from the war, and making it into something entirely different, which is a weapon. And what this automation system gets around is one of the great protections against these tiny types of drones: radio jamming. If you can break the signal between pilot and drone, you can stop that drone from swooping in on your expensive weapon system. But with this, it doesn't matter what the pilot sees. Once they hit that lock with the help of this AI software, it will keep going. And so you all of a sudden are completely helpless to stop it unless
Starting point is 00:07:31 you shoot it out of the sky. It's kind of insane. I mean, we're talking about basically autonomous kamikaze drones. Tell me about the other forms of AI that you saw. Yeah. So if you imagine that single kamikaze drone, the next step is to make a swarm of those kamikaze drones. And so there are a few companies who are building swarms of drones. What they're testing now is a single four-rotor drone that watches over the battlefield, but it has its own little pack of kamikaze drones. It's constantly searching with its camera, and it can choose and identify a target. When it sees a tank, it can dispatch one of those drones to go in for a hit. And one of the companies I spoke to, called Swarmer, recently tested that
Starting point is 00:08:15 technology and hit a target 60 kilometers away. Wow. So, you know, this is again not a very expensive thing. This is hardware that costs thousands of dollars, and they're hitting weapon systems 60 kilometers away with incredible precision. Another thing that is emerging is a sort of autonomous machine gun turret. What this uses is computer vision of the kind you'd have on a lot of surveillance cameras or even on your iPhone, you know, your smartphone. It will sort of circle a human and identify it. So you take similar technology and you put it on a machine gun, and that machine gun can then automatically see targets as they move. And then all it takes is a human to press the trigger. And it's already being tested right now. Some of it's already, you know, achieving kills and taking out targets. Paul, did you see the automated machine gun in action?
Starting point is 00:09:07 Yeah, I did. And the story of it is actually fascinating. And it kind of indicates why Ukraine is a place where these weapon systems are emerging. So we went to a range and met a commander of a battalion called Da Vinci Wolves. And Da Vinci Wolves are very well known in Ukraine for their experimentation with weapons systems. And the commander we met, Oleksandr Yabchanka, really looks the part, with his Cossack haircut and a bushy mustache. And he was very excited about the gun. He actually named his dog after the gun. And the reason he's so excited is because he helped create parts of the gun. He's helped innovate it. And, you know, unlike the engineers that we were seeing in Kyiv, he's a soldier. So he's using this technology on the front, and he's sort of the eyes for a lot of these companies.
Starting point is 00:09:54 And he gives feedback to them about how it's working and what they need and so on. And so he told me this amazing story. He was fighting in Bakhmut, a city in eastern Ukraine that the Russians were trying very hard to take over. And his unit was tasked with defending the only road in and out of the city. And they kept having this problem, which was that their machine gunners were just constantly a target. A machine gun is a big, heavy gun that can't easily be moved. You need to man it at all times. So he did what I think a lot of
Starting point is 00:10:26 us do these days. He went online and tried to find a solution. And he ended up asking on Facebook if anybody had an idea. And in just a few months, he had a working prototype in his hands from a company called Roboneers. And what the gun does is it uses cameras and what is effectively a video game console, a sort of portable thing that looks like a Nintendo Switch, and it can automatically identify targets as they come over the horizon or appear. And then it kind of automatically aims, and all the soldier has to do is press the button and shoot. It sounds honestly quite terrifying.
Starting point is 00:11:04 An automated gun crowdsourced on Facebook. The whole thing sounds really outlandish, if I'm honest. Yeah, but it solved his unit's problems. He said it was great. We could sit back in the trench, drink coffee, smoke cigarettes and shoot Russians. And it solves this problem using very basic technology that's very powerful, but that lives in your smartphone and on your video game systems, and is pretty easy to build on with artificial intelligence. Paul, you've said this technology has been quite effective in Ukraine. But just to express a dose of my skepticism here, I think many of us interact with AI through ChatGPT or Gemini, and we've seen the kind of hilariously bad answers those systems can
Starting point is 00:11:57 produce. Like, we can't even depend on AI to solve a crossword puzzle. I'm just wondering, how can Ukrainians rely on this technology for much higher-stakes stuff? I mean, hitting the right targets in war. Are they finding that their AI is making mistakes? We don't really know the answer to that. We do know that the systems work pretty well. And part of the point is that they're supposed to be cheap, so they don't always have to work. If they work 80% of the time and they're cheap, that's okay. I will say that another reason why they want the human in the loop who can turn off the AI is that they're afraid of friendly fire and hitting the
Starting point is 00:12:40 wrong target. There's an ethical consideration, but there's also just a very practical one: this tech could go wrong and go at the wrong person or identify the wrong thing, and they need to be able to turn it off. So there are still humans in the mix. But, and perhaps this is even scarier, it's super simple to just take the human out of the mix.
Starting point is 00:12:56 And how close are we to that? I mean, is Ukraine's military contemplating a scenario where the humans really go away and the machines and their judgments are entirely responsible for killing? The official line is that the human will stay in the loop for the foreseeable future. But people had different takes on it. And one guy who had a particularly interesting answer was an executive at the firm that made that automated machine gun, Roboneers.
Starting point is 00:13:31 His name is Anton Skripnik. And I met him at his offices in Ukraine. So, you know, realistically, the pace that we're seeing, when do you think we're going to start seeing the first automated kind of killing? And when I asked him how or when the first automated killing on the front lines might occur... Maybe it was already done. Most likely it was already done. You know, he said it honestly has probably already happened. He had no way of knowing for sure, but... People very often just do something to survive, to complete a mission without, like,
Starting point is 00:14:11 sharing information, and this is, like, not bad. You know, in a fast-paced, high-stress environment on the front, where life and death are oftentimes a matter of an instant decision, it's very possible that a soldier somewhere flipped a switch that allowed something to just go fully automated, or autonomous, in this way. Is there any thought of changing that, or is that something that you guys are sort of standing by? And so I asked him, okay, but you guys aren't doing this. Like, there is no, not a single request about, like, having that. He said, no, for us, you have to hit this trigger every time the gun sees a target so that it shoots.
Starting point is 00:14:55 How long would it take you to do it if you wanted to? And I said, so, okay, but if you wanted to make it fully autonomous, how long would that take? Tomorrow. Tomorrow. Because today it's already, like, half of the day. Yeah, yeah. I've taken two hours of your time. So basically no time at all. It's a matter of a few lines of code because these things are already effectively doing the auto-targeting. It just has the human pulling the trigger. So to make the computer pull the trigger is
Starting point is 00:15:22 almost so easy, it's trivial. Wow. So what Anton is saying essentially is that he already has the technology to create a robot that makes the decision to kill on its own. There's a human operator for now, but that's not a necessity. Exactly. And, you know, his answer really hit home for me because I still, I think, even sitting there had thought that this was the stuff of science fiction. But what I realized at that moment is that, you know, the era of the killer robot is already upon us. We're already here. And, you know, that raises just a huge number of ethical and moral questions about the future of warfare, the future of accessibility to these kinds of weapons, and, you know, what it requires to kill a human being in the future.
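Anton's "tomorrow" answer is easy to picture in code. Below is a hedged sketch, assuming a hypothetical fire-control loop: every function here is an invented stand-in, not Roboneers' actual software or API, but the structure shows why the gap between machine-assisted and fully autonomous can come down to a single branch.

```python
# Hypothetical fire-control loop, for illustration only. Every function here
# is an invented stand-in, not any real product's code or API.

HUMAN_IN_LOOP = True  # the single setting everything hinges on

def detect_targets(frame):
    # Stand-in for a computer-vision detector (person/vehicle recognition).
    return []

def aim_at(target):
    # Stand-in for the turret's aiming servos.
    pass

def fire():
    # Stand-in for the actual weapon trigger.
    pass

def control_loop(read_frame, trigger_pressed):
    while True:
        frame = read_frame()
        targets = detect_targets(frame)   # the machine does the spotting
        if not targets:
            continue
        aim_at(targets[0])                # the machine does the aiming
        if HUMAN_IN_LOOP:
            if trigger_pressed():         # a human makes the kill decision...
                fire()
        else:
            fire()                        # ...or, with one flag flipped, no one does
```

None of this is the real system, but it makes the reporting's point literal: the targeting and the aiming are already machine work, and "tomorrow" is roughly the time it takes to change one conditional.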
Starting point is 00:16:19 We'll be right back. Paul, you started to get into the potential moral questions raised by a world where robots need no human input at all to make decisions about killing people. Let's talk about those questions. So I guess the first big consideration is who has access to this and where it will spread. And, you know, the first group that does is powerful countries, right? I mean, the United States is developing swarms of drones that can accompany its fighter jets. Other major military powers in Europe and China, for instance, also are developing this kind of thing. But it's also not just those guys. You
Starting point is 00:17:02 know, Ukraine, for instance, has channels where it's been sharing tips on drone warfare. You know, we did a story earlier in the year out of Myanmar where we found Burmese drone pilots were training on Ukrainian software that taught them to use kamikaze drones. And so the question becomes, how long until this sort of automated targeting software is shared? And perhaps what's scarier is that Russia also is developing very similar solutions to the Ukrainians. And so who will they share it with? Will they share it with North Korea, Iran, certain fighters in Sudan? So the point is, it's very easy to spread software.
Starting point is 00:17:42 I mean, this isn't even a piece of hardware. This is just something that plugs into a piece of hardware. This is just something that plugs into a piece of hardware. You can send it over an email. And the guys that we talked to in the field who are flying the drones, one of the problems they had with their technology when they showed it to the Ukrainian military is that it wasn't encrypted. So the Ukrainians were afraid that if their drone crashed behind enemy lines without blowing up, the Russians could take the little mini computer, download the code and use it to build their own system and hit the Ukrainians. So software spreads incredibly fast and incredibly easily. And it's going to be extremely hard once these solutions are developed to stop them from going almost anywhere. It's not hard to imagine a dark website that allows you to sell all manner of autonomous drone attack systems. Right. There was one U.S. official I was speaking with who has huge concerns
Starting point is 00:18:30 about the terrorism implications of this. So, I mean, it's not hard to imagine. I mean, take a drone, for example. You know, you could fly something in from 20, 30 miles away, and it becomes extremely difficult to defend against. That raises a lot of questions, obviously, and I have to assume that ethicists and human rights officials are asking some of them. For example, is there any way to regulate AI weapons? Can we put limits on their use? Yeah, so this is something that's been debated in the UN by panels of experts for years. But we never really get to anything particularly concrete, in part because countries are already in an arms race to develop these things. And every time anybody proposes some kind of a rule, it's vetoed, if not by the United States, then by China, by Russia, by other countries in Europe. But there are some basic principles that ethicists sort of rally behind, things like keeping a human in the loop so that the human makes the ultimate decision, even if there's automated targeting going on. And that's the sort of line the Ukrainians
Starting point is 00:19:36 are standing behind. But again, there's really not much out there to stop any of this from going wherever it wants to go. And honestly, it feels like we're already heading in that direction. Paul, in listening to you this whole time, I've been wondering how we should think about this idea of software making the decision about who lives and dies. Because on the one hand, I have to say the idea of robots hunting down humans is truly frightening. But on the other hand, it's not as if humans are known for their restraint in war. I mean, right now we're witnessing two wars in Ukraine and in Gaza where human-led military campaigns have killed tens of thousands of people, many of them civilians. And so I'm wondering, in your reporting, have you come to think of this technology as in any way better or a more precise form of warfare?
Starting point is 00:20:30 Yeah, so I think what was interesting is some of the technologists building this that I spoke with did make this case. And their logic goes something like: if we have robots fighting robots, humans aren't dying. If we can put rules inside the software, say, that no children will be killed by this weapon, we can prevent it from doing certain things that maybe a really bad human would do. And you could maybe even create spaces, like the front line, where just nobody can step foot, you know, for four kilometers on either side, because the weapons are so deadly. Neither side can move forward, and you just create a perfect stalemate. But I guess history shows us, in my understanding of history at least, that that's not the way this will probably go. And that every time in the past we've seen a breakthrough in weaponry, oftentimes it's
Starting point is 00:21:21 just meant more devastating weapons get created. You know, I thought back to Alfred Nobel, who famously thought dynamite would end war. And of course, it simply made more powerful, deadly bombs. And it feels like that is the kind of future that we are treading into. But I just think that it's very hard to sit where we are, you know, in a place that's not at war, and tell people building these things, weapons to defend their families and their friends who are going off to war against an invader, to stop doing it because it could make us unsafe in the future. And even some of the ethicists I spoke with, who are very opposed, who have dedicated their careers to fighting against autonomous weapons, would throw up their hands and say, well, I can't really argue with the Ukrainians. One of the Ukrainians I spoke with who's making autonomous drone systems said, you show me a hypothetical victim and I'll show you a real dead soldier and, you know, a family that now has to live without him. And that leaves you in a very hard place, because you kind of have what seems like a runaway train. You can't morally argue for people to stop building things to defend themselves.
Starting point is 00:22:45 Yet what they're building basically secures a future that will be far more dangerous than the present that we live in. And as long as this war in Ukraine goes on, we are going to see more advanced systems get developed. And I just don't know how we avoid a future in which we have ever more powerful, ever more autonomous weapons. And that's pretty scary. Paul, thank you so much. Thank you. On Monday, Russia launched one of the deadliest assaults on Kyiv since the first months of the war,
Starting point is 00:23:29 striking Ukraine's largest children's hospital as part of a barrage of bombings across the country. At least 38 people were killed in the attacks, and more than 100 were injured. We'll be right back. Here's what else you need to know today. The bottom line here is that we're not going anywhere. I am not going anywhere. On Monday, in a move to save his candidacy, President Biden told congressional Democrats in a letter and on MSNBC's Morning Joe that he would not withdraw from the race
Starting point is 00:24:22 and accused those asking him to step aside of being routinely wrong about politics. I don't care what those big names think. They were wrong in 2020. They were wrong in 2022 about the red wave. They're wrong in 2024. And go with me, come out with me, watch people react. You make a judgment. Biden faces what could be the most crucial week of his candidacy,
Starting point is 00:24:47 as he contends with growing concern among Democratic lawmakers about his age and ability to win re-election. He also spoke directly to some of his biggest fundraisers and donors in a private call, telling them Democrats needed to shift the focus away from him and back to Trump. And, as Tropical Storm Beryl battered Houston and its suburbs on Monday, at least two people were killed by fallen trees, and nearly three million homes and businesses lost power in Texas. The storm is expected to move across the eastern half of the United States
Starting point is 00:25:20 over the next several days. Today's episode was produced by Will Reid, Clare Toeniskoetter, and Stella Tan. It was edited by Lisa Chow, contains original music by Dan Powell, Elisheba Ittoop, and Sophia Lanman, and was engineered by Alyssa Moxley. Our theme music is by Jim Brunberg and Ben Landverk of Wonderly. That's it for The Daily. I'm Natalie Kitroeff. See you tomorrow.
