Irregular Warfare Podcast - Artificial Intelligence in Counterterrorism and Counterinsurgency

Episode Date: January 1, 2021

What role do information and intelligence play in counterinsurgency? How can artificial intelligence assist in tracking and identifying insurgent or terrorist activity? What are some of the opportunities and challenges of using AI in irregular warfare contexts? Retired Gen. Stan McChrystal and Dr. Anshu Roy tackle those questions and more in this episode. They argue that AI allows counterinsurgent and counterterrorist forces to aggregate and process massive amounts of data that illuminates and even predicts insurgent activity. However, there are challenges that come with this groundbreaking opportunity. Intro music: "Unsilenced" by Ketsa Outro music: "Launch" by Ketsa CC BY-NC-ND 4.0

Transcript
Discussion (0)
Starting point is 00:00:00 If we think about what a terrorist does, a terrorist creates turbulence. They create fear. They make you respond. And what does a counter-terrorist do or a counter-insurgency force do? Try to either maintain or re-establish order. That turbulence is almost impossible to depict. It's very hard to describe, and then it is hard to prescribe actions to reduce it. There's actually an inherent order within turbulence. Turbulence is not purely chaos, and that's what sets it apart from chaos. To have the ability to discern that order and understand it so that you can intervene and change that order is at the heart of this particular very complex problem.
Starting point is 00:00:57 It's a classic complex system. There's not one question. There are multiple questions. There's not one variable. There are multiple variables. And it's in the interplay of these variables that most of the opportunities for intervention lie. Welcome to Episode 17 of the Irregular Warfare Podcast.
Starting point is 00:01:22 I am Nick Lopez, and I will be your host today, along with Kyle Atwell. Today's episode takes a look at artificial intelligence's role in counterinsurgency and counterterrorism efforts. Today's conversation starts with an introduction to the role that information and intelligence play in counterinsurgency. Governments battling insurgents, such as the United States and its partners in Iraq, Afghanistan, and beyond, focus significant effort on identifying and tracking insurgent activity. Our guests argue that new artificial intelligence technologies allow the United States to aggregate and process massive amounts of data that illuminates and even predicts insurgent activity. The conversation concludes with a discussion of the opportunities and challenges of integrating
Starting point is 00:01:59 artificial intelligence into the modern battlefield, with implications for both policymakers and practitioners on the ground. General Stan McChrystal is the founder and CEO of the McChrystal Group that advises senior executives at multinational corporations. General McChrystal is the former commander of the U.S. and International Security Assistance Forces Afghanistan, and he is also the former commander of Joint Special Operations Command. He is a best-selling author and is on the board of several companies. He is a graduate of the United States Military Academy at West Point and Naval War College. Dr. Anshu Roy is the founder and CEO of Rhombus Power. He holds the patent for solid-state subatomic particle detection
Starting point is 00:02:41 and is the architect of an artificial intelligence platform used by several government agencies. Anshu received his PhD from the University of Michigan, where he researched the intersection of materials, complex systems, high-performance computing, and turbulence. You are listening to the Irregular Warfare podcast, a joint production of the Princeton Empirical Studies of Conflict Project and the Modern War Institute at West Point, dedicated to bridging the gap between scholars and practitioners to support the community of irregular warfare professionals. Here's our conversation with Stan and Anshu. General Stan McChrystal and Dr. Anshu Roy, welcome to the Irregular Warfare Podcast,
Starting point is 00:03:22 and thank you both for joining us today. Nick, this is Stan, and I want you and Kyle particularly to call me Stan if you wouldn't. I'm sure I already have friends. So no problem there. Great. Thank you, Nick and Kyle, for inviting me to this podcast. Really appreciate it. It's fantastic to have you both on. And it'd be great to start with Stan. So Kyle and I are familiar with your background as commander of International Security Assistant Forces in Afghanistan and also the commander of International Security Assistant Forces in Afghanistan and also the commander of Joint Special Operations Command during the height of the surge in Iraq. We were surprised in our call with you and Anshu last week because you both started to discuss artificial intelligence,
Starting point is 00:04:00 which was rather fascinating for us. Can you provide some context for our listeners and discuss what drives your interest in artificial intelligence? Absolutely. And thanks for having me on this. Anshu and I became friends a year, year and a half ago because he approached me, but I was immediately fascinated by the ability to leverage information, call it intelligence, call it data, whatever you want, to figure out patterns and solve problems. If you think back to the problem of irregular warfare, whether it's counterterrorism or counterinsurgency, it's not a case of military might. It's not whether or not we have enough tanks or enough forces. It's whether we understand the situation well enough,
Starting point is 00:04:46 because our opponents are necessarily trying to deceive us of what they are doing and how they're doing it. And so the problem becomes one of first appreciating the problem and then being able to break it down well enough that you can address the problem. And often that means, where are the enemy? What are they doing? And of course, now, most importantly, what are they planning to do? And so it's all a case of me being fascinated by how do you establish an understanding of patterns from information that makes information usable. And Anshu, how did you get started in working in this space, specifically within the Department of Defense and then counterinsurgency, counterterrorism problem sets? So my background is in complex systems, having studied the most, shall we say, the most
Starting point is 00:05:40 challenging problem in the field of classical mechanics, which is turbulence, through my PhD. I've always been drawn to what is difficult and seemingly intractable. And my philosophy in general has been to leave my mark to the best I can towards the solutions to such problems. So just a little bit about the company Rhombus that I started in 2010 after a fairly long academic stint. The journey began with trying to solve a very different problem, which had to do with mapping out debris in a melted reactor core in Fukushima. That was for five years. We invented a neutron sensor, which became successful. In 2015, we decided to expand our vision and use some of the machine learning that we had been employing in Fukushima towards problems that would, in my mind at least, have game-changing impact on national security.
Starting point is 00:06:42 That was the start of our journey to build Guardian, our platform. Two coincidences. One, around that time, DIUX came to the Silicon Valley, and it was a wonderful opening for companies like us to be able to interact with folks who knew what the problems were and directly be able to interface our potential solutions with those problems. That was through DIUx. So that was one coincidence. And the second forced serendipity, as I call it, we happen to be at NASA, NASA Research Park here in Moffett Field. We're surrounded by all kinds of incredible talent and folks who are aligned on the vision of solving pressing problems. And so we were able to put together a very, very good team to address some of those problems very directly. So that was how we got involved in this space.
Starting point is 00:07:37 Can I jump in on that, Nick? Because, you know, Anshu, there he was at Fukushima trying to figure out nuclear particle debris, and he never once called me for advice, John. Now I'm hurt. He left you on the sidelines there. Here I was. Yep. So I wanted to start with you, Stan. Can you tell us, based on your experiences, what is the role of information in counterinsurgency and counterterrorism? Well, it's understanding what is happening role of information in counterinsurgency and counterterrorism?
Starting point is 00:08:10 Well, it's understanding what is happening and hopefully why it's happening and then being able to figure out what is going to happen and be able to affect that. When Anshu mentioned turbulence, if we think about what a terrorist does, a terrorist creates turbulence. They create fear. They make you respond. And what does a counter-terrorist do or a counter-insurgency force do? Try to either maintain or re-establish order. If you think societies, try to get security, try to get things working. Meanwhile, your opponents are trying to constantly induce turbulence. And it's this constant fight. If you go into a military command center, because military minds work this way, the first thing you do is have a map. In the old days, it was a paper map with acetate over it. And you try to draw the situation on there. And that's our attempt to make order of the battlefield
Starting point is 00:09:08 And that's our attempt to make order of the battlefield and to do something that we can get our mind around. And of course, in previous generations of warfare, that worked pretty well because a map could be a general depiction of what was happening. Although we always know that it was a lot more chaotic on the ground than the map showed. In an insurgency or counterterrorist situation, that turbulence is almost impossible to depict. It's very hard to describe, and then it is hard to prescribe actions to reduce it. And so this is, and my fascination with what Anshu and his team do and just the concept is, how do you create a model or a mindset that lets you start to, one, live with that complexity, and two, start to figure out what you could actually do about it? Yeah, and some have argued that in insurgency, the insurgents have an information advantage over the counterinsurgents. Is that something that you saw in your time working on these issues? They do for two reasons. The first is they have what we call proximity, and that is physical proximity. They are typically there in the village or in the local area. And two, they've got cultural proximity. They are typically of the area, and so there's a natural connection to it.
Starting point is 00:10:26 The other thing that we found as we studied it when I was commanding in Afghanistan, we brought London School of Economics analysts in, and they came up with some very interesting data. a violent event somewhere, support for the government went down. If security forces, Afghan or coalition forces created a violent event somewhere, support for the government went down. Whatever created violence caused support for the government to go down, regardless of who did it. And so the Taliban had a huge information advantage. All they have to do is create violence, turbulence, and societies who desperately need order to operate and live can't stand it. And it's easier to break things and build things if my kids have taught me anything about the world. So, Stan, you mentioned turbulence is very hard to depict and to gain like an understanding of exactly what the problem set is. And not only is it hard to depict at the tactical
Starting point is 00:11:34 level, but as you go to the operational and strategic level, it becomes even more difficult because there's dissonance between the strategic, the tactical level in terms of what is exactly happening. How do you fix that or attempt to fix that to gain a shared understanding of the problem set from the tactical level to the strategic level, especially in counterinsurgency and counterterrorism? Yeah. Nick, this is something that I would say first is understand you are not going to order something more than is possible. We used to have things like congressional delegations come to visit either in Iraq or Afghanistan. And they would say, OK, General, I've heard this complicated description you've given. Now tell me the one big problem.
Starting point is 00:12:20 And I literally would go, have you not been listening? There is not one big problem. There's a whole bunch of them. Oversimplification. Yeah. And so I think what we've got to do is start by understanding that is the nature of it. And so deal with it as it is. And don't try to pretend that you can create more order than is possible.
Starting point is 00:12:43 In your career, have you seen a change in how we handle information processing, especially vertically and horizontally, whether that's across different government agencies or from the tactical to the strategic level? Sure. When I was a young officer, we were not at war. It was 1976 when I first came in. We do training exercises. So intelligence would come in very limited amounts and everybody would grab on this piece of intelligence and study this single report. And so the problem was not being able to digest the intelligence. It was the problem was get more. And then what happened 15, 20 years later, we suddenly could collect far more than we could analyze or understand and make sense of. So the problem
Starting point is 00:13:31 flipped. Suddenly we had oceans of information that was valueless to us if we couldn't figure out what it meant. And so I don't think we've solved that. And one of the reasons why I think what Anshu and Rhombus does offer such leverage is we are in this problem of trying to either curate the intelligence or limit it down. A number of different approaches I've seen, none of which have yet, in my mind, gone nearly far enough to be able to pick out the important conclusions from this mass. If I may just add one more layer to what Stan said, there's something we ought to know about turbulence, which I think folks tend to sort of overlook. There's actually an inherent order within turbulence. Turbulence is not purely chaos. And that's what sets it apart from chaos. To have the ability, like Stan was saying, to sort of discern that order and understand it so that you can intervene and change that order is at the heart of this particular
Starting point is 00:14:39 very complex problem. It's a classic complex system. Like he said, there's not one question, there are multiple questions. There's not one variable, there are multiple variables. And it's in the interplay of these variables that most of the opportunities for intervention lie. Anshu, let me ask you a question if I could. If you think of a human-driven turbulent system or complex system like an insurgency or terrorist, do the terrorists or insurgents have to understand that's what they're doing? That's an interesting question. And I think one has to give them credit for the various reasons you mentioned, which is proximity and their local knowledge.
Starting point is 00:15:23 I think they have a very instinctive understanding of what creates spectacular impact, which is what terrorism is really all about. The effects are, in the larger scheme of things, can be said to be not particularly large, but the overall influence on what the mindset of the counter-incgency movement is, is extraordinary. So yes, I think the short answer is they do. Stan, you stated that the problem with intelligence has shifted from not having enough when you were a young officer to having almost more than could be digested or processed later in your career. Can you discuss a little more how intelligence collection and processing evolved over your time in the military? My experience in command was revolutionary
Starting point is 00:16:11 compared to the first part of my career. I'd been in the military 24, 25 years when suddenly ended up in Iraq in commanding the counterterrorist forces. And several things had changed from earlier in my career. First, we had this ability to collect massively more information. Signals intelligence was much easier to get. And of course, the enemy is using more devices so you could collect more. Second, we had the rise of unmanned aerial vehicles and other aerial platforms which skyrocketed. So suddenly we had all of this information that we could collect. And we realized, one, we didn't know how to collect. We didn't know how to focus it. And second, we didn't know what to do with it when we did collect it. And so it became this incredible learning curve as we started to understand that if we
Starting point is 00:16:58 could harness intelligence, if we could focus our collection, if we could actually digest and then act quickly enough so that the information that we got was still relevant, because of course everything's temporal in that situation, then we could be much, much more effective. And so the big revolution inside Joint Special Operations Command was to collect all the different parts of the organization, connect them in real time so that information is passing in a speed we'd never considered before. And instead of being linear, it's like almost on a computer chip or a silicon chip where information is passing with incredible speed. And then we're able to act on it. We still were probably at the very early stages, like people who had just discovered fire, and we learned we could do cooking and burn
Starting point is 00:17:52 our neighbor's house down. But we understood we had something of extraordinary power there. Anshu, I'm particularly interested in how AI has addressed the information processing challenge that Stan has identified. More specifically, where does the current application of AI reside? Is it at the strategic or tactical level? And would you say its primary focus is enhanced situational awareness? So Nick, AI as a technology is being currently employed at all levels, strategic to tactical. And you have companies that are really good at taking various kinds of signals and making tactical sense out of it on the ground for improved situational awareness. That's one kind of AI. Then there are companies such as us that try to integrate the entire picture from strategic to tactical by taking other providers as well into the ecosystem that
Starting point is 00:18:53 we've built on our platform. And on top of that, we create a layer of decision advantage so that folks all the way from strategic to tactical, can have the same level of awareness. And more importantly, predictive awareness of what is likely to, what are the possible futures one can imagine from this, right, based on the data. Machine learning is being used all over the board. Machine learning as a component of artificial intelligence, where artificial intelligence is set for providing advantages at various levels, that is also happening. So where we're at, Stan, is you developed through various means, a lot of it technologically based, a process or a system where you could collect information about the battlefield.
Starting point is 00:19:41 And this isn't a counter-terrorist fight. So you could figure out where terrorists or insurgents were located, and you get just massive amounts of information, and those would drive where you would do your next kinetic operations. But the shortfall you had is that you just couldn't process it because it was too much information. And then you are seeing artificial intelligence as a way to essentially process the information, take the humans out of that processing loop, and that allows the humans who are involved to focus on getting ready for the next mission, not trying to process just too much information. Is that pretty much a good wrap-up of where AI fits into counterinsurgency and counterterrorism? Let me say it back to you in the way I think of it.
Starting point is 00:20:21 What JSOC basically did was we leveraged emerging IT systems, which were essentially video teleconference and pervasive communication, so that we could connect all of our parts of the force in real time. And then we leveraged some new technology like unmanned aerial vehicles to collect more. But the real breakthrough was in connecting the pieces process-wise and culturally, because culturally there'd been aversions to doing that and process. So we slammed that stuff together and we started getting information passing like never had before. That didn't solve, as you correctly said, the problem is now we got all this information. And sometimes we're literally stepping back and looking at and go like a big pile and say, we know the answers in there.
Starting point is 00:21:10 And, you know, you start digging through and we had really smart people. And the most valuable people to us turned out to be those all source intelligence people who were almost like artificial intelligence because they'd been around a long time. They'd seen and heard. They kind of remembered it. And so they intuitively could pull you toward trends and conclusions. But you became extraordinarily dependent upon those people. Like Anshu said, it was never a bank of 150 intelligence analysts and parallel desks and rows that could do it. It was always a smaller number of people who are hearing so much and just sort of instinctively wait a minute, I remember that and connect something. And why I'm so excited about this,
Starting point is 00:21:59 if the machine can help us do that, if the machine can pull all that stuff together, then you create these super analysts, I'll call them, who can harness the machine and take it, put the human part and be even more powerful. Building on what Stan said about having the machine being able to do something like this, what the machine is able to do is to take this all source data, collect it, What the machine is able to do is to take this all source data, collect it, aggregate it, analyze it, apply some level of reasoning to it, which I'll get into in a second, and then be able to visualize that to address the question that you are asking of. So, for example, if the question is, where is the adversary going to strike next? Okay, I want to predict it.
Starting point is 00:22:42 The question is, where is the adversary going to strike next? Okay, I want to predict it. Then you can basically encode the system to try and address that question. Now, how do you make a machine reason? The short answer is, in these complex problems, it's difficult. You probably can't. But what you can do is you can leverage these super analysts that Stan just mentioned. Take multiple such analysts and feed their reasoning models on top of the analysis you're doing. So where did we leave this 10 years ago in terms of capability was we could know where things are. Oh, here's a cow,
Starting point is 00:23:21 here's a goat, here's a guy. Okay. How are these three in proximity connected to each other so that this suddenly looks like a threat? It's something that a human is able to see, right? But if we can take that reasoning process from multiple different folks, and the reason multiple is important is because humans are extremely biased and expertise bias is the worst kind of bias, which I have learned through the years and I'm guilty of it. So when we put that together in the right way, in an iterative, continuously improving way, then you have the opportunity to be able to make the machine do what Stan just said. You're describing what AI can do, but can you provide us a little bit more information on exactly what types of information are we processing here? And I have two thoughts in mind.
Starting point is 00:24:11 One, we have lots of types of intelligence analysts. We have geo-intelligence analysts who look at imagery. We have signals intelligence analysts who can track signals. And then also, even in open source, you could imagine picking up audio clips or news. What kind of intelligence are we looking at? And I'm thinking about, you mentioned that the challenge with turbulence is there's a lot of variables involved. I'm guessing you're trying to collect on every variable that might be influencing kind of our predictive analysis of where a terrorist or insurgent is going to act next. Yes, absolutely. So it has to be fundamentally all source. So get rid of those silos immediately. And when you do that, the open source actually provides a playground for building systems like this, because you don't have those walls. geolocate that data. Think of any lat long in the world and suddenly you can build a stack of data for that particular location. And now if you do that for as much of the globe as you can
Starting point is 00:25:13 and do that on an ongoing basis with different levels of frequency, recency of the data, and you have to account for things like that and missing data and bad data, disinformation, misinformation, a lot of filters have to go in there. And that's what we mean by aggregation. But if you do that in a way that is systematic, then you will develop a goldmine of data that is geolocated and cuts across all these different ints from geo, sig, all of those various ints, combines it with open source, social data, mainstream media data, all of that. And different levels of machine learning need to be applied to each of those to extract.
Starting point is 00:25:59 This is just data we're talking about, right? From data, you got to get the signals. From signals, you got to get anomalies. From anomalies, you got to get the signatures. And from those signatures, you then start to make assertions about what's likely to happen. So there is this filtering that needs to happen in that process. So usually, innovative approaches to complex problems come from diversity, especially with people with specific expertise in different areas like yourself. And I understand that it can be difficult at times to integrate into different organizations with different organizational cultures. Can you talk to us a little bit about building your team and then building a team that's also going to go ahead
Starting point is 00:26:43 and work within the DoD in solving counterinsurgency and counterterrorism issues. Yeah, that's a great question. It brings tears to my eyes and a smile to my face simultaneously. So I will describe to the best I can, as closely as I can. Here's the thing. What I learned, what we all learned here at Rhombus is that, and Stan alluded to this earlier on, it's a joint training of the minds that one has to really do. That takes time and it takes a lot of iterations. So in order to execute on that philosophy, what one needs to do, and this is something we learned, it may not be the best approach, but this is what we learned, is to keep your team really small and work very hard to do the work that would probably be done by people 10 times more than the team size you have. You're talking about less than half a dozen people at most. In fact, we started with three people on this particular effort. And the iterations I'm referring to were in the form of pilots,
Starting point is 00:27:55 pilot projects with different customers, trying to learn how they think about problems and then to the best we could collaborate on helping them reshape those questions to be able to go after the solutions in a more targeted way. Real opportunity came about when we started our work in Afghanistan because this is where we directly interfaced with the warfighters on the ground and developed an appreciation for the spectrum of challenges that they have to deal with. And what we realized working with them was that how much influence so-called background variables tend to have,
Starting point is 00:28:42 and by that I mean, you know, the influence of other state actors, influence of religion, demographics, political influence, economic influence, social influence, what have you, environmental influence, climate change, so many factors. So when you integrate all of that in the form of a framework that is pointing towards the very questions that Stan was referring to. What is the adversary currently doing and what are they planning to do? Let me ask a question because, Anshu, one of the biggest challenges that I've seen and personally but also watched you interface with as you try to help the warfighter get their arms around using the potential of a new capability, but they're not
Starting point is 00:29:31 artificial intelligence experts. They don't have a sense of where the technology is. And yet we have so many artificial walls that keep potential elements like you away from the warfighter. I don't think they're all evilly constructed, but they have an evil outcome. Can you talk a little bit about that? Sure, Stan. So that's actually a great question and generally an observation, I think. So for us, the opportunity of working with a warfighter was really an opportunity to get an understanding of how they think of these problems. What I was extremely impressed by, to be honest with you, is how close they were to the framework I'm talking about. Literally, they constructed it for us. We mathematized it, sure, and that's what we're good at.
Starting point is 00:30:22 But the overall logic framework was in their minds, in their documents, in their thought processes. And what we were able to do was to quickly and iteratively map that into a mathematical construct that could then be encoded, that could then be put into an artificial intelligence system. My own conclusion is that they understand way more than they're given credit for, I think, or they're recognized for. And the trick is to listen very carefully and spend the amount of time one has to in order to get that understanding from them.
Starting point is 00:30:59 That's great. So a warfighter on the ground has a lot going on. They have a lot of new pieces of technology and information handed to them. They're just trying to get through the day a lot of the time and make sure they get a hot meal. Have you had trouble communicating the relevance, the utility, or even just the basic how do you use this capability at any level, either at the strategic level, at the operational level, or even down to the individual warfighter who needs to integrate this information into their daily fight? So Kyle, yes, short answer. In our case, at least, it goes beyond the constraints you talked about, which is what they are wanting out of all of this. In the end, they're looking for a solution,
Starting point is 00:31:41 they're looking for more convenience, something that makes their lives easier. Now, you convolve that with the situation on the ground right now, you know, the drawdown happening, the churn happening, all of those things happening. And suddenly, you're talking to a set of people for a month, and then they're all gone. And you're refreshing everybody again and starting from scratch yet again. That's the reality. Okay, and I'm not going to sit and cry about that. What I will tell you is the answer is to move at the speed of relevance. In this case, relevance means daily. Okay. They may be not there tomorrow because
Starting point is 00:32:17 they've been reassigned or something like that. So capture everything you can from them, mathematize it, encode it, so then the next set of people who come in have a far easier time. Now they have a software in front of them potentially, right? We work almost around the clock with these folks. So yes, there have been many challenges in that regard due to various other factors
Starting point is 00:32:41 in addition to the typical constraints of people on the ground. But there are ways to solve that problem, I think. Have we had issues in conveying the value to leadership? Yes, they're distracted sometimes. They have many priorities, competing priorities. So the only thing they are interested in knowing is, what did this do for me? Did it have an effect? Can I describe a little bit about technology? Because Anshu really hit something I think is key here. And Kyle, you did as well, because operators are busy. In 2004, in Iraq, I was really trying to wrestle with how we were going to do operations at the speed it was becoming
Starting point is 00:33:22 evident we were going to have to do them. And a Delta Force non-commissioned officer came into the operation center. And I'd known him for a long time. And I said, well, what do you need? And he walked over to a whiteboard and he did this drawing. And it's got a stick-figure operator at one point. It's got an unmanned aerial vehicle above. It's got a compound that's the target, and it's got some stuff. And he's connected them. He says, okay, this guy here has to be able to have downlink from this thing here — points to the UAV — which has got to have full-motion video. We've got to be able to talk here. I've got to be able to send this information. He's drawn all that. And he says, that's what I need to be better. And we kept the picture because it was so correct. And here was the guy who was doing it every night.
Starting point is 00:34:06 And we were able to, not immediately, but pretty quickly start to pull those pieces together. But you needed to get that person who was the expert on what was needed on the ground connected with the people who had the ability to actually produce that kind of stuff. I couldn't do that, but I could create the marriage. And so when I see sometimes we stumble to try to get things, I'm always of a belief that we need to get more Anshus and their teams closer to more operators, listening both ways so that the problem becomes clearer and then the potential solutions become more evident. I have a question now to Stan that's more so of a leadership question.
Starting point is 00:34:52 So if a leader has an all-source analyst, he or she is familiar with their training, their background, their personality, their performance or potential. If a leader doesn't truly understand what is happening between the input and output of this machine, you know, the mechanics in there of the AI or the machine learning, how does a leader trust it enough to act on it when lives are at stake? Yeah, that's a great question. I think a couple answers I'd give. The first is, I remember, you know, I was pretty senior in my career, already in command at JSOC, when finally my ALNO one day said, you know, you have become a competent consumer of intelligence. And it was interesting because, of course, he was implying that I had not been before, but he was right. And what I learned to do was when somebody brought me
Starting point is 00:35:53 intelligence, they'd say, okay, it's going to rain tomorrow. I would start, okay, why do you think that? And if they'd go, because I'm a meteorologist, I'd go, well, that's great. Give me some data. Take me through your logic trail. Tell me what data you used, how you thought about it. And it's interesting. You had to do that a bunch of times for your key people. What you had to do was develop over time an understanding and a confidence in their ability — where they were getting their information, how they were vetting it, how they processed it, et cetera. And then you had to do that with entire systems, organizations like the National Security Agency or whatever. I think we are going to have to create leaders, not just intelligence analysts, but leaders who have enough understanding of AI and machine
Starting point is 00:36:46 learning and what it's doing so that they're not waiting at a black box, waiting for information to come out like cards from a fortune teller and then do it. They're going to have to go, okay, this is the information being connected. This is what's happening. So when it comes out, This is the information being connected. This is what's happening. So when it comes out, it makes some sense to them. And that's going to mean we're going to have to train leaders, consumers. And I see this.
Starting point is 00:37:14 We work with a lot of civilian companies right now. And I see a lot of data out there. And then I see it taken into management meetings. But it has been changed from data to a PowerPoint slide. And then it's put in the PowerPoint meeting. It's really not data they're considering. It's some piece of information, and they are going, okay, look, we're making a data-based decision. You know, yeah, sorta. And I think commanders are going to have to get much better at that than we currently are. Yeah, I think the broader question is, where is that line between machines
Starting point is 00:38:07 and humans, and particularly in the realm of counterinsurgency, which a lot of people argue is kind of the human domain of warfare, where you need to have connection with people on the ground, understand the people, understand the culture. And this could really go to either of you, but with the integration of artificial intelligence, are there some red lines on where the machine is going to do some work and where humans need to do work? So just on the previous point of trusting, I think the answer is don't trust it. Verify it, like Stan said. And it also begs another question. Why do you trust analyses now? Another question, why do you trust analyses now?
Starting point is 00:38:45 Just because a human did it? Highly imperfect, highly biased, incentivized for things that may or may not be aligned with what you care about. People who are constrained, people who are operating under pressure, these are the folks who have generated the analysis. And they've done the best job possible with imperfect information and imperfect knowledge. And yet, due to various reasons that have to do with our own cultural upbringing, as well as our own familiarity, as well as having the ability to maybe potentially blame somebody if things go south, you know, a notion of trust emerges out of that. Okay. But it's only an illusion. At the end of Okay. But it's only an illusion. At the end of the day, it's only an illusion.
Starting point is 00:39:30 We just accept it and then say, okay, I'll take the risk here. I would argue that a machine can do a very good job of explaining how it arrived at a particular result. And within Guardian, we call that data DNA. It's an explainable AI. It's completely transparent. It's not a black box. Any result you see, why is it going to, you know, why do you think it's going to rain tomorrow? Well, it'll show you all the data it ever used, as well as all the analysis it used. We call that analytic DNA. And this is really important, of course, because why would anybody take any machine's results for granted?
Starting point is 00:40:06 There's no reason. Also, what we didn't talk about is the notion of uncertainty. How do you quantify uncertainty with various results that you're talking about? If a machine can tell you the level of uncertainty and why it thinks it's uncertain, then you can say that, yeah, logically, this kind of makes sense and proceed. Now to the question of where do you draw the line? In my opinion, when the cost of a decision is extremely high, say measured in human lives, then automating that decision process is not a good idea. The cost is too high. When the cost is very low, for example, popping an ad on a website
Starting point is 00:40:48 because you just happen to be there and some company — and I'm not going to name it — went through your emails and saw you talking about, I don't know, vacations, then, yeah, that does make sense. Sure, automate that. So it's that spectrum.
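The spectrum Anshu describes — automate the cheap decisions, keep a human in the loop for the expensive ones — amounts to a simple routing gate. A minimal sketch, with the thresholds and category names invented for illustration:

```python
def route_decision(decision_cost: float, uncertainty: float,
                   cost_threshold: float = 0.7) -> str:
    """Route a model recommendation based on the cost of being wrong.

    decision_cost: normalized 0 (e.g. ad placement) to 1 (lives at stake).
    uncertainty:   the model's own quantified uncertainty, 0 to 1.
    """
    if decision_cost >= cost_threshold:
        # High stakes: the machine only recommends; a human decides.
        return "human-in-the-loop"
    if uncertainty > 0.5:
        # Cheap decision, but the model is unsure: flag it for review.
        return "flag-for-review"
    # Cheap and confident: safe to automate (e.g. serving an ad).
    return "automate"

print(route_decision(decision_cost=0.95, uncertainty=0.1))  # kinetic action
print(route_decision(decision_cost=0.05, uncertainty=0.2))  # ad placement
```

The design choice is that decision cost dominates: no level of model confidence moves a high-cost decision out of the human-in-the-loop bucket, which matches the red line both guests draw for kinetic action.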
Starting point is 00:41:04 And the human in the loop is extremely vital when the cost of the decision is very high. So it's one thing to say, hey, you might want to focus your collection on this village or on this region or whatever. But it's another level where you say, hey, this is an autonomous kinetic device that we have that will act on information independently. If I understand correctly, you're saying that we can point people in the right direction, but we're not moving toward a situation where AI platforms are going to make autonomous decisions on when to take kinetic action. That is precisely right. And that's exactly the value that we bring here, especially in counterinsurgency, where
Starting point is 00:41:57 we are able to say, here's something — you know, something bad is going to happen here in location X, and we are 90% confident that'll happen. Here's why. And, oh, by the way, here are the locations where you should have a closer look. So, Anshu, a quick follow-up to your last point that, you know, AI could point us in the right direction — the right village, the right province, where to focus efforts and resources. And the first thing that comes to my mind is that it has some implications
Starting point is 00:42:30 for counterinsurgency interventions in the footprint itself, the number of forces that are going to be applied to a problem set. So Stan, to you, I'm interested in your thoughts on the implications in terms of counterinsurgency intervention moving forward. If we have this tool and this power, or this understanding — better understanding — of the problem set itself, do you foresee a COIN footprint getting smaller and smaller? Yeah, Nick, and that's sort of the challenge. On the one hand, a footprint can get smaller because you can use greater efficiency in collection, greater surgical ability to put things where they need to be, but it gets back to proximity at a certain point. If part of your COIN strategy is just to make people feel safer — and that is part of what the population needs — they have to see people, the cop on the beat. They have to see things and they have to feel that proximity, that cultural proximity. So the answer is there'll always be a tension there. I kind of want to circle around
Starting point is 00:43:37 because Anshu said something that I think is really interesting. And we start to talk about hypervelocity missiles or very rapid activities. And I'll use a terrorist event here, but if we had reporting that was collected from a variety of sources and gave us a report based upon artificial intelligence that says there's a very, very, very high probability that this particular group is going to use this — let's call it a weapon of mass destruction — in this city. Now we've got all of this collection. And the question is, because of the time and the nature of the thing: do I shoot down this airplane that's en route to the airport, or do I not? And we've got a preponderance of data that tells us one thing, but we don't have 100%, because as Anshu mentioned so well, there's uncertainty. Maybe they don't, or maybe their
Starting point is 00:44:39 intentions changed in route. When we have this time and this inability to close the gap on uncertainty, what are we going to do? You're talking about, you know, there's a terrorist attack that might happen. You have predictive data saying this terrorist attack is going to happen and you essentially have a limited time to take action. So I guess that's a good question, aren't you? A very easy question for you to answer is, should we be looking at cutting the humans out of the decision loop in a time-sensitive situation? And how would AI even know that that was happening, that that was time-sensitive? That's a great question. So because it's so easy, I'll make my answer very long.
Starting point is 00:45:32 very long. So in my experience, I think it does not make sense yet to automate any decision when it comes to something, a situation that critical. If I were to expand on this, let me just circle back on the notion of uncertainty and get into how are we building these things. We are building these things by learning from outcomes. One of the things that one has to ask is if there is a particular prediction and the uncertainty is high, why is the uncertainty high? Mostly it's because we don't know enough. So then that gets back into explaining the result of the AI and then finding out what kind of data would reduce the level of uncertainty. You can never eliminate uncertainty, but you can minimize it and do your best for it. So today makes no sense to automate it.
Starting point is 00:46:23 Five years from now, having been through a sufficient number of these sorts of events — God forbid, of course, but if this were to be the case — then I think one can gradually, asymptotically, head towards that sort of a situation where you are able to know more and more accurately how much time you have, how much decision space you have, and accordingly then calibrate your decision-making process. Yeah, it seems to me that another challenge is if you develop a system that is machine-based, then there's the temptation for people to try to spoof it or deceive it to get a reaction — and they can do that with people as well. And of course, terrorists particularly want you to overreact. And so if they could get technical systems to overreact more easily than human systems, of course, that's just a vulnerability. So we have time for one more question. And we've talked a bit about potential implications for practitioners and counterinsurgency interventions.
Starting point is 00:47:30 What are some of the potential implications or key takeaways for policymakers? Yeah, I'll start because Anshu's going to have better than mine. First is one of my favorite movies ever is the old early 1960s movies, Fail Safe. And that was where, of course, confidence was put in a system that would give an automated response to a Soviet attack. And the automation didn't work and it ended up in tragedy. That was early in the age. And so I think that we've always got to be very, very cautious. At the same time, just because we have to be cautious doesn't mean we don't need to embrace it and leverage it. It means we need to learn it. We need to understand it. We need to understand what it will do,
Starting point is 00:48:18 what it will not do. We've got to make it do everything it will do, but don't ask it to do what it won't do. And that means that the real responsibilities on us and not just that the technical experts like Anshu, the really smart people, it's going to be the people who are going to be at where technology policy and people at the friction points, you know, people like operators in the field, commanders, different things like that, that's where the responsibility is going to lie because you're not going to be able to just throw the responsibility on the expert and said, should I do that? They're not in that position. Operators will be. Yeah, I completely agree with that, Stan. The only thing I would add to that is from a policy perspective, I think it's important to recognize that for capabilities that are rapidly evolving like AI,
Starting point is 00:49:13 the way we do acquisition has to be fundamentally different. And it has to be a lot more experimentation-based — performance-based, naturally. I think DIU is doing some amazing work. The JAIC, the Joint Artificial Intelligence Center, is doing some amazing work in this space. We can all do better, of course, but things are in the right direction. But you can't sit and write requirements for this thing.
Starting point is 00:49:38 You have to try it out. And so that has to be enforced more at scale. It'll help companies like us and the wider ecosystem come together in service of this great nation. Dr. Anshu Roy and General Stan McChrystal, this was a fascinating conversation. Thank you for coming on the Irregular Warfare podcast. My pleasure. Thanks, Nick. Thanks, Kyle. Thank you, Nick. And thank you, Kyle. Really, it's an honor. Thanks for listening to episode 17 of the Irregular Warfare podcast. We release a new episode every two weeks. In our next episode, Shawna and Andy talk with Dr. Thomas Rid, author of the book Active Measures, and Lieutenant General Lori Reynolds about information and disinformation operations. Following that, Daphne and Nick will
Starting point is 00:50:30 discuss Plan Columbia, the Columbia peace process, and interagency collaboration with Ambassador Kevin Whitaker. Please be sure to subscribe to the Irregular Warfare podcast so you don't miss an episode. You can also follow and engage with us on Twitter, Facebook, or LinkedIn. One last note, what you hear in this episode are the views of the participants and do not represent those of West Point or any other agency of the United States government. Thanks again, and we'll see you next time.
