a16z Podcast - How AI Is Changing Warfare with Brian Schimpf, CEO of Anduril

Episode Date: January 28, 2025

How is AI reshaping modern warfare? Speaking with a16z Growth General Partner David George, Anduril cofounder and CEO Brian Schimpf discusses how AI helps humans make better strategic decisions by sorting through the enormous amount of data collected from modern battlefields. Schimpf also discusses navigating the US government's complex procurement processes, using commercial technologies to kickstart their own product development, and the growing opportunities for startups in defense. Throughout, Brian offers a deep dive into the intersection of technology, geopolitics, and the future of defense.

This episode is part of our AI Revolution series, where we explore how industry leaders are leveraging generative AI to steer innovation and navigate the next major platform shift. Discover more insights and content from the AI Revolution series at a16z.com/AIRevolution.

Resources:
Find Brian on X: https://x.com/schimpfbrian
Find David on X: https://x.com/davidgeorge83

Stay Updated:
Let us know what you think: https://ratethispodcast.com/a16z
Find a16z on Twitter: https://twitter.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Subscribe on your favorite podcast app: https://a16z.simplecast.com/
Follow our host: https://twitter.com/stephsmithio

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.

Transcript
Starting point is 00:00:00 The American defense industry is the largest in the world at nearly $1 trillion, accounting for about 40% of military spending around the world, and also arguably impacting every person on Earth. Now, this sector has also shifted shape throughout the decades, including the consolidation of primes, shrinking from over 50 to fewer than 10 large primes receiving a majority of defense dollars. Those are companies like Lockheed, Raytheon, or Boeing. But there are some new companies in town, trying to disrupt how defense is done through new hardware and software. One of those is Anduril, a company that just announced Arsenal One, a billion-dollar factory in Columbus, Ohio, expected to create 4,000 jobs in the region. Now, in today's episode, Anduril co-founder and CEO Brian Schimpf sits down with a16z Growth General Partner David George. Together, they discuss how Anduril got its first product off the ground, competing with some of the largest companies in the world
Starting point is 00:01:03 and navigating the U.S. government's complex procurement processes. They also discuss how AI changes the modern battlefield. Being able to pull out that signal from this overwhelming amount of information that exists. Plus, what most people get wrong about these technologies. It is unethical to not apply these technologies to these problems. And how we shape up in the technology competition. They're running hundreds of tests a year of hypersonic weapons.
Starting point is 00:01:31 Right. The U.S. is running like four. Now, if you did like this episode, it comes straight from our AI Revolution series. So if you missed previous episodes of that series, with guests like AMD CEO Lisa Su, Anthropic co-founder Dario Amodei, or the founders of companies like Databricks, Waymo, Figma, and more, head on over to a16z.com slash AI Revolution. All right, let's get started. As a reminder, the content here is for informational purposes only, should not be taken as legal, business, tax, or investment advice, or be used to evaluate any investment or security, and is not directed at any investors or potential investors in any a16z fund.
Starting point is 00:02:11 Please note that a16z and its affiliates may also maintain investments in the companies discussed in this podcast. For more details, including a link to our investments, please see a16z.com slash disclosures. Let's jump right in. What is Anduril? Tell us what you do. All right. So we were founded in 2017. We're about seven years in.
Starting point is 00:02:42 The basic idea was we thought there was a better way to make defense technology. So number one, the tech for the next 20 or 30 years was going to be primarily, how do you just have more cheap autonomous systems on the battlefield, just more sensors, just more information flowing in. That seemed like it had to be true. So we invested in the core software platform we call Lattice that enables us to make sense of all these things. We have built a variety of autonomous products that we've fielded over the last seven years at just an outrageous pace. And we're really working on all aspects of national security and defense. And how did you guys get onto national defense as the place to go spend your time?
Starting point is 00:03:20 Obviously, I know your background, but maybe you can share that. Yeah. So I was at Palantir for about 10 years. I'd been working on a variety of government programs. And then several of the co-founders, so Trae Stephens, was also at Palantir; Matt Grimm, our COO, was at Palantir. We're all really good friends. And we'd been talking about doing this idea of there needs to be a next-generation defense company. And then Trae and Palmer met through the VC world, and Palmer was just getting out of Oculus, and he wanted to do the same thing. And so we decided to kick this off together. But for me, working in defense, it was just obvious the degree to which there was a problem. You work in this space, the tech is old. It is not moving fast. It is very lethargic. There are relatively
Starting point is 00:03:59 few competitors at this point. It just felt very right to do something different. And it's the sort of thing that once you get into it, the people who are actually serving just have this patriotic motivation to solve the problem. It's just a very, very motivating problem to work on. How did you land on the first product? So the first product we worked on was what we call Sentries, for border security. And this was a Palmer idea. He believed that tech could actually solve this. So we have these automated cameras with radars. We can monitor the border from miles away with these cameras. And he was like, this is something we can solve super fast with technology. And it really kind of hit what has ended up being a very good pattern for us, which is find an urgent problem that actually
Starting point is 00:04:44 has a real tech solution that we can apply the cutting-edge technologies to. So early on, it was 2017. Computer vision was just starting to work. There weren't even really embedded GPUs yet. We were literally taking desktop GPUs and liquid cooling them to get these things to work in a hot desert, under solar power. But we were able to go and get a prototype up in about three months, and then move into a pilot in about six months, and then full scale in about two and a half years. So a really, really quick timeline. But it fit this problem set: we had a technical insight into how you could do this better, and there was urgency to solve the problem. They actually wanted to make a dent in this.
Starting point is 00:05:20 All right, I'm going to ask you a lot more about that stuff. But one of the things that people say to me all the time, and you hear it in speeches and all this stuff, like AI is going to change the nature of warfare. Yeah. On the one hand, the major breakthrough that we just had, the way everyone interacts with it is like a chatbot and an LLM. It's pretty cool. It's amazing. It's awesome. I use it for everything.
Starting point is 00:05:39 But what are the implications of this new wave of AI, generative AI, on modern warfare, in a physical sense? Yeah. And on the software side. Let's talk about that. So when I think about where AI is going to drive the most value for warfare, it is dealing with the scale problem, which is really the amount of information, the number of sensors, the sheer volume of systems that are going to be fielded, and it's going to go through the roof. So this is like Lattice.
Starting point is 00:06:05 Maybe start even there. Everything has a sensor. That's right. So what do people do in the DoD? There's a lot of things they do, but what's the primary warfighting function? They are trying to find where the adversaries are. Yes. They need to then deploy effects against them.
Starting point is 00:06:21 That can be a strike, that can be deterring them by a show of force. That can be jamming and non-kinetic things. And they've got to then assess, did that actually work, right? Find them. You got to engage and you've got to assess, right? It's like pretty straightforward. That is the primary thing that the military does. And so, okay, what do you need to do that?
Starting point is 00:06:40 You need a ton of sensors. You need a ton of information on what is going on with an adversary who's constantly trying to hide from you and deceive you. Yeah. So just huge amounts of information to make that problem as intractable as possible for them to be able to hide or when they are deceiving, you can figure it out. Yeah, yeah, yeah, yeah. So it's hard to deceive in every single phenomenology of sensing.
Starting point is 00:06:58 This technology exists, right? The sensors exist. The sensors all exist. The sensors are deployed, right? They're going to get better and they're going to be cheaper and you're going to be able to do more of them. But a lot of the limit of why we can't do more is what the hell are you going to do with the data?
Starting point is 00:07:10 Processing capabilities, yeah. Processing, but also just operationally. So, okay, now say I had a perfect AI system that could tell me where every ship, aircraft, and soldier was in the world. What are you going to do with that? Now I know everything. That is overwhelming, right? And so then being able to sift through that information to, well, okay, they're maneuvering here. What does that imply? Is this an aggressive action? Is it outside their norms? Is this different than we've seen in the past? Being able to pull out that signal from just this overwhelming amount of information that exists.
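(To make the "pulling the signal out" idea concrete, here is a minimal, hypothetical sketch of the kind of check being described: compare a fused track's current behavior against its historical norms and surface departures for a human analyst. The names, fields, and thresholds are illustrative assumptions, not Anduril's Lattice API.)

```python
# Hypothetical illustration only -- not Anduril's Lattice API.
# Flag a fused sensor track whose observed speed departs from its historical norm.
from dataclasses import dataclass
from statistics import mean, stdev


@dataclass
class Track:
    track_id: str
    speed_knots: float  # current fused speed estimate for this contact


def is_anomalous(track: Track, historical_speeds: list[float], z_threshold: float = 3.0) -> bool:
    """Return True if the track's current speed is far outside its historical behavior."""
    if len(historical_speeds) < 2:
        return False  # not enough history to judge against
    mu, sigma = mean(historical_speeds), stdev(historical_speeds)
    if sigma == 0:
        return track.speed_knots != mu
    return abs(track.speed_knots - mu) / sigma > z_threshold


# A contact that normally loiters around 12 knots suddenly sprints at 30:
history = [11.5, 12.0, 12.4, 11.8, 12.1]
print(is_anomalous(Track("ship-042", 30.0), history))  # True -> surface to a human analyst
```

(A real system would fuse many phenomenologies of sensing, but the shape of the problem, baseline versus observed behavior, is the same.)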
Starting point is 00:07:40 And then on the other side, you've got to act, right? So now I've got to actually be able to carry out these missions. Yeah. So this is where the autonomy side really comes in, which is, okay, I want to send fighter pilots out. So the way they do, like, a Predator drone today is like a guy with a joystick. Yeah. No, we've all seen that, you know, Ukraine, Russia.
Starting point is 00:07:56 Yep, exactly. It's all, like, manually piloted. But that doesn't really scale, and it presents a lot of limitations around communications jamming, all these things. So I want to be able to task a team of drones to go out and say, hey, go in this area and find any ships and tell me where they are. I just want it to be that simple. And they just need to figure out their own route. And if I lose some of them, they rebalance. They just go out and handle it. I'm running target recognition. They can pop back whatever's relevant. That is where I think the autonomy side really comes in, which is I can just drive scale into the number of systems I can operate in the environment.
Starting point is 00:08:29 The promise of AI in a lot of ways in the long run with this is just the ability to scale the types of operations I'm doing, the amount of information I have, and if done very well, it will put humans into a place of sort of better decision-making, right? Instead of being, like, inundated by a volume of data and then all of our capacity goes to these
Starting point is 00:08:45 mechanical tasks. We can have humans with much better context, much better understanding, historical understanding of what this means, what the implications of different choices are. Yeah. Those are all things that AI can enable over time. Ideally, better decision-making. I think it's wildly better decision-making. We're working with both limited information and imperfect judgment. That's right, I guess, right? Yeah. And so the more you can have AI augmentation for these things and synthesis and, like, clarity, that is where the promise of this is. And so the U.S. posture on this is very much, we want to have humans accountable for what happens in war. Yes.
Starting point is 00:09:19 That is how it should be. Right? The military commander that employs a weapon is accountable for the impact of those weapons. Yeah. That is correct. I think that is the system we should have. And so then nobody is talking about having full-blown AI that is going to decide who lives and dies. That is a crazy version that nobody wants to have.
Starting point is 00:09:37 Well, I think it's also far-fetched in the sense that it presumes some sort of objective function that isn't driven by us. This is my conversation with everybody when they're kind of like, oh my God, what about when the AI goes Terminator on us? And I'm like, it's a tool for humans; it doesn't have an objective function.
Starting point is 00:09:53 That's a leap that is not on the scientific roadmap today. So why would that be the case in warfare? That's right. And so I think the reality for these things is that it's going to be human augmentation, that it is going to be enabling humans to operate at a much larger scale with much higher precision on these things. And that is the opportunity with it.
Starting point is 00:10:05 with much higher precision on these things. And that is the opportunity with it. And so to me, it is unethical to not apply these technologies to these problems. And I think our view has always been, we're the best technologists on these problems where we can get the best technologists to it, giving the best tools on these absolutely critical decisions that are extremely material. That seems like probably a good thing. And engaging in the question of how can you use this technology
Starting point is 00:10:31 responsibly and ethically is incredibly important. Yeah. Is it more humane to have a fighter pilot in harm's way, or to have an autonomous system piloting in a conflict? That's right. And by the way, I have friends who are fighter pilots. I love fighter pilots. Yeah. But, you know, the technology has advanced significantly, and you can make the argument that it is more humane not to put them in the line of fire.
Starting point is 00:10:51 That's right. Yeah. We're not going to want to put U.S. troops at risk. Yes. And I think there's the deterrence factor of the U.S. saying, I have this capability, and I've reduced my political cost of engaging on these things. It's actually a pretty good deterrent as well.
Starting point is 00:11:06 Yeah. I'm not putting U.S. troops at risk. Or I can give this to allies. Yes. And they can defend themselves. Yep. And so it keeps us out of the fight, yeah, exactly. Keep our troops out of the fight.
Starting point is 00:11:14 Keep the troops out of the fight. And it changes the calculus quite a bit. And so I think that actually, in a lot of ways, if done well, has a significant kind of stabilizing impact and deterrent impact. It just is harder to use force to get your political ends. Yes, exactly. And I think that can be a very positive thing. Yeah, exactly.
Starting point is 00:11:32 Yeah, I keep coming back to deterrence, and we need to find a way to create a sense of urgency for the sake of deterrence, not for the sake of going to war. That's right. And so it feels like that's universally, like people we talk to, I feel like that's universally known. And hopefully we can make some progress. Yeah. I think people largely agree. Look, Vladimir Putin was very convincing on this.
Starting point is 00:11:52 It turns out invading Ukraine was probably the single biggest shift I've seen in terms of people recognizing that, look, there are still bad actors in the world. They will use force to get their political will if they think it will work. Yeah. If the cost is worth it, they're going to do it. And I don't think there's any reason to believe that's going to stop. It's been true for tens of thousands of years. Do you think the future of warfare, so you said AI as an augmentation for humans? Yep. How fully automated do you think a conflict can become, say, in the next 10 years? Look, I think
Starting point is 00:12:22 the mechanics of, okay, there's this airfield and you want to go surveil it and take it, you can do some strike, you're going to do some surveillance, you're going to do all these things. There will be a large degree of automation in that, right? Like, I can just say, hey, send this team of drones out in these waves to go conduct this operation, find things that pop up that are a threat, pop it up to the human to say engage or not, and it goes. Yeah, yeah. Like, it can move at a much faster pace. I think a lot of the things that were starting to happen in Ukraine, a lot of the great work Palantir did was on things like this, where it was like the targeting process of going from satellite imagery through to, hey, this looks like a tank,
Starting point is 00:13:01 through to an approval of, is this a legitimate military target or not? It was streamlined and compressed. Much faster work. Much faster. So I think those things will happen very, very quickly, like very, very quickly. Then, okay, now it turns into a matter of policy and degree and scope. That is a thing that I think we're just going to have to figure out as we work through it with the military. So then what we think about from the technology side is, okay, I don't want to design anything that precludes more advanced forms of this over time, if it's architected correctly.
Starting point is 00:13:30 But the crawl phase is just get a lot of the basics automated, very mechanical things, make it very predictable, don't have any surprises. And then you can add more sophistication as you build trust and the AI advances. These things get more sophisticated over time. And one of the best examples is on the defensive side, where it's ripe for AI. So we do a lot of work on our counter-drone systems. This is one of the areas we're partnering with OpenAI on. And it's looking at this question of, if you have multiple drones flying at you and you have minutes to respond before the strike happens on you,
Starting point is 00:14:02 how do you make an optimal decision? When you are panicked, you are nervous, and your life is at risk, it's very hard. Is that like a person manually sitting there making those decisions today? Yeah. It's often three people, because they have a separate radar from a camera, separate from the guy pulling the trigger on the weapon systems. Oh, man. And so the coordination cost can be significant.
Starting point is 00:14:22 So you can automate a lot of this. And then the other problem with this is, as we've seen in Ukraine, every single unit, every single soldier is now at risk of drones. Yes. So this has to proliferate out from being a specialty that you do in an operations center to every vehicle. In the field. In the field, everyone has to have this capability. You need the ability to have these systems just process all that sensor data automatically, fuse it together, tell you viable options for countering this, and tell you what's a threat and what's not a threat. These are the types of things you need to be able to do: respond with intelligent suggestions, and then have the system just automatically carry it out from there.
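(A minimal, hypothetical sketch of the counter-drone workflow just described: fuse detections from separate sensors into per-target estimates, rank them by time to impact, and surface a recommendation for the human to approve. The data model and numbers are made up for illustration; this is not Anduril's implementation.)

```python
# Hypothetical illustration only: fuse detections from separate sensors into
# per-target threats, rank by time to impact, and suggest a response for human approval.
from dataclasses import dataclass


@dataclass
class Detection:
    sensor: str               # e.g. "radar" or "camera"
    target_id: str
    range_m: float
    closing_speed_mps: float


def rank_threats(detections: list[Detection]) -> list[tuple[str, float]]:
    """Crudely fuse detections by target and sort by estimated seconds to impact."""
    fused: dict[str, Detection] = {}
    for d in detections:
        # Keep the closest report per target as the fused estimate.
        if d.target_id not in fused or d.range_m < fused[d.target_id].range_m:
            fused[d.target_id] = d

    def time_to_impact(d: Detection) -> float:
        return d.range_m / max(d.closing_speed_mps, 0.1)

    return sorted(((t, time_to_impact(d)) for t, d in fused.items()), key=lambda pair: pair[1])


reports = [
    Detection("radar", "uas-7", 4000.0, 45.0),
    Detection("camera", "uas-7", 3900.0, 45.0),
    Detection("radar", "uas-9", 9000.0, 30.0),
]
for target, seconds in rank_threats(reports):
    print(f"{target}: ~{seconds:.0f}s to impact -> recommend engage? (human decides)")
```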
Starting point is 00:14:59 These are the types of problems we're working on. And the defensive side is just, you need it, right? There's no choice, because the timelines are too short and the urgency is too high. Yeah. And it's a very straightforward area to understand where technology can really improve the problem. Yeah, it's like the highest-stakes version of the decisioning that self-driving cars are doing today, but with way more sensor information. Yeah.
Starting point is 00:15:26 Yeah. It's not a road. Yes. Yes. With an adversary who's constantly trying to fool you, deceive you, and yes, it's very, very hard. So that's one of the big parts of the partnership with OpenAI. Yeah, yeah. So they've been great. And I think Sam especially has been very clear that he supports our warfighters and he cares about having the best minds in AI working on national security. And who better to work through these hard problems? Yeah. And so I was just incredibly proud of them for coming out in favor of this and saying they're going to work on this. They're going to do it responsibly. They're going to do it ethically. But this is an important problem that the best people should be working on. The defense industry is notoriously difficult for startups to navigate.
Starting point is 00:16:08 So how did you guys actually get traction in the first place? And do you think that's going to change in the future? Do you think it will continue to be hard? Do you think the Primes will continue to have a stranglehold? I'd love your take on that. It is very hard. And I think we built a lot of the right technology, I think the right business model of investing in things that we believe need to exist.
Starting point is 00:16:30 I think we're picking a lot of the right problems to go after, but probably more than anything, I think we understood the nature of what it took to sell, right? And the congressional relationships, the Pentagon relationships, the military relationships, like all of this that you need to be able to say, hey, we have the right tech, you can trust us, we can scale, we can actually solve these problems for you, proving that it works, and then, like, catalyzing all of these really complex processes around it. I think the other part that we've done quite well is we're just finding ways to find those early adopters and we understand those playbooks. Who's going to move quick? How do you just build that momentum and advocacy in the government to make this
Starting point is 00:17:08 go? Look, it's like more bureaucratic in certain ways. Is it much worse than selling to a bank or an oil and gas company? It's, I don't know, maybe 30% worse, but probably not 5x worse. Yeah. And I think the reality is enterprise sales are actually very hard in any context, especially the ones with long sales cycles and massive commitments. That's right. These are large capital investments customers are making. That is a slow sales cycle. That is how it works. And so I think there's, like, a lot of complaining and frustration. It's, okay, well, also being bad at business means you're bad at business.
Starting point is 00:17:38 If you don't understand your customer, you're going to lose. That's how it works. So do I think the government needs to be a better buyer of these things? Do I think they need to, like, take better strategies that'll get them more of what they want? Absolutely. They're taking observably bad strategies to get to the outcome they actually want. Do I think it's necessary for that to change for us to be successful? Not really.
Starting point is 00:17:58 We're just going to play the game that they present. Okay, I want to talk about the observably bad strategies. What are the observably bad strategies? And then what are the good ones? And maybe also wrap it into this idea of, how do you actually convince the government that your ideas are the right ideas? So, say, should you go spend money building a whole new generation of F-35s with manned pilots or a whole new generation of aircraft carriers, or should you do something different?
Starting point is 00:18:26 and how do you actually get your points across to them? Okay, so there's sort of, how do they contract and buy, and what's been going wrong there? And then there's, what's the right composition: even if you could buy perfectly well, what should you be buying? What should you buy? Yeah, two different questions. And so how are they buying poorly?
Starting point is 00:18:42 So the typical government contracts are done in what's called cost-plus-fixed-fee. And this actually came out of World War II, when we were retooling industry to work on national security problems. We were just like, we're going to cover all your costs and we'll give you a fixed profit percentage on top. And so the incentives here are sort of obvious. If it's more expensive, you get more profit. If it is less reliable, you get more profit, right?
Starting point is 00:19:02 The longer it takes, yeah, the longer it takes, the less reliable it is, the more complicated it is. There's no incentive in there to actually drive down costs. And you see this play out, right? It's like the companies have gotten so used to this. You look at even something like Starliner, where, you know, I think SpaceX had a third the amount of money that Boeing was given to make Starliner work. And SpaceX did it on time, probably faster than they even predicted, did it probably incredibly profitably, and it worked.
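(A tiny worked example of the incentive math being described, with made-up numbers. Under the cost-plus structure Schimpf outlines, where the fee is a percentage on top of costs, an overrun increases profit; under a firm fixed price, the overrun comes out of the contractor's margin.)

```python
# Made-up numbers, just to illustrate the incentive math described above.
def cost_plus_profit(actual_cost: float, fee_rate: float = 0.10) -> float:
    # Fee is a percentage on top of whatever the program ends up costing,
    # so a cost overrun increases the contractor's profit.
    return actual_cost * fee_rate


def fixed_price_profit(agreed_price: float, actual_cost: float) -> float:
    # Under a firm fixed price, every dollar of overrun comes out of margin.
    return agreed_price - actual_cost


print(cost_plus_profit(1_000_000_000))                   # $100M profit at a $1B cost
print(cost_plus_profit(2_000_000_000))                   # $200M profit after a 2x overrun
print(fixed_price_profit(1_200_000_000, 1_000_000_000))  # $200M profit if delivered on budget
print(fixed_price_profit(1_200_000_000, 1_500_000_000))  # -$300M if costs balloon
```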
Starting point is 00:19:30 And so I think these incentives that don't hold you accountable are actually bad for your company. It just makes you a worse company. But do people in the government realize that it's bad for the country? I think they are frustrated. I think they understand that this is not really working. So you look at like F-35 as an example of one of these programs. It took 25 years to get it from initial concept to fielding.
Starting point is 00:19:53 There's this awesome chart which, like, shows how long it takes to get commercial aircraft or autos from kickoff to fielding. And it's been like flat to slightly better for all of those things, like on the order of two to three years. The military aircraft side just went literally straight up. Like, these things are taking longer and longer and longer. There's an amazing quote, this is in the 90s, this guy made this quote: if you extrapolate this out, by like 2046 the U.S. government will be able to afford one airplane, which the Air Force and Navy share and the Marine Corps gets every other day of the week.
Starting point is 00:20:25 That better be a good airplane. Yeah, these things are just crazy. And so I think they recognize that this is not working, right? This is broken. Now, the other part of this is they haven't had a lot of alternatives. So you have a relatively small cartel of these companies who sort of all say we won't do fixed price programs anymore. They won't do things on a fixed cost basis. So, okay, if you're the government, you're a buyer, what are you going to do?
Starting point is 00:20:46 Yeah, of course. You don't have a lot of choice here. And there's been a lot of problems with trying to get this model right. Now, in terms of things that can work a lot better, I think SpaceX really proved this, where they literally built a reusable rocket that you catch with chopsticks commercially. Like, that is...
Starting point is 00:21:04 I think we can solve these things, guys. What thing can't we build? I think we can build an airplane. I think we can build it. So there's not really a question that this is some magical thing that only... like, that these are the only people who can do it anymore.
Starting point is 00:21:18 And you guys with autonomous... Exactly. Yeah. Yeah, like, it's proven now that the future is here. Yeah, yeah, yeah, exactly. And so I think the alternatives are there now. And then on models that can work a lot better, it's like, one of the crazier examples is, a new missile takes about 12 years to go from concept through to fielding, like 12 years. That's insane.
Starting point is 00:21:36 It's insane. And so, okay, if you're in that world. But how fast is the technology evolving? Oh, like, this is, like, 12 years from now, like, what we'll be able to do? Right, exactly. And then we'll still be on the previous system. No, there's even crazier examples, like the Columbia-class nuclear submarine is going into service in 2035, and its expected lifetime is out to 2085.
Starting point is 00:21:54 So how good were we in 1960 at guessing where we'd be today? It's, like, not even clear. We had the technology to go to the moon. Yeah, exactly. It was pretty good. It was like the computing power of a phone. Yeah.
Starting point is 00:22:06 And so these timelines get longer and longer, and it's just this death spiral with these things. Contrast that cycle of development with China. Do they take 12 years? How does their tech stack up to ours? The single best stat for this is they're running hundreds of tests a year of hypersonic weapons. Right. The U.S. is running like four.
Starting point is 00:22:24 Right. Anyone who's worked in technology understands the compounding value of iterating on these things. And it is just so undervalued. Why is that the case? Look, in the U.S., all these tests are very expensive, very complicated. There's so much buildup because every test has to go well, because we do relatively few tests. So then it increases the risk and the duration that you prep for these tests and increases the cost. And you're just in this vicious negative cycle. Like, anyone who's worked in software
Starting point is 00:22:49 Like anyone who's worked in like software. understands this, like the old-school way of releasing software. If you did a yearly release, you try to shove everything you can into that. The risk goes through the roof. Quality is a disaster. Going fast has an insane quality of its own and just how quickly you can learn and how much you can actually reduce costs on these things. And so they're just much more willing to like test and iterate in a way that the U.S. is not right now. And so I think that is like long term. The biggest thing I worry about for the U.S. is just pace, the pace of innovation, pace of on these things, it probably is the single biggest determining factor of how successful
Starting point is 00:23:26 you're going to be over a 20- to 30-year period. How do we create a sense of urgency? Yeah, like, you look at that retooling: we had a two-year period of Lend-Lease, and the amount of GDP that was spent on Lend-Lease at the time was through the roof. And we weren't at war then, right? Yeah, yeah, yeah. So we had a two-year head start to recondition U.S. industry around this before we even entered into the conflict.
Starting point is 00:23:47 And that's about how long it took. Empirically, Russia, about the same duration, about two years to retool their industry around defense production. They are now outproducing all of NATO on munitions. Russia. Russia. Yeah. Russia. Oh, I believe it. And we've sanctioned them to hell and they're still doing it, right? Well, they still have gas. They still have plenty of gas. And so it's quite tricky to think you're going to reconstitute this in a single day. I think the department has a lot of urgency on it. One of the areas where we see it showing up is with weapons. So when you look at these war-gaming sort of scenarios. All these war games are sort of questionable in their own ways, but pretty consistently, the stockpile of key U.S. munitions is exhausted in about eight days.
Starting point is 00:24:28 It's hugely problematic. And, like, that is because we have gone down this path of thinking that we'll be able to have this Gulf War strategy of concluding a conflict in two or three days, and that's how we're going to fight our wars. And it's just not true, right? Not for any of these high-intensity conflicts against any adversary that matters. That's right. It's not even close. We've got to be prepared to sustain these protracted conflicts. And that in and of itself is probably one of the best deterrent factors we can have.
Starting point is 00:24:54 Exactly. It's like we will not stop. We will not back down. We will have the capacity to withstand anything. That is a message we need to send to our adversaries worldwide. We have critical gaps on a lot of the kind of constituent parts of supply chain. This is a national security issue. So I think there is a feeling that this is a problem.
Starting point is 00:25:11 I don't think anyone thinks everything's going great. Now the question is, what are the strategies to get a way out? Right. I don't think there's any debate that we're on our back foot in terms of the capacity we need, the mass we need, the right systems we need. Now, like, how do you get out of it? That's a much harder question. And do it in a way that is going to work with Congress, is affordable, is actually something we can sustain. The path we're on is probably more incremental than revolutionary, I would say, with, like, the U.S. government, where companies like us are going to come in and win incremental new programs and show the different models. Yeah, so we'll be more innovative. We'll be more innovative. I think that flywheel is really starting to go. There's still a volume issue, though. It is a major volume issue.
Starting point is 00:25:48 And I think on the weapons production side, look, the only solve out of this is to actually tap into the commercial and industrial supply chains that exist. We're pretty good at building cars. We're pretty good at building electronics. Certainly the components are there. The components, for sure. We're building a lot of components. Like, we can do this stuff. Yeah. And you can design your systems in a way that takes advantage of those commercial supply chains.
Starting point is 00:26:02 Yeah. And you could design your systems in a way that take advantage of those commercial supply chains. Like one example we have is made like a low-cost cruise missile. Yeah. It's very cool, several hundred mile range. And we made the exterior fuselage in this process that you used for making acrylic bathshops. This is this hot press process. So, like, we're making the gas tank.
Starting point is 00:26:19 This is a mad scientist thing. It's incredible. And it's like the fuel tank is made the same thing as the rotomolding. You use it for, like, making plastic toys. And it works great. There's a huge supply base that's available to do these things. And in contrast it with most of these traditional weapons, where it's like overly bespoke components, we got to get the dude that knows how to solder this one thing out of retirement. And the supply chains are super deep, like four-year lead times on these weapons.
Starting point is 00:26:44 It's really, really bad. get into it. Like, I saw this thing. The defense primes were like, we need to change the federal account acquisition rules so that we can stockpile four-year lead-time parts. You're like, a four-year lead-time part. What are we doing? What are we doing? The world has changed in four years. Yeah, like, what is happening? And so I think there's a problem. But then the government doesn't help where they don't allow them to change the components. There's no incentive to change the components. Well, so this is the problem. There's no urge. It goes back to there's no urge. Exactly. And so, look, I think a lot of the traditional players are like Patriots and they really care,
Starting point is 00:27:15 but it's like they're in a system that doesn't encourage them or support them. I kind of boil it down to, like, two key things. One is meaningful redirection of resources. So, like, right now, the amount of money that's actually spent on capabilities, like the types of things we're working on, is somewhere between 0.1 and 0.2% of the defense budget. That seems pretty low. Even if we got to 2%. 2%. We are, like, in a wildly different world in terms of what you can do with that type of money.
Starting point is 00:27:42 You're making, like, a VC-sounding pitch? Yeah, if I could even get... look how big the market is. If I could just get to two percent. But that's actually very helpful context, all kidding aside. This is a crazy small number. It's a crazy small number. And even the small numbers are pretty big, but you really need to up this.
Starting point is 00:27:58 So, like, number one is make the hard choices to drive redirection of resources into the technologies that are actually going to be what you need, right? Where they're so stuck with these legacy costs. Number two is, every company in the world gets this, which is you need to empower good people to run hard at a problem and put all the things that they need to do it, and all the approvals and all of that, under their command to just get to yes. And yes, yes, it's very simple, right? This is how every company operates, and that is how you are successful: just empower good leaders to get results. Yeah, hold them accountable. It is the opposite of how it works in the Pentagon, where every time something has gone wrong, a new process and a new office has been added to check the homework and say no, and they stall progress out. And so I think there's relatively simple things that can be done with some combination of congressional action and executive action to flip that on its head and say, nope, these program offices are fully empowered to field their capabilities, and they are just accountable to senior leaders on the risk and tradeoffs. Yeah, and that's it. And you give them a budget. Give them a budget. Give them a target.
Starting point is 00:29:04 And they have to understand the risk. They have to do all this, but they're going to make informed choices on risk and cost and schedule and performance tradeoffs. Yeah. Like, that's their job. That's what we're hiring them to do. And if we create really empowered people to actually field stuff, you will get amazing results. Because there are really good people in the government. It's just there are 10 times as many people who say no as there are people who are accountable for dollars. Oh, that's fascinating. Ten times more people who hang around to say no than to say yes.
Starting point is 00:29:28 That's right. Could you do just, like, a Project Warp Speed for defense? I know that implies something short-term. Like, it's like a one-time catch-up or something. Yeah, this probably needs to be just, like, a permanent shift. I think you have to do both, right? So you've got to say, look, yeah, we need a Warp Speed for autonomous systems or weapons.
Starting point is 00:29:45 We need that, right? That's a no-brainer that we need to have. And in doing that, you can tease out what are those things that you cut where everything worked out fine and you just didn't need to do it again. And then in parallel, you do the painful and slow process of just whacking back all these, like, bureaucratic things that exist. I think you've got to do something right and use that as a template. And so these sorts of things that prove you can be successful, do more of them, go
Starting point is 00:30:12 at bigger scale, while also cutting back all the nonsense on things that just don't need to exist anymore. They made sense at the time. Now let's revert, walk back, and reset where we actually need to be for where we are. Like, tech has changed. The pace has changed. Reflect that in your process. It seems even before the stuff we were just talking about, when you got the company started in 2017, starting a company in defense was extremely unpopular. And when you talk about what you need to succeed as a startup, there's so many things. But capital, talent, relationships with customers, like, all of those things are way, way, way harder, or were way, way harder in defense in 2017, and in fact, like, radioactive for some in 2017.
Starting point is 00:30:53 A lot of the engineers and things, there's like a religiously opposed. Now it seems that there's this whole new burgeoning interest in defense startups, and we have an American Dynamism Fund, and lots of people are interested. How did that happen? Because it seemed to happen a little bit before Ukraine, too. Started to shift just before Ukraine. Yeah, what was the cause of that? So, yeah, when we started, I mean, the number of VCs who gave us, like, ethics interviews or just said no, or, look, like, my crass take is that Silicon Valley is, like, quite memetic, the VC world as well.
Starting point is 00:31:29 And once the mainline funds, like you guys, Founders Fund, General Catalyst, all came out and said we're doing this and, like, our valuation was high enough, then everyone was like... Then they got it. Chase, chase. I think that was, like, step one: it was sort of normalized. Yeah. Like, the mainstream VC funds were saying, no, we're doing this, this is important. I know Marc put out a post on it at the time. And so I think that was, like, the snowball then of, okay, this is succeeding.
Starting point is 00:31:58 It's actually okay. Everyone's been told it's okay. And then there was this catalyzing event around Ukraine. And then I think on the why-so-many-defense-tech-startups question, it's like, look, this stuff is, I think, very important work. It's also, as an engineer, just some of the hardest and most interesting problems you're going to work on. Yeah, yeah. So I think a lot of engineers grew up looking at Skunk Works and seeing the SR-71 Blackbird, all these wild things that the U.S. was able to pull off. That was your inspiration growing up as an engineer.
Starting point is 00:32:28 Yeah. Like, this stuff is iconic. People want to work on these things. And so I think it just really mobilized people who really cared about this. And then you have a ton of vets who are leaving the military and just want to solve problems that they encountered. And so you just have a ton of interest in working on it, now a ton of capital because they've seen our success, they know it can be done, and then just the social normalization of the whole thing. Yeah. It really flipped the narrative.
Starting point is 00:32:52 Yeah. And I would say the evolution of the sort of primitives for technology has actually advanced the opportunity big time, right? So, like, a lot of the dollars that would go to something like an aircraft carrier, which is untouchable for a startup, should go to smaller form factor, attritable, fully autonomous systems. You're 100% right. And this has been a big part of our strategy on this: we are leaning into everywhere where there's commercial investment. And so many of the things that historically have been, like, defense-exclusive are no longer the case. Totally. One of the examples of this is we built this electronic warfare system.
Starting point is 00:33:28 It's really cool. It's a jammer; it senses and jams radio signals. If we did that five years ago, 10 years ago, you would have custom tape-out chips. It's hundreds of millions in. It's a huge thing. So only government-funded things did it. It's on a really slow cycle. Well, now, with all the 5G tech,
Starting point is 00:33:45 the performance of these things is through the roof. You can just take commercial parts. And then just being the fastest to integrate and understand how to utilize these technologies becomes the advantage. Same with AI. It's like, we don't do AI model research. Yep.
Starting point is 00:33:58 We don't need to. Yeah. We just take the best things that are there. The best models. Yeah, exactly. So riding these tech waves has been a huge part of it. And that is the macro shift that occurred that the department hasn't reconciled with yet, which is, like, the innovation is
Starting point is 00:34:11 much more coming from the commercial world. So it becomes being the best adopter; it is no longer these 10-year tech roadmaps that the department controls. Yes, exactly. It is a totally different world we're living in. And so I think, yeah, the macro pieces of why a company like us can succeed: major technology shifts, like around where the innovation is coming from, huge geopolitical shifts. Yes. And then the consolidation of the existing industrial base with the bad incentives has led to an erosion of capacity. And so you combine all these things together and you're like, the conditions were sort of set for, like, us to be successful. Yes, yeah.
Starting point is 00:34:44 I don't think we could have done it five years later. It would be too late. Five years earlier probably would have been too early. It wouldn't have worked, yeah. I think we were in this, like, two-to-three-year window where we could ride all those waves correctly. Yeah. Brian, it's so fun to be with you.
Starting point is 00:34:56 Thanks a ton for spending the time. Thank you for what you're building, as your investor, but more importantly, for all of America. Thank you. Thank you.
