Closing Bell - Manifest Space: Istari CEO Will Roper on making a moon dust battery for Blue Origin 12/3/25

Episode Date: December 3, 2025

Morgan Brennan sits down with Will Roper, Istari CEO, on the latest episode of Manifest Space. They discuss the startup's big news announced today: partnering with Blue Origin on a moon dust battery, a device designed entirely with AI. They also discuss Istari's "ground" infrastructure that lets AI innovate within guardrails, preventing hallucinations while maintaining creativity. Plus, why connecting data without consolidating it could transform aerospace and defense.

Transcript
Starting point is 00:00:00 On stage at AWS re:Invent, Jeff Bezos' other company, Blue Origin, announcing a major aerospace breakthrough: a moon dust-powered battery that it hopes will keep spacecraft operating through the lunar night. The breakthrough, though, was how it was developed. A space part that was completely designed by AI, agentically designed. And it seems like that's the main headline, and it is. But equal to what the AI did is what it didn't do. It used the Istari infrastructure to guide the design. And what that
Starting point is 00:00:37 infrastructure does is put a set of guardrails around the AI so that it can be creative on the inside and produce cool things, but not go out of bounds where things like requirements or compliance, all those things that the lawyers care about, where those are safely out of bounds. And so giving the AI a creative zone and a hallucination-free zone is equally the headline, because it makes that part not just one that was made 75% faster than any of the parts Blue Origin has created in the past, it makes it arguably one of the safest parts ever made in aerospace history. Blue Origin partnered with a little-known startup, Istari Digital, founded by the Honorable Will Roper, a former senior Pentagon official best known for transforming the acquisition process at both the Air Force and Space Force during the first Trump presidency.
Starting point is 00:01:34 Istari, backed by former Google CEO Eric Schmidt, already works with the U.S. government, including as a prime contractor with Lockheed Martin on the experimental X-56A unmanned aircraft. Roper calls Istari a new infrastructure company, one that hasn't made AI hallucinations go away, but has found a solution to contain and ultimately nullify them. That's what Istari's platform does: it takes those things that have to be checked and turns those into the deterministic guardrails. That's the fence around the playground that you can't leave. And then within that playground, AI can generate to its heart's content. It may hallucinate on the inside, but you're going to know that if it did, it didn't cross a boundary of your determination.
Starting point is 00:02:24 In the case of Blue Origin's moon battery, it doesn't tell you the design was a good one, but it tells us that all of the requirements were met, the standards were met, things like that that you've got to check before you go operational. And I think that's what AI has needed. It's needed a set of deterministic guardrails so that it doesn't have to do both things. You can't ask AI to be a probabilistic generator, which makes it powerful, but gives you the risk that it may be pulling information from its compressions of its data set that aren't tracking to reality. Rather than try to solve that problem, put that problem in a safe environment where you can check the outcome. And that's exactly what Istari's infrastructure does.
Starting point is 00:03:12 It provides that deterministic framework. The framework itself cannot be hallucinated because of the power of mathematics. And then that gives people who have to approve the outcome of AI the confidence that what they're approving isn't bringing in something they wouldn't otherwise intend. If this is the case, it's game-changing for government and for industry racing to realize the full promise of AI. On this episode, the Honorable Will Roper on Istari Digital, the broader AI ecosystem, and how aerospace and defense are evolving, from moon dust batteries to civil aviation.
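To make the "creative zone inside a deterministic fence" idea concrete before the interview, here is a minimal Python sketch of the pattern as described. It is illustrative only, not Istari's platform or API; the requirement names, thresholds, and candidate design are hypothetical. The point is simply that a probabilistic generator proposes, while a separate layer of plain, checkable rules decides whether anything it produced leaves the playground.

```python
# Minimal sketch of "probabilistic generator inside deterministic guardrails".
# Hypothetical names and numbers; not Istari's implementation.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Design:
    mass_kg: float
    peak_power_w: float
    survives_lunar_night: bool

# Deterministic guardrails: plain, checkable predicates. This is the
# "fence around the playground"; it cannot hallucinate.
REQUIREMENTS: dict[str, Callable[[Design], bool]] = {
    "mass under 25 kg": lambda d: d.mass_kg <= 25.0,
    "peak power of at least 100 W": lambda d: d.peak_power_w >= 100.0,
    "operates through a 14-day lunar night": lambda d: d.survives_lunar_night,
}

def check(design: Design) -> dict[str, bool]:
    """Run every deterministic requirement and return a traceable record."""
    return {name: rule(design) for name, rule in REQUIREMENTS.items()}

def accept(design: Design) -> bool:
    """A candidate leaves the 'playground' only if every guardrail passes."""
    results = check(design)
    for name, passed in results.items():
        print(f"{'PASS' if passed else 'FAIL'}: {name}")
    return all(results.values())

# In the real workflow an AI agent proposes candidates and may hallucinate;
# nothing it produces is used until the deterministic check says so.
if __name__ == "__main__":
    candidate = Design(mass_kg=18.4, peak_power_w=120.0, survives_lunar_night=True)
    print("Accepted" if accept(candidate) else "Rejected")
```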
Starting point is 00:03:47 I'm Morgan Brennan, and this is Manifest Space. You're bringing some news to us. You're partnered with Blue Origin and you're announcing a breakthrough. What is it? Yeah, today Blue Origin is keynoting at AWS re:Invent, and one of the highlights of their talk is holding up a space part that was completely designed by AI, agentically designed. And it seems like that's the main headline, and it is, but equal to what the AI did is what it didn't do. It used the Istari infrastructure to guide the design. And what that infrastructure does is put a set of guardrails around the AI so that it can be creative on the
Starting point is 00:04:34 inside and produce cool things, but not go out of bounds where things like requirements or compliance, all those things that the lawyers care about, where those are safely out of bounds. And so giving the AI a creative zone and a hallucination-free zone is equally the headline because it makes that part not just one that was made 75% faster than any of the parts Blue Origin has created in the past, it makes it arguably one of the safest parts ever made in aerospace history. That's a huge statement to make. And I want to dig into how that's even possible here.
Starting point is 00:05:15 But first, maybe a little bit more on this part specifically. How quickly did this come together and is this part going to space? It came together really quickly. And what this part does is equally mind-blowing to the way the AI created it. So what it does is suck up moon dust and extract the heat from it, so it can be used as an energy source, like turning moon dust into a battery. And this is important because you go through these lunar nights that can be as long as 14 days, and you need energy to get through them. And that's Blue Origin's idea: we can turn
Starting point is 00:05:52 moon dust into energy, kind of like vacuuming at home but creating your own electricity while you do it. Super cool idea. And the way that the AI was able to create this part was to work with the Blue Origin engineers, understand in human language what they wanted, and then turn that into a set of requirements that was turned into a design that was run through a simulation, and then finally checked against the final desired outcomes. And the Istari infrastructure was running underneath it all, mapping every interaction, and making sure the AI never went out of bounds and did something that wasn't safe or compliant. So at the very end, you don't just get the probabilistic powers of the AI, which is what you want. You get the deterministic guardrail
Starting point is 00:06:43 of ensuring it didn't do things you didn't want. You put both of those together, you get a one-two punch. All right, let's dig a little more deeply then into that value proposition from Istari. What specifically does Istari do, and how are you providing these guardrails? So we're a new infrastructure company, and I joked that our infrastructure would be called ground, as an opposite of cloud, but it's not a joke anymore. That's what we're going to call it.
Starting point is 00:07:08 as an opposite of cloud, but it's not a joke anymore. That's what we're going to call it. And it's a very instructive way to understand it. So we all know cloud computing. All of our personal data is likely in one right now. It's a computer overhead, as we think of it, where we're sharing the IT. Our data's together.
Starting point is 00:07:29 We can access it anywhere, but we can't connect it anywhere. If you want to connect your data in a cloud to some random place that's outside of the cloud, that gets difficult. The cloud has a boundary. Ground, Istari Digital's infrastructure, is the opposite. There is no shared computing. The data lives wherever its owner wants it to live. In their IT, on their premises, in a cloud, we don't care.
Starting point is 00:07:58 And what we give them is a shareable connection protocol. The ability to connect that data anywhere without a boundary. And that's what we use to do the agentic workflows for Blue Origin. We connected their AI with their data, with various tools they were using for the design. We even brought in a very cool startup called Entop that had AI for CAD, connected them all together into one seamless system, but it wasn't consolidated. It was connected, not consolidated. And this was something I sorely needed when I was still in public service. I needed an infrastructure to connect data without
Starting point is 00:08:41 consolidating it. And it just didn't exist. And so that led to the idea of, well, if you need it, go create it. Don't just talk about it. And there's a lot I want to get into there, too, because I've gotten to know you over the years from when you served in the first Trump administration as a senior Air Force official as well, and somebody who really laid the groundwork for a lot of what we're seeing on the acquisition side and thinking outside the box and this idea of dual-use technologies that's kind of come to the forefront now. So I do want to get into all of that with you. But first, you made a big statement. And that is that you claim that this technology and your company have basically cracked the code on AI hallucinations. How? And how can you be so sure
Starting point is 00:09:25 that's happened? So, you know, let's put truth in advertising. We didn't really make hallucinations go away, but we put them in a safe space. And I'll give you an example from that past life of mine that you mentioned. I was the senior acquisition executive of the Air Force, but I was also the first one for the U.S. Space Force. And one thing I had to do in that dual-hat assignment was approve the first rocket to put a national security payload in space that was a reusable booster. We'd never done it in the U.S. military. Now, when I reviewed all the paperwork to approve that, I was not going through the performance benefits and the cost savings that we would have. I was going through reams of paperwork to make sure this thing was going to be safe, reliable, accountable, all of those things that if they go wrong and you didn't check them, well, you got some answering to do. That's what Istari's platform does: it takes those things that have to be checked and turns those into the deterministic guardrails. That's the fence around the playground that you can't leave. And then within that playground, AI can
Starting point is 00:10:39 generate to its heart's content. It may hallucinate on the inside, but you're going to know that if it did, it didn't cross a boundary of your determination. In the case of Blue Origin's moon battery, it doesn't tell you the design was a good one, but it tells us that all of the requirements were met, the standards were met, things like that that you've got to check before you go operational. And I think that's what AI has needed. It's needed a set of deterministic guardrails so that it doesn't have to do both things. You can't ask AI to be a probabilistic generator, which makes it powerful, but gives you the risk that it may be pulling information from its compressions of its data set that aren't tracking to reality. Rather than try to solve that problem, put that problem in a safe environment where you can check the outcome.
Starting point is 00:11:36 And that's exactly what Istari's infrastructure does. It provides that deterministic framework. The framework itself cannot be hallucinated because of the power of mathematics. And then that gives people who have to approve the outcome of AI the confidence that what they're approving isn't bringing in something they wouldn't otherwise intend. Yeah, so you're working with Blue Origin, but you are also laying out the sore need for something like this in government. You're working with government, too. It's interesting to me because the company is essentially coming out of stealth right now, but you already have active contracts and you're already working with customers, including the government. So let's talk a little bit more about that.
Starting point is 00:12:04 We've got some cool programs there. And we have to have a little bit of a public face when you have public contracts, but we haven't talked much about it. It's time, by the way. Blue Origin's announcement is a triggering event for me, that now the era of AI-driven engineering is here. It just needed an enabling infrastructure that I lacked in public service.
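As an aside, here is one way to picture the "connected, not consolidated" idea Roper described a moment ago, as a minimal Python sketch. Everything in it is hypothetical (the class names, the example records, the two registered sources); it is not Istari's "ground" protocol, only an illustration of data staying with its owner while a thin connection layer shares references, answers queries in place, and logs every interaction.

```python
# Minimal sketch of "connected, not consolidated": each owner keeps its data
# wherever it lives and exposes only a query interface; the shared layer holds
# references and an audit trail, never copies. Hypothetical names throughout.
from dataclasses import dataclass, field
from typing import Any

@dataclass
class DataNode:
    """Data stays with its owner; only the query interface is shared."""
    owner: str
    _records: dict[str, Any] = field(default_factory=dict)

    def query(self, key: str) -> Any:
        # Evaluated where the data lives; nothing is exported wholesale.
        return self._records.get(key)

@dataclass
class ConnectionLayer:
    """Holds references and metadata only, with no consolidated data store."""
    nodes: dict[str, DataNode] = field(default_factory=dict)
    audit_log: list[tuple[str, str]] = field(default_factory=list)

    def register(self, name: str, node: DataNode) -> None:
        self.nodes[name] = node

    def ask(self, name: str, key: str) -> Any:
        self.audit_log.append((name, key))   # every interaction is mapped
        return self.nodes[name].query(key)

if __name__ == "__main__":
    layer = ConnectionLayer()
    layer.register("requirements", DataNode("customer", {"peak_power_w": 100.0}))
    layer.register("simulation", DataNode("design team", {"peak_power_w": 120.0}))
    required = layer.ask("requirements", "peak_power_w")
    predicted = layer.ask("simulation", "peak_power_w")
    print("requirement met:", predicted >= required)
    print("interactions:", layer.audit_log)
```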
Starting point is 00:12:39 A cool example of a public program we're doing is with the U.S. Air Force and Lockheed Martin Skunk Works. So they're the designers of the U-2, the SR-71, the first stealth fighter, the F-117. They've done some firsts in aviation. Well, we're doing a first with them designing an experimental airplane, an X-plane called the X-56A, that'll be the first airplane that's digitally certified. Let me unpack that for you. So in one respect, the airplane is being digitally certified. The Lockheed Martin Skunk Works models and simulations can run directly against the U.S.
Starting point is 00:13:21 Air Force certification standards as if they were a single software company, but they're not. They're two separate entities, but they feel like they're developing software, even though they're developing an airplane. And so getting rid of all the paperwork is a huge reform of the acquisition system that I had to run a few years ago. But the second thing we'll be doing next year, which is also cool, is we'll be using those same models and simulations to create a digital flight envelope. The flight envelope is the set of conditions in which an airplane can fly.
Starting point is 00:13:57 The digital flight envelope is the models' and simulations' best estimate of what that is. The physical airplane has sensors on it, and it'll be measuring itself as it flies, similar to how a Formula One car has sensors on it that measure against its predictions. Well, we'll do the exact same thing Formula One does, but in the air. We'll be measuring the real performance, checking against the predicted performance, and if they're within the tolerance set by the Air Force, then the airplane will certify itself. It'll continue to open its envelope and not just do it
Starting point is 00:14:39 one time; it'll be continuously checking its certification. And I think that's a tremendous paradigm that we've lacked. That feels like a software process, continual compliance, continual safety, continual quality, coming into aerospace. And I expect it'll be the first of many. They'll all be themed under, finally, the software era is happening to aerospace. Software can eat the rest of the world, including aerospace, not just the world of our data on the internet.
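Here is a minimal sketch of that digital flight envelope idea: measured responses are compared against model predictions, and the certified envelope expands only while they agree within a set tolerance. The numbers, tolerance, and function names are hypothetical; this illustrates the concept, not the X-56A program's actual method.

```python
# Minimal sketch of envelope expansion by comparing measurement to prediction.
# Hypothetical tolerances and test points; illustration only.

def within_tolerance(predicted: float, measured: float, tol: float) -> bool:
    """Certification check: the measurement must track the model's prediction."""
    return abs(measured - predicted) <= tol * abs(predicted)

def expand_envelope(test_points, tol=0.05, step=0.1):
    """Open the envelope point by point while measurements match predictions."""
    certified_limit = 0.0
    for condition, predicted, measured in test_points:  # e.g. dynamic pressure
        if within_tolerance(predicted, measured, tol):
            certified_limit = condition + step   # continuous, not one-time
        else:
            break                                # hold the envelope here
    return certified_limit

if __name__ == "__main__":
    # (flight condition, model-predicted response, sensor-measured response)
    points = [(1.0, 10.0, 10.2), (2.0, 14.0, 14.3), (3.0, 20.0, 23.0)]
    print("certified up to condition:", expand_envelope(points))
```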
Starting point is 00:15:23 I mean, that sounds like it has huge implications. It raises the question, though, especially when you start to think about how this translates out into something like civil aviation and, you know, jets that have people on board: do you think regulators are ready to wrap their arms around the technology and the capability this presents? Well, you know, it's tough. Let me own up front: it's tough to do any change in government. And I've been there, and scars are on my back from the ones I attempted myself. But I'll tell you what tends to get government officials the easier button (there is no easy button): it's when a technology doesn't just make things faster. It makes them safer too. I lived that during my time with the Air Force and Space Force. Container technology came out for
Starting point is 00:15:57 software, and we were having trouble getting code from our development environment to systems like the B-2 bomber, and we didn't know why it ran in our cloud but didn't run on our jet. And then containers came out, and not only was it faster to get from the cloud to the edge, it was safer. We had more provable ways of demonstrating reliability, and that made the adoption easy. In fact, that was the gateway for bringing AI onto the first military system it controlled, the U-2 spy plane made by Lockheed Martin Skunk Works. Containers were a big deal because they were fast and safe, not fast versus safe. I think the same thing is going to be true here.
Starting point is 00:16:40 I mentioned that Blue Origin's part could be the safest part ever made, and here's what I mean by that. There's a mathematical map of everything about it from the time it was first conceived until the time it was printed. There are no human gaps or paper trails. There are no places that humans could inject an error. Math is tracking everything. And I'm not sure a part's ever been created with that traceability.
Starting point is 00:17:07 That's the traceability of software, and now it's coming into hardware. So I expect that that's going to make it a little easier for government officials. The fast part is great. The engineers will love that. The warfighters will love that. But if you've got to sign on the dotted line, the fact it's safer makes it a lot easier to get through the process. Which I think also raises the question,
Starting point is 00:17:31 how does this fit into the broader AI ecosystem, especially when you start to look at private sector applications as well, and you have all these LLMs and NVIDIA dominating the chip and inferencing market, but maybe Alphabet involved too? How do you think about what the value proposition is for Istari in this world and what that means for, I guess, growth for your company
Starting point is 00:17:56 with this capability. Well, if you think of the future of AI, and who can predict that, right? It's exciting. But AI is becoming the most valuable thing on Earth, and we have the large language models, the foundational models, but we also have a lot of specialized models that are being used, similar to the ones used to create Blue Origin's moon battery. So how do you connect these things? They're so valuable that companies that own them
Starting point is 00:18:16 aren't going to give them to anyone else. Even licensing them will be risky because of the risk of compromise. They're just small files of numbers, weights and biases. They're not big. The likelihood of loss could be high and the penalty could be catastrophic. So I think it's going to get harder and harder to share through traditional means. But then Istari Digital comes in and says you can protect your data through whatever means you want, and we'll give you the
Starting point is 00:18:55 ability to connect it wherever you want. One of our federal programs is going to do this next year for top secret data all the way down to unclassified, to show that we can take the most sensitive data in the U.S. government and connect it to the least sensitive data. The connection layer has no sensitivity to it. It's not even valuable to you unless you're where the data lives. But if you're where the data lives, it has such rich metadata and context that it allows AI to not just act effectively, but to act traceably.
Starting point is 00:19:32 All of its actions are mapped, and those guardrails are in place to provide you the psychological safety net: if you need to know the AI passed tests A, B, C, and D, all outputs do. That's what Istari does. It's a deterministic system. It does not have any AI generative components, but when it works with systems that do, you get real synergy, in the real sense of the word, not the government sense of synergy, a real one plus one equals three equation. And we're seeing customers across aerospace and defense, public and private, unlock this. Companies that you would not dream of are pushing the boundaries of
Starting point is 00:20:14 AI. They're just not talking about it publicly because it's their competitive advantage. But I think we'll have a little more to say on that in the new year. All right. Sounds like 2026 is shaping up to be a big year for you and your company. It is. I expect you'll have some more news from me in the new year. And, you know, it's time for us to grow. We're not a startup anymore. We're moving into that early-stage growth company phase, which is exciting. And we love our customers. Feats like Blue Origin's inspire other people to do the same. And I think we're going to be in an era of one-upping across the aerospace industry and outside of it. Any cyber-physical industry that has felt like we don't
Starting point is 00:20:59 get to play in the same internet as individuals now has a safe way to do that, while ensuring that their data isn't just as safe as it's ever been. It's safer than it's ever been. It's safer to collaborate than it's ever been. I love when fast and safe occasionally get together and have a beer. Will Roper, it's great to speak with you. Istari Digital is the name of the company, and we look forward to tracking it. Congratulations on today's news. That does it for this episode of Manifest Space. Make sure you never miss a launch by following us wherever you get your podcasts, and by watching our coverage on Closing Bell: Overtime. I'm Morgan Brennan.
