Main Engine Cut Off - T+315: Autonomy in Space (with Simone D’Amico, DJ Bush, and Al Tadros)

Episode Date: November 11, 2025

Simone D'Amico of Stanford and EraDrive, DJ Bush of NVIDIA, and Al Tadros of Redwire join me to talk about autonomy in space, to get into the specific details of what they're working on and how it... comes together, and what it may do for the industry in the next few years.

This episode of Main Engine Cut Off is brought to you by 32 executive producers—Joonas, Russell, Donald, Stealth Julian, Pat, Fred, David, Lee, Frank, Josh from Impulse, Steve, Joel, Joakim, Matt, Natasha Tsakos, Tim Dodd (the Everyday Astronaut!), Kris, Theo and Violet, Heiko, Will and Lars from Agile, Jan, Warren, The Astrogators at SEE, Ryan, Better Every Day Studios, and four anonymous—and hundreds of supporters.

Topics

Episode T+315: Autonomy in Space (with Simone D'Amico, DJ Bush, and Al Tadros) - YouTube
Simone D'Amico | LinkedIn
Center for AEroSpace Autonomy Research (CAESAR)
Stanford spinoff EraDrive claims $1 million NASA contract - SpaceNews
DJ Bush | LinkedIn
How Starcloud Is Bringing Data Centers to Outer Space | NVIDIA Blog
Al Tadros | LinkedIn
Redwire Space | Heritage + Innovation
NASA Starling - Autonomous Tip and Cue in Orbit
NASA Starling - Distributed Optical Navigation
NASA Starling - Autonomous Space Domain Awareness
VISORS - Precise Formation-Flying
Autonomous Spacecraft 3D Model Reconstruction

The Show

Like the show? Support the show on Patreon or Substack!
Email your thoughts, comments, and questions to anthony@mainenginecutoff.com
Follow @WeHaveMECO
Follow @meco@spacey.space on Mastodon
Listen to MECO Headlines
Listen to Off-Nominal
Join the Off-Nominal Discord
Subscribe on Apple Podcasts, Overcast, Pocket Casts, Spotify, Google Play, Stitcher, TuneIn or elsewhere
Subscribe to the Main Engine Cut Off Newsletter
Artwork photo by JAXA
Work with me and my design and development agency: Pine Works

Transcript
Starting point is 00:00:00 Hello and welcome to Main Engine Cut Off. I'm Anthony Colangelo, and we've got a special edition of the show today. We've got a couple of guests with us, a really interesting topic, but we're also doing this as a video show for the first time ever. If you don't count the live shows that I did that were recorded, streamed, and archived later, I guess those count. But this is the first one, in the traditional sense, that we're doing as video. And we're doing it that way, number one, just to have some fun. Number two, we've got some cool visuals about the topic at hand. So we're going to be talking about autonomy in space.
Starting point is 00:00:40 We've got Simone D'Amico from Stanford. We've got DJ Bush from Nvidia and Al Tadros from Redwire. We're going to talk about what that means when we say autonomy in space. Try to cut through some of the buzzwords that are going around in the industry right now because I feel like everyone's talking about autonomy or AI, but they aren't really giving you a lot of good examples. So this crew is working on some interesting things. They've deployed a lot in this area in space today. And I think they're the right set of people to help us understand what is really
Starting point is 00:01:10 going on in this environment. So I'm excited. It's going to be a good time talking with them all. I hope you enjoy the conversation. And if you're listening to the show, you want to check out the visuals. I've got a link in the show notes. If you're watching, welcome aboard. I hope you enjoy. All right, y'all, thanks for joining me so much on a slightly experimental version, to be honest. We don't usually do video in this particular podcast that I do. So who knows? This may be the only one ever like this, and it may be the start of a new thing. So we're going to find out together. I'm excited. We've got a really interesting cross section of skill sets and viewpoints and a lot of interesting crossover areas, I'm sure, as well. So to start us off, since a lot of people
Starting point is 00:01:51 will be listening only, I want to make sure everyone knows whose voice is whose. So we'll start off with the least surprising voice to name, Simone DiMico, would you like to introduce yourself? Yes. Thank you for having me. I'm an associate professor of aeronautics and astronautics and of geophysics by courtesy, and I'm the chief science officer of EraDrive, a new company spin-off from my Stanford lab. And we've got DJ Bush out there at Invidia? Yep, thanks for having me. Excited to be here.
Starting point is 00:02:27 DJ Bush, senior account manager, supporting our aerospace customers at Nvidia, and then the second hat I wear is leading our space projects across invidia as well. No news coming out of that department lately in the last week or so. I have not seen your name everywhere in the space headlines,
Starting point is 00:02:42 so we'll talk about that, I'm sure. And then we've got Al from Redwire, who somehow I've never had on the show, but I feel like we've orbited each other at conferences or other Redwire guests, but how's it going? Yeah, great. So I'm Al-Tadro's CTO at Redwire, based here in Silicon Valley, and I'm a big fan of your show. So finally get to speak on your show.
Starting point is 00:03:02 Were you not? I feel like you were on at least one time. We talked at a conference and maybe you grabbed a quote or two. But anyway, in full disclosure, I have the pleasure of working with Simone in his lab. And I also have the pleasure of working with DJ and the NVIDIA team. So this is a great forum and a great group to have the discussion. Yeah, and that's, I mean, some of the background is Omar and I were talking, and a big part of this that people already heard in the intro for sure is that, you know, there's a lot of buzzwords thrown around in the area of autonomy and AI and people just like to say it because it sounds really good and it plays really well in headlines or press releases or whatever, but you don't often hear a lot about the specific use cases or the ways that it's being deployed today or the kind of like underlying fundamental tech that's being developed and being worked on. And in my show, I typically like to look in like the two to five year head range,
Starting point is 00:03:59 right? Not super long distance where you're talking about all sorts of futuristic stuff, but really like practical, how is this going to change my work tomorrow? And just hearing what what you guys are all involved in, I feel like there's a ton of that. But we probably should start with like what we actually mean when we say autonomy. There's different levels. There's, different ways that that actually plays out in the real world. Simone, that might be a good one to start off on considering some of the examples I have pulled up here. Could you give us an overall sense when you think about autonomy, what kinds or what definitions would you use in that instance?
Starting point is 00:04:39 Thanks for the question. This is very important. Also, because there is no universally agreed upon definition of autonomy, especially for spacecraft, the definition depends on the field of work. But, for example, I can pull from a NASA vocabulary, and NASA defines autonomy as the degree to which a system can achieve mission objectives independently from human control. Now, this is a very general definition, and I think it's reasonable. However, academics, especially in robotics, where most of the advances in autonomy in general are happening today,
Starting point is 00:05:26 refer to autonomy also as the capability to adapt to circumstances or to environmental condition that have not been foreseen at the design state. So this is a little bit of a more, so to say, aggressive definition of autonomy from a field that is less risk-averse than what space is. If you want, I can get into a more granular definition and to the levels of space of autonomy that may know we can continue the discussion.
Starting point is 00:06:05 Yeah, I think, and the one thing that I thought, too, as you mentioned it, is like deriving it from NASA. One of the areas I feel like NASA has talked about it the most is in the concept of operations that they have for, like, human Mars missions, where they always are like, the crew needs to have autonomy make decisions on the fly. And it might be a good example to use that it's like, it's not robots doing it. It's not software that's doing this. But there is something about
Starting point is 00:06:27 the interaction mechanism there where there are instances that would be happening too quickly in a crewed Mars mission where mission control is irrelevant because of time delay. And there needs to be a crew level decision, right? In that case, you're subbing out software for crew. But again, like that's the longer term version of this. And that's more of like authority than autonomy, and I feel like they've used the word autonomy to say that when I think authority might be the better. But I think it would be helpful for like a more granular breakdown, because if you're, you know, somebody who writes software, if you put enough switch cases and if else together, you're going to make a thing that looks autonomous to people that don't know how your software
Starting point is 00:07:05 is written. But I think that's different than what we're talking about when we're looking at object classification or even large-scale constellation management. So, you know, are both of those things on that spectrum, or do you class, is there like a line where it's like, this is the start of what I would consider autonomous? Yeah, I can really pull from my experience in the last 20 years, I worked on several multi-satellite missions. So these are missions such as formation of satellites, swarms of satellites, or rendezvous and proximity operations, where autonomy is required in order to achieve the mission objectives. When we talk about autonomy for space, I like to focus on three levels of autonomy.
Starting point is 00:07:52 You typically find that as a number, there's two, three, and four. In level two, which is the past, which is probably the most common level of autonomy for a spacecraft, this is highly, satellites are highly micromanaged, orbit control maneuvers are computed on the ground and then simply uploaded to the spacecraft for basically replay. There is a very limited reaction of the satellites to events that can be anomalies, like entering some safe modes. Very basic. Now, a level three of autonomy, which is arising, it's emerging, and it's a lot of the research we are doing at the moment. Instead, even if it's still rule-based, but it's adaptive.
Starting point is 00:08:35 It's more complex in terms of the automated transitions between modes that are done on board, in terms of the coordination between the satellites, the decisions that are done at the mission level, especially for fault detection, isolation, and recovery, and collision avoidance. So these are maneuvers that are autonomously computed by the satellites on board without human intervention, all the way to level four, which is the future.
Starting point is 00:09:03 Basically, a human provide a high-level goal, and I can provide examples later, And the machine, the satellites in this case, is able to learn to determine objectives to be accomplished in order to meet that high level goal. So we are talking really about a cognitive spacecraft which might allow, for example, for a natural language communication with an operator that is able to do reasoning on its own in order to achieve this high level. the goal of the missions. I pose here for a more discussion. One aspect to that last part is that, is that, does that still, in certain ways, because of the aspects that we're working with here in orbital mechanics,
Starting point is 00:09:54 has to include constraints put upon that, right? Because if you deployed a constellation of communication satellites, they may realize, hey, you know what would be nice, a nice 400-kilometer circular orbit at 51-degree inclination, when humans would be like, no, let's not do that one, because that's where the ISS is hanging. and out. So maybe keep it a little outside of the keepout sphere of the ISS. So I don't, I don't think there's a point at which we achieve like escape velocity where we're not going to apply some human constraints to the systems, even at that upper end extreme. Al's making the inquisitive phase, so maybe you disagree there. Well, first of all, this is a great topic and it could go into
Starting point is 00:10:32 lots of technical depth. And the great thing about it is now people have more and more experience with their cars, having levels of autonomy, right? Whether it's lane following or car pacing or changing lanes. So people have some kind of intuition in this. But there are sections of autonomy, and I'll bring up war fighting, where you will not have the luxury of pausing and having a human, just like you wouldn't.
Starting point is 00:11:01 And the battlefield always have an option of assessing where your soldiers are or where your equipment is before they make some decision. And whether you call it authority or autonomy, you know, if you are maneuvering in space to avoid something, you might not have the luxury of waiting to say, is that new trajectory on the path of something that I'm not aware of? So you have to be able to, you know, incorporate all the constraints, all the important constraints, and the flexible constraints on board and make that part of the decision making. Now, that does not preclude us from putting in fault detection, isolation, and recovery, which we do anyway. There can be hardware failures.
Starting point is 00:11:40 There can be software errors. And we still have on board some kind of decision making of, okay, there is an issue here. What is the next step in order to resolve the issue? Oftentimes right now for space, the decision is go to safe mode, do nothing, point at the sun, and wait for a human to decide. Well, again, that isn't always feasible if you're in a combat mindset or a warfighting situation or landing a lander on the surface of the moon. You know, there, there's no time to sit and wait. So autonomy or even just a spacecraft that was weirdly designed. There's been cases I know about of like a cis lunar mission where safe mode was pointed to sun, but that made it so that they could not communicate with the Earth anymore, which was like, that was,
Starting point is 00:12:24 that was the wrong safe mode in that particular case. That's right. That's right. And you, so the fault detection, isolation and recovery or FDIR needs to protect against that. Okay, I've been looking at the sun now for a day. No one has talked to me. I need to now rotate, you know, the other direction to say, is it possible that, you know, they can't talk to me? So there, yes, there are constraints that we have to worry about, and then there are safety concerns that we have to put in. But just like in, you know, full self-driving cars, there are corner cases. And, you know, you mentioned the space station. Let's say the space station maneuvered, you know, and increased its orbit. You didn't get that update, but now it's in the way. So now what do you do? Well, you need to have a
Starting point is 00:13:07 backup, maybe onboard sensing that tells you, hey, there are residence-based objects in the way, and that goes into part of the autonomy that says the humans aren't always right either. It isn't just AI. It's misinformation by humans or coding errors by humans, and that goes into it. So the implementation and pragmatic application of autonomy goes to that level of complexity and layers of supervision. DJ, can we round out too? I feel like people are probably listening, having some preconceived notions
Starting point is 00:13:38 of why you're hanging out with us talking about this and how Nvidia plays into it all. And maybe those are correct, maybe those are incorrect, but how does what we're talking about here come into your world and the stuff that passes by your desk every day? Yeah, so I think, you know, one of the keys to kind of that next level autonomy
Starting point is 00:13:54 is the reasoning capabilities of these systems. And for us, that goes back to how we're training them. And so there's a lot of different. different areas that present challenges when you're looking to train these autonomous systems and what when we look at space exploration are largely unknown environments and so that's how how full of fidelity can we have for these different digital twin environments so how can we you know better model the lunar surface so that our perception stack can readily understand hazards and where it is and then it can then learn to reason okay how do I avoid these hazards and not go into a crater run into
Starting point is 00:14:25 a boulder things like that so you know there's a big effort on the training side of it which is what we pulled from a lot of the work we're doing on our robotic side. And then I would say the other part of it is, and how do we enable enough compute at the edge to then host these reasoning models and these detection models so that we're not having to downlink large data sets. We can have insights where the data is being generated, and then we can start to actuate based upon what we've trained upon, the new inputs that we get from the different sensors, and then move to our next step. The first half there is interesting too. I don't, didn't necessarily have this in the rundown, but I should have, which was a lot of the modeling and
Starting point is 00:15:00 simulation that has to happen either in the development of these systems or in the testing of, you know, once you have your idea of, all right, this is my constellation, this is the jobs I'm tasking with it, I want to run that at full fidelity. Is that kind of like where the initial Nvidia connection to space happened before you started putting H-100s up into orbit like happened last week or, you know, how much, what is the breakdown in terms of like the space projects that come across your desk? Is it all stuff that's being deployed in space or is it a lot of stuff that's being deployed here on the ground, either data centers or, like I said, simulation environments.
Starting point is 00:15:34 Yeah, I would say traditionally it's been more on the simulation side. You know, we've had our GPUs in space since, you know, two generations ago. Ron Thor is our latest jets in line. You know, we've had it since Xavier. So we do have some heritage there. I would say, but like I think we're the groundbreaking work that we're doing is really on that simulation side that we're deploying. So, you know, if we can take a small data set that we have from our existing, you know,
Starting point is 00:15:57 lunar imagery, for example, and then we can add better fidelity to that. We can make sure we improve the lighting conditions so that our sensors are, you know, the whole goal with our robotics pipeline is that we don't want our sensors to know that they're in a simulation. And so when we go to then operate in these environments, it's a like for like. And so the large effort, I think, where the value lies is how do we safely train these autonomous systems in a high fidelity simulation, so we're not sending things up and then hoping for the best when they get there?
Starting point is 00:16:25 And so a lot of the work has been around that, you know, the physics simulation, obviously, the physics in orbit and the lunar surface are very different than training a robot in a factory, right? So how do we have the integration of those two and how do we make sure the simulations are physically aware so that we can understand the orbital mechanics and how that may impact, you know, an orbital burn or how we need to then do different trajectories. So a heavy part of the work is on the simulation side and that's what we call, I think the best way to capitalize is our three computer problems. So, you know, that's how we approach our robotics. So the first computer is just how do we train the AI and the policies of the robot. The second computer is then that validation through simulation, so a high-fidelity simulation to make sure that it's operating how we want it to. And then the third is that edge deployment so that we can then actuate and then process at the edge.
Starting point is 00:17:10 Before we dive into each of those verticals that we're talking about, software, hardware, sensors, simulation, I want to get to kind of lay the land of how you three would interact, either through the Stanford research that's happening, but really even on missions as well and Al might be a good person to talk to this since I think there's a cross-integration here where you're building platforms for customers
Starting point is 00:17:32 that are going to make use of software and fundamental research that's happening over in Simone's world using hardware that's coming from DJ's office so give us a sense of the interaction and how red wires involved in all that but also maybe like you know who comes first in this process is it is it pretty clear that you've got to define the mission up front and then work your way back
Starting point is 00:17:53 through the stack, or is it more cyclical than that? Yeah. Yeah, great question. And you're going to make me think about what did we do in what order here. But first of all, you know, Nvidia and Stanford and Redwire first didn't have a strategic business plan out from the outset, right? Stanford's been doing fundamental research and a number of areas, including AI, and Nvidia's, of course, been building GPUs for gaming and other applications for decades and developing then unique software for accelerated computing.
Starting point is 00:18:29 So that all has converged and one of the points I want to make is that Red Wire is looking at it from where does it take, where does AI, where does Accelerate Computing take our space systems, our solutions well beyond where traditional approaches, not just duplicate what we can do with traditional approaches,
Starting point is 00:18:52 but it takes us well beyond. And that's what we wanted to do is say, okay, how do we take autonomy, not from an incremental step, but jumping, leapfrogging ahead. So we were working and we're a sponsor of Caesar, which is a research initiative for AI-enabled autonomy at Stanford, and we were working that element of it.
Starting point is 00:19:14 We saw the capability that Omniverse and other tools have that NVIDIA has been developing for the terrestrial markets. And the question was, where can we apply that and advance it in space? And then we piece those together with our customer's requirements and our customer's vision. And we say, where can we apply these to leapfrog to the next level, next level of civil space, of commercial capabilities, cost reductions, or warfighting? And that's where we get the real pragmatic requirements and drivers to say, where do these elements advance? And I'll just give you one example.
Starting point is 00:19:58 Machine Vision has been, you know, progressing for many years. Some AI capabilities right now can leapfrog dramatically what machine vision has been doing. And that's one of the areas that we're looking at to enable autonomy with Stanford as an example. Of course, compute power in space has been notoriously lagging, notoriously lagging to our frustration. And that's something that, you know, NVIDIA and others, but NVIDIA has been real leader in demonstrating the marriage of compute with software. And that's an example of where we intersect then to bring solutions to our customers. Since Cesar was mentioned, Cesar stands for the Center of Aerospace Autonomy Research at Stanford that was co-founded by me and Professor Pavone, since we pioneered AI for space applications,
Starting point is 00:20:59 especially starting with the first convolutional neural networks and transformers for spacecraft navigation. Then we realized that the topic is so multidisciplinary where you need to combine skills that are rarely available in one single person. Think about the expertise in astrodynamics unit, the expertise in machine learning and artificial intelligence. How do we make that happen? And how do we enable the broader impact of these technologies? And so we thought about the triple helix model.
Starting point is 00:21:37 the model where we combine academia with industry and with government, and so Caesar was born as a place where we can work together with these entities and transfer the technology more rapidly into the actual field. And Redwire is one of our sponsors together with Blue Origin and the government to really try to deploy AI models in space. When you're tackling, you know, you sent over a ton of, examples that I want to go through some of these because they're really cool visuals and I think help really understand what's happening between these small constellation and starling with G, not the
Starting point is 00:22:17 one with the K. Always important. You guys were colliding on names there for a little bit. But there's a couple different branches of examples that you sent over. So I'm curious, you know, is this something that up front you guys sat down and said like these are the problems that we need to figure out how to solve, we need to show these different things implemented on an actual mission, or we know we need this capability? What is the start of these different branches of research that you're doing? This is a great question, because it really connects to a very long path, I mean, of my career and what I have been developing stepwise in terms of an increasing complexity in space and understanding what's possible.
Starting point is 00:23:07 So, for example, in previous mission, we have seen that we can use a camera in order to navigate with respect to another satellite, but with some limitations, with heavy supervision, you know, from the ground, downlink in the data, working with only just one target visible in the camera, and then we realized that we can do much more, especially now that there is a proliferation of space assets and where, you know, if you are in lower orbit and you look around,
Starting point is 00:23:32 you start seeing a tense of residence-based objects in the field of view of your camera. So this starts becoming beacons. And so how do we navigate relative to this larger number of satellites? And how can we do that autonomously on board, which is a rising need for several applications, scientific, commercial, and military? And so we conceived missions and experiments in order to... to increase the technology readiness level for this capability. And you mentioned Starling, which is the name of a bird species.
Starting point is 00:24:12 They like to flock, and they have some quite interesting consensus capabilities when flying in flocks. And that's the origin of the name of the NASA Starling mission, where for the first time, we have demonstrated the capability of a swarm to navigate, not only with respect to one another, the satellites of the swarm, but also with respect to other resident space objects of the population that is out there in lower orbit using only optical means. So basically it's a new era of spaceflight where thanks to the fact that space is feature-reach compared to the Voyager era where we had one satellite, now we have thousands of them. And so we can use them as beacons similar to a GPS system but using light in order to navigate. in order to reason, in order to cooperate with other satellites, identify threats, etc.
Starting point is 00:25:10 Is this this video that might be a good example of this, where you're kind of seeing this cross-section of what's happening on the Starlink mission? Yeah, this is one of probably the most advanced experiments we conducted. So on the left side at the top, you see, a coordinate frame, which is the origin of that coordinate frame, is within a satellite of the swarm. And then you see in blue and red, the trajectories that are flown by other satellites of the swarm with respect
Starting point is 00:25:47 to the green dog. And then you see those dash lines that refer to moments where these satellites exchange measurements with one another from their imagery. The imagery is shown on the right. You see those videos. These are actual images collected by observers during this experiment. And at the bottom, you see that we are able to navigate, determining the orbits of the observer, the orbits of the other satellites, fully autonomously using only visual navigation. And this is the first time that such a coordinated or a distributed optical navigation system is deployed in. orbit.
Starting point is 00:26:31 So when you say that it's doing the sort of orbit determination, is that something that it can do totally on its own, or does it need the catalog of TLEs and some knowledge of what these things may be and where they may be, or is it purely calculating it based on knowing that it's around Earth in this particular instance? Yeah, so in this mode that you see for this experiment, there is no a priori knowledge available for the satellites that are being observed and tracked in the field of view. So we are not using tool and elements.
Starting point is 00:27:10 We are not using extra means that are provided from the ground, except for the observer. The observer satellite has GPS available on board, and you need an anchor. At least one of the satellites need to know its own orbit. and then through this network, they are able to navigate with respect to one another. Now, there is another mode of operations of the system we have in orbit, which is used, however, for resident space objects which are observed just in one image, one snapshot. And this happens all the time because you have a satellite passing through the field of view.
Starting point is 00:27:45 And in that case, in order to identify it, to determine what that satellite is, we use a space object catalog which we upload. to the satellite from the ground. And so we use that space object catalog to determine who is Sue. And eventually, yeah, this is a great time to show this video where we are doing this on board at the edge.
Starting point is 00:28:10 You see when the video poses and there is extra information provided there, we have identified an object. This is a Venezuelan, for example, F-Observation satellite. This is a satellite from SpaceX, Starlink, that we are able to identify in the field of view.
Starting point is 00:28:26 Now, this goes beyond just space domain awareness and, you know, enabling space safety and security. This is an upper stage of a, from Rocket Lab, or this is a Soviet cosmos satellite, et cetera. So it goes beyond space domain awareness that is done autonomously on board, but get into the capability to navigating space without extra means just by knowing, you know,
Starting point is 00:28:53 the a prior orbits of these beacons. One thing I was thinking about with respect to the hardware side, DJ, is the data set size of space today is like hilariously small to a lot of the other data sets that are being worked here on Earth because of, you know, and this is one thing that always comes up with people worrying about how crowded is space and is everything going to run into each other. It's like if you have 10,000 volts wagons on the surface of Earth,
Starting point is 00:29:20 like they're not going to be that close to each other and now get even bigger than that because, you're going out in these increasing larger shells, like, it's busy, but it's not as busy as you think it is. So you think about the satellite catalog, you know, it's like a couple thousand active satellites, thousands of pieces of debris, but that data set is nowhere close to data sets that were being used down here on Earth for all sorts of different variety of things. So is that something that when you're working in the space area, you know, what is that comparison like and how does it impact decisions to be made on what hardware is appropriate or
Starting point is 00:29:51 replicable to these space areas right now and how that might change as things get busier with constellations launching, you know, tens of thousands of more over the next couple years. Right. Yeah, you know, I think most of the decision, honestly, comes from, like, power availability and then how much compute can you support with that, right? Like, I don't think the data set has typically been the decision factor. It's, okay, how much compute can I get in here and can I support all my other instrumentation while driving that power to those processors? So that's typically been, and we've benefited from having, you know, the leading of the most tops for the lowest wattage on our Jetson line so that we can get into these 3Us and these nanosats and really have a ton of processing power that can be at the edge. And so I don't think we've run into too many datasets that have pushed past that processing power.
Starting point is 00:30:38 Now I think when we get out of like the standard Leo example and we start getting into what that next wave of emissions is going to be around persistent lunar colony and gateways and mining and research. resource extraction and logistics and all these things, I think as those tasks become more complex and those data sets are going to explode, and so that's going to require a larger amount of compute. But those systems are going to be larger as well, so we're not going to be trying to do that with the 3U nanoset. So I think, you know, the curve should generally follow each other, but we typically haven't been data set constrained. It's usually how much processing power can we get on board. So, Simone, when you're tackling these different use cases and these different missions, when you are beginning the earliest phases of development and you're figuring
Starting point is 00:31:18 out what you even need to do to accomplish the task. How does that eventually make its way towards the hardware suite that does need to be deployed in that case? Like, is that something that you're passing off to the team that might be building that satellite or maybe say you're going to go build, you know, something on a red wire bus or you're going to integrate, you know, sensors from across the industry? Are you saying this is the bare minimum that I need to achieve the thing that I've just shown here on Earth in the demonstration of this technology? or what does that flow like to actually build out the spacecraft itself?
Starting point is 00:31:50 Yes, we follow the system engineering guidelines that have been defined by NASA where actually they date back to the Apollo program and they are still really the golden standard if you provide a proper interpretation to them and streamline. And so we start from the game, key objectives and then we derive requirements for each of the subsystems.
Starting point is 00:32:21 There are several field of expertise that are needed there in order to get down to the selection of components for this system is an approach of divide and conquer, basically. And what we do is provide our expertise in that design for what aspects related to orbit design to guidance navigation and control, which is crucial to all the advanced mission we are working on right now. So to give an example, VISA is a precise formation of flying mission that we are expected to launch at the end of next year. It's a virtual telescope made of two propulsive 6U cubesats.
Starting point is 00:33:09 These are shoeboxes that align very precisely. at 40 meters distance to form a virtual telescope with a 40-meter focal length and achieve very high-resolution imagery for the application here we are talking about the corona of the sun to understand the energy release mechanisms. And so you understand that in order to go through that system design process, we need the control expert, the astrodynamics expert, to speak the same language of the scientist that defines what are the objectives of the mission.
Starting point is 00:33:50 And so from there, we need to find, you know, the best trade-off in order to accomplish, so to realize that instrument with a minimum cost, with a minimum complexity, the maximum reliability. When it comes to two shoe boxes, you know, that are 40-meter distances, you have a problem of safety. for example, how do we align them so precisely? We're talking about centimeter accurate relative position control in order to take that high-resolution imagery while avoiding collisions between the satellites in the case of anomalies.
Starting point is 00:34:22 And so we need to really close tightly in a concurrent engineering approach with all the other subsystem experts in order to design the system. This one is a really cool example. Oh, go ahead. Yeah, no, I wanted to emphasize what Simone just described is
Starting point is 00:34:38 part of the critical importance of modeling and simulation. These are systems that will never really be built fully to operate and test on the earth the way that they're going to fly without good modeling and simulation capabilities, including synthetic data for scenarios
Starting point is 00:34:55 that you may never be able to record in orbit, may never get into, but need to make sure you set the accommodations for in the event that you do see them. Without that modeling, and simulation capability, you really can't capture all the nuances that are going to go into
Starting point is 00:35:13 the power management, the thermal management. And we've had systems that have been launched and gotten to the moon, seen a scenario that they didn't expect. And all of a sudden, their lifetime is limited because they didn't realize, you know, there'd be reflections off of the side of the crater or there be, you know, shadowing from, you know, this object and so forth. I made us say what we're thinking of on the count of three. We would all say different lunar missions that have happened in the last five years.
Starting point is 00:35:42 Exactly. Exactly. Exactly. But they're just so complex that, you know, you can model the moon, you can model the sun, you can model the radiation. Without having all that in a, you know, in a collaborative interacting environment, it's really hard to get to, well, what is it that the power levels and what margins do we want to add to the power levels because there are going to be degradations along the way. So what Simone is describing is some of the, both complexities, but the fun of engineering these systems. I mean, as an aerospace engineer, this is why we're doing it. It's a miracle in many ways that we actually get them to work at all because of the complexity that goes into it. But there are engineering processes that he's described in tools that go into a methodical system of designing space systems. Can you talk a little more about that on the red wire level itself? on like how you guys are approaching that because just you know I hear a lot of scuttle butt from the industry and I've had it's a small industry where everyone knows each other so then everyone knows what went wrong with everyone else's missions and then I end up hearing it not from the person who's mission it was but from someone else who was like oh what happened there was X Y Z because I know because my friend works on that thing and there's been a bunch of examples in last couple years where somebody didn't fly or they flew before they did like a full up complete system test where they were factoring in and
Starting point is 00:37:06 power generation with communications and can we actually do both things at the same time or do we have to choose one or than the other and how does that impact the actual orbit that we're on? So what does that whole life cycle like? Yeah. So first of all, just in full, I mean, I've been in the industry for 35 years. You know, I went to college in aerospace engineering.
Starting point is 00:37:27 My focus was guidance, navigation, and control, which if you're familiar with what controls analysis and controls engineering does, oftentimes it's on console or very integrated into the operation of the satellite in orbit. So not only do I get to work on the design and test through my career, but I get to see the end product in operation. And you get to those points where you're saying, huh, I wonder why it's doing that.
Starting point is 00:37:52 And you designed it. You did it all. But that's, so I've seen a little bit of everything in my career. And it's humbling. It's humbling that the, you know, the amount of information that humans can retain and actually, you know, you know, confidently recall and use versus, you know, the complexity of space systems.
Starting point is 00:38:17 So, and hence, I have a huge appreciation. Even when I fly from San Francisco to D.C., I have a huge appreciation for the airplane that I'm flying in and engineering that goes into it. Some of what we're doing at Redwire, because of that, because of that appreciation, and I give kudos to our CEO who is the biggest champion of this is a very strategic initiative called Dempsey. It's a digital ecosystem for mission and system integration. And basically we're taking a lot of the COTS analysis tools,
Starting point is 00:38:50 the aerospace tools, a lot of the models, and integrating them into a cohesive modeling, simulation, and engineering system. And by that, you know, the structures, engineers are collaborating with, thermal engineers with the GNC engineers and so forth. And we can actually not only render model and simulate before we even get to the proposal level, what the space system is going to do, how it's going to operate, and we can envision and demonstrate different design reference missions or scenarios on the output. And sometimes from that we go back to our partners and customers and say, hey, look, you know, this is the requirement that will match, you know,
Starting point is 00:39:33 what your objectives are. These are the requirements. And sometimes it's, we need to, you know, optimize this and move that. But without that, you know, cohesive, unified modeling simulation capability, you really can't get to that end point of, okay, how is this going to work in reality at the end? And this isn't just for full-up missions. This could be for a rollout solar array that we built for the space station, where it could be for camera systems that we put on the Orion spacecraft and capture images and do things like optical navigation. So that is something that we believe very strongly. And Dempsey is the name of our digital engineering ecosystem.
Starting point is 00:40:18 And with that, as a foundation, we can then start to look at even multi-domain operations. So we recently acquired a UAV or a UAS company called edge autonomy, and a lot of missions are working in atmosphere, out of atmosphere, and V-Leo at the edge of the atmosphere, and start to model and simulate multi-domain operations. So that's something that we're passionate about because of the complexity that I mentioned and something that we're doing, investing in internally at Redwire with so far really great results from internal involvement and benefits, but also our partners and customers who see that we're doing that. I'd like to really underline the importance of this aspect. I mean, there is no shortcut. And I'm not only talking about the capability to simulate through, for example, a digital twin, the system before deployment in space.
Starting point is 00:41:20 But what we pursue is a hybrid digital robotic queen. The hardware has to be included in the loop in those simulations. As it was said earlier, the hardware basically is poofed. I mean, it should not realize that it's actually not flying. space, but it's in the lab, in a so-called Fletza. And sometimes due to costs basically
Starting point is 00:41:45 some corner shortcuts or some tests are not conducted, and these hybrid digital robotic twins in the password called software in the loop versus hardware in the loop, they are not basically implemented in the most comprehensive
Starting point is 00:42:02 way. And for example, for the NASA Starling mission, just to give an idea of the lessons learned, you know, due to the, let's say, constraints with the launch time due to the reduced cost of the program. We had to sacrifice some
Starting point is 00:42:17 of the tests. And then what happened when we first activated the software on board, it crashed. The software simply crashed at the beginning. And then we had to investigate on the ground, basically doing those tests that we didn't have the time to do during the
Starting point is 00:42:33 development. And we simply realized that the memory that was allocated to the software stack on the microprocessor world was too small for certain execution paths. And so what we did was simply to allocate more memory to the application. And then all of a sudden, the magic happens and everything was working seamlessly.
Starting point is 00:42:54 But there are other problems, like time synchronization, maybe a stat tracker and the GPS receiver, they start providing working on time information that depart from one another. And you need to test that on the ground. So it's not just digital quinnying, but it's a digital robotic twin of the platform that is strictly needed. Yeah, I want to add on to that.
Starting point is 00:43:16 We take digital engineering not just as the modeling and simulation, but then the capture of the requirements, the capture of the risks, and the flow down into the program. It's critically important to engineering that we don't just model and simulate, but we actually then know the design requirements that we're going to work to and how we're going to verify those requirements. And that's all part of digital engineering. And I want to give kudos to, you know, Nvidia for what they're doing with, again, with a lot of their software where they don't just stop at the compute. But it's amazing what they do with some
Starting point is 00:43:53 industries in the modeling of whole factories, of whole, you know, things that you couldn't even conceive of, you know, starting to analyze, you know, without this kind of capability. And then they know that they can achieve their production rate, their, you know, yield rate. rate and so forth. And I think, you know, Nvidia is doing a really good job with that in other industries. And hopefully, we can pull that forward into the space. And maybe DJ wants to say more to that himself. Yeah, you know, I think we talked about a little bit earlier, right?
Starting point is 00:44:21 You know, if we look at autonomous space system, to us, that's a robot. And instead of doing it on Earth, we're doing it in space, right? And obviously the physics are wildly different, but that same training, validation, and deployment approach remains the same, right? So we've had a ton of innovation on the software side, that enables us to do a significant amount of work on a very little amount of compute comparatively, right? And so what would not be possible on a COTS FPGA, you know, we can host a sizable VOM at the edge that we can then use to actuate and understand and contextualize our operational environment. And I think that SIM to Real Gap is really where that value is, right?
Starting point is 00:44:56 You know, we don't want our sensors to know that they're in a simulated environment like somebody was testing with hardware in the loop. We want them to think that they're in the real world testing. And it's expensive to test a robot on Earth and then get the real world feedback. and it's a thousand times more expensive to do that in space. And so the more confidence that we can bring to that training cycle, the more risk that we can reduce, the more varied scenarios that we can throw at these systems, the higher level of confidence that we have to go deploy that.
Starting point is 00:45:19 And that's been a huge benefit of what we've done here terrestially. And then now we're working with our great partners like at Stanford and Redwire to then add that space layer on top of it and achieve those same outcomes. We don't have a ton of time left, but I want to make sure we talk about some examples of what we can do with this. this entire stack that humans can't today. A lot of, you know, there's a lot of news of like, oh, this company is replacing all their people with LLMs or whatever.
Starting point is 00:45:46 And it's like, yeah, that's not really the interesting stuff. It's like, what can you do that humans can't do yet? And I think we saw good examples earlier, Simone, of like, the, you know, categorizing what satellites are flying by you at any moment. I don't think anyone's going to be like, that was Starlink 3057 that just went flying by your window. But those are things that, you know, a human could do if you were given sufficient amount of time,
Starting point is 00:46:06 but you don't have that time, so you can't do it. And the example that you had of the visors mission, I want to make sure I pull up this video because it was, this one took me a little while to understand what I was looking at here, but I think it's a really cool example when you're showing, you know, the way that these two satellites interact,
Starting point is 00:46:21 as you described earlier. I don't know if you want to talk through some of what we're seeing here in this, especially on the upper left. Yeah, you see the two dots. These are the two satellites. And we leverage the natural dynamics because we want to spend the least propellant.
Starting point is 00:46:35 And so you see this relative orbits, you see this ellipses that are flown by one of the satellites that is aligning with the other spacecraft in order to form the telescope for about 10 seconds. The 10 seconds is the integration time that the detector need in order to take that high resolution image of the sun. Now, this has to be done in a safe manner. And so these, they are called passive relative orbits as such that if there is a fatal on the propulsion system, then the two satellites will basically cross one another with a missed distance, with a guarantee of no collision. So that's why you see this fancy relative motion between the satellites. Because they align with the sun, they take the image, but if something wrong happens, then they simply move in an helicoidal motion with respect to one another, avoiding each other.
Starting point is 00:47:30 That's part of the really safety, the same thing. safety concept. And nobody has ever flown such a mission with cubesats, you know, shoeboxes, with that level of precision to obtain a resolution, which is similar to the James Webb Space Telescope at 0.1 arc second, speaking about geometric resolution. And basically making a huge focal length out of these two spacecraft next each other. Yeah, and that's an example of like, yeah, you could do that. You could be the person flying these two cubesets, but you wouldn't want to to do everything that's involved with this.
Starting point is 00:48:04 And maybe you couldn't even actually achieve the centimeter accuracy needed there. Autonomy obviously strictly needed in order to achieve these objectives because whenever the time is critical, so maneuvers have to be executed at the time of every minutes in order to achieve the desire, control, accuracy.
Starting point is 00:48:24 And whenever safety is critical, because you have to avoid collision. and the human operator will never be able to react in time, will never be able to provide that control profile in order to align the formation. This is a typical example where autonomy is strictly required. Now, I'm not saying that AI is strictly required here for this mission to be successful.
Starting point is 00:48:46 AI would be required to achieve a level of autonomy number four, which means the formation will be able to realize how well the solar science, data are and then decide to change the part of the sound that is being observed, change the detector settings autonomously in order to get the most science out of it. This is a reasoning that would happen on board that only AI with a capability to process unstructured data and solve a very complex optimization problem and a non-convex optimization problem online will be able to achieve. That would be advisors AI boosted that could happen
Starting point is 00:49:27 in the future. Yeah, and I think that combination of those multimodal inputs is very important, right? Because once you take the human out of the loop and you introduce the latency, having that autonomy and the AI to then synthesize that and then reason what to do next with those multimodal inputs is something that is critically important as we look to reduce human presence, increase human presence in certain areas. Complex tasks are now part of the mission profile. So I think that's where that really, hey, what could we do with human?
Starting point is 00:49:57 we could send it all back and have someone analyze it and then say, okay, now go take these five steps. But does the mission profile allow that to happen? It's going to be accurate enough. Do we miss something? Are the physics correct? You know, there's a lot of things that go into it. I think that's where the significant value of AI at the edge is. How do you see the use cases?
Starting point is 00:50:16 Go ahead, Al. Sorry. Well, I was just going to mention maybe two cases where, you know, AI might enable, but autonomy is critically important. might depend on AI tools. One is something that I'm very interested in, object recognition or characterization before it's even resolved. So in space, distances are very large, as you mentioned. Very rarely do you see anything like you see in the movies where there it is and it's
Starting point is 00:50:43 right in front of you and it's somehow... You're saying gravity was not real? Are you saying you didn't like the movie? Yeah, okay. She flew to the Tongong with a fire extinguisher, I think? That's right. No fire extinguish is going. So the reality is most of the stuff is unresolved in space.
Starting point is 00:51:00 They look like blurry dots out there. And there are AI techniques to understanding what satellites, you know, traditionally look like. They might have solar panels. They might have reflective services or thrusters or antennas. But taking that a trained model and looking at, you know, a set of unresolved, you know, pixels and starting to characterize it. So you can start to apply machine vision to objects in space.
Starting point is 00:51:26 space that, you know, typically are not directly in front of you with, you know, full resolution. The other is the direction that the world is going, which is proliferation, whether in space or air, and it's really, really hard. It might be possible for a human or a set of humans to operate, you know, a thousand satellites even, but to operate a thousand dynamic satellites where they're, you know, maneuvering and so forth and understanding the realm of the ultimate objective, the condition of each satellite, because regardless of your manufacturing process, they all turn out different, just like kids. So that is the task for the modeling simulation and training tools that AI brings forward. So I think those two realms are things that
Starting point is 00:52:13 just were not possible before that we're going to start seeing much more feasible, cost-effective. Also, go ahead. I'll show the example of the imaging. I think this is one that Simon and over as well where I don't know this is beyond my pay grade to describe what your example has here Simone well this is a great example because it's one of the research projects that we're doing in collaboration with Redwire and funded by Redware where we are using the most most recent techniques which are data driven so you can you know they fall in in the era of AI, in order to recover the 3D model of a spacecraft from sparse imagery and how can we accelerate that process to make it feasible on board the satellite?
Starting point is 00:53:06 And so our new algorithms that do something similar to what Al was explaining. They provide an acceleration capability in terms of recovering a 3D model of a satellite that I want to characterize compared instead with 3rd. traditional approaches. It's one of the example where obviously AI not just enhance what we could do in the past, but enable
Starting point is 00:53:31 new capabilities such as recovering a 3D model of an object from a single image, something that would be impossible with a classical computer vision technique, but thanks to a trained, a pre-trained AI model, it's something that can be done.
Starting point is 00:53:48 One thing that I was thinking of as you were describing some of these use cases: there are a lot of use cases today that feel kind of pedestrian around Earth, that seem like simple things humans do a lot of, but it gets more unique the farther away the edge gets, right? So like, you know, DJ, when you're talking about how we're going to deploy stuff
Starting point is 00:54:18 at the edge, it's like, well, the edge is still within a pretty fast communication time right now when we're just talking about Earth orbit or even the moon. But, you know, maybe the autonomous humans example was not so stupid from the NASA side of things, where, like, you deploy a couple of imaging satellites around Mars, in very unstable orbits, and they need to communicate with each other and do orbit determination and figure out, okay, this one should boost up a little higher to make sure we get the imagery. Edge gets so much more important when you're 30, 40, 60 light minutes away. So is there an aspect there of developing these capabilities and realizing that the stuff that's cool today is not going to be cool around Earth in 10 years, but it will be really cool around Mars in 10 years, even if it's the same level of capability?
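A rough back-of-the-envelope on that light-time point; the distances, the one-minute reaction requirement, and the five-second ground-processing allowance below are illustrative assumptions, not mission numbers:

```python
# Back-of-the-envelope only: round-trip light time to a few destinations, plus a notional
# ground-segment delay, versus a required on-board reaction time. Distances are rough
# representative values, not ephemeris data.
C_KM_S = 299_792.458

ONE_WAY_KM = {
    "LEO": 2_000,
    "GEO": 36_000,
    "Moon": 384_400,
    "Mars (close)": 56_000_000,
    "Mars (far)": 400_000_000,
}

def round_trip_delay_s(distance_km: float, ground_latency_s: float = 5.0) -> float:
    """Two-way light time plus an assumed ground-processing delay."""
    return 2.0 * distance_km / C_KM_S + ground_latency_s

def needs_onboard_autonomy(distance_km: float, required_reaction_s: float) -> bool:
    """If the loop through the ground is slower than the decision deadline,
    the decision has to be made at the edge, on board."""
    return round_trip_delay_s(distance_km) > required_reaction_s

if __name__ == "__main__":
    REACTION_S = 60.0  # say the spacecraft has to react within a minute
    for name, d in ONE_WAY_KM.items():
        print(f"{name:12s} round trip ~{round_trip_delay_s(d):8.1f} s, "
              f"on-board autonomy needed: {needs_onboard_autonomy(d, REACTION_S)}")
```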
Starting point is 00:54:59 Does this make any sense? Does anyone understand where I'm going here? Yeah, you're pushing it out further. Yeah, no, that makes sense. The farther out the edge is, the cooler it is, even if the use case is something that we would consider, oh, that was like 10 years ago. Well, I'd like to chime in here, because, you know, think about how come we are able to use AI chatbots or AI models on our laptops and on our phones: because they are thin clients.
Starting point is 00:55:30 And thanks to the communication infrastructure, to the network, we are able to use servers where these very heavy, multi-billion-neuron models can execute. Now, we can imagine a future where this will happen in near-Earth space, think of LEO, think of GEO, geostationary orbit, where, for example, data centers, which are a big topic in the news at the moment, can be those servers, and satellites will be able to run these decision-making AI models online. But when we get to the moon, when we get to Mars, when we get to the ocean of Europa, when we get to a rover, underwater vehicles, drones on the surface of other planets, exploring caves, exploring tunnels, you know, forget about that. You need that edge capability. Yeah, I would double emphasize that. And yeah, we take for granted a lot of things here. Everything, all our devices, are connected. If anybody doubts that, just check your router. How many devices are connected to your router at home? It'll list them out, and you'll say, no, I don't have 30.
Starting point is 00:56:41 30 devices connected, and your toaster is talking to your router half the time. So we take that for granted, and they're all computing. Our Teslas or EVs are connected and charging and conditioning their batteries, and all these things are happening. And yes, unfortunately, space is such an austere environment, especially if we have to land something, where mass is critically important and so forth. But I just want to remind people that there are situations here on Earth, in natural disasters, where connection is broken off, power is broken off. There are austere environments in, you know, national security, where you can't assume you're going to have communications and GPS and even human interaction and intervention, but you still need to operate.
Starting point is 00:57:33 Those are here, domestic, and now. And it makes you really appreciate things when you have a moment, a day without space, when you have a moment at home where you can't find your phone or the router goes out, and you're thinking, oh my god, how do I even map my way to the dinner tonight? I have no idea where that address is. There are basic things like that that we take for granted, that we are very accustomed to, and that, yes, feel like magic as you get farther away or separated from our infrastructure. And that is what they mean by a day without space: the human quality of life is dependent on the connectivity and the compute that is all around us. Well, this has been really cool, hanging out with y'all, and hopefully people enjoy the conversation. I thought it was awesome. Really good examples.
Starting point is 00:58:27 Interesting cross-section between everybody. But maybe before we head out, just a quick around-the-horn if there's something that you want to point people to. We'll go in the same order we started in. Simone, is there anything that you want people listening to go check out, follow along with, anything that you would point them to?
Starting point is 00:58:43 Well, check our website. All our projects are there. And if you are interested in any of these projects, I'm talking about slab.stanford.edu, shoot me an email, because, you know,
Starting point is 00:58:58 we are looking for collaborators, and so it would be very, very cool to work on all these topics together. And I've got all the links to the examples that we had in the show notes, where people can go and watch them full length and then jump off from there. But DJ, how about you for NVIDIA? What should people do if they're like, man, I've got to get some of that in my life? Yeah, there are two I would point to. We just had our GTC conference a week and a half ago, and we had a lightning talk around AI for space exploration. We had some of our great partners from Northrop and Blue Origin talking about how they're leveraging a lot of the technologies that we talked about here today. So I would go look up that talk.
Starting point is 00:59:35 I think that would be very interesting for some folks. And then we've also been a sponsor of the Frontier Development Lab out of Europe, and there's been a lot of great work there that leveraged the same technology. So I would look at some of the things that have taken place there. Those are two good starting points. Al, round us out. What's up with Redwire? Yeah, yeah.
Starting point is 00:59:51 Well, first of all, thanks for this opportunity. I think it's a great conversation; this topic excites me. First of all, for any early-career folks or students, you know, there are a lot of opportunities in the space industry. It's an exciting time. In fact, I'd say it is the most exciting time ever in the space industry right now. Whether you want to go to the moon, work on LEO habitats, go to Mars, it's all happening, and it's happening now. The other thing that I would say is Redwire has a great website.
Starting point is 01:00:20 If you want to look at that, or reach out to me on LinkedIn if you're really interested in a topic, please feel free to do that. We are out at conferences all the time and would welcome that, and it is very much a relationship industry. Part of it is what Anthony is doing here, connecting people and topics. So get out and get engaged. We're mostly friendly, I think. And for people who aren't in the space industry, whether you're a lawyer, an economist, a GPU developer, there's a little bit of everything in the space industry. So I'd welcome all of that. And again, this is one of the many exciting topics going on in the space industry right now.
Starting point is 01:01:00 And I appreciate you covering it. Yeah, it's been awesome. I have about 8,000 topics for each one of you; I'm probably going to email you separately and be like, can you come back and talk about this one very specific thing that we glanced over entirely today? So, definitely not enough time, but I appreciate it very much. And thanks again. Thank you, Anthony. Awesome.
Starting point is 01:01:17 Thanks for having us. Thanks again to everyone for coming on the show. It was an awesome conversation. I really had a good time talking with them. I think we went to interesting places. I hope you enjoyed it as well. Hope you enjoyed the visuals that we had there, the really cool examples. All those links are in the show notes, to the things we talked about and some of the things that we didn't,
Starting point is 01:01:34 but that were good reference material for building the conversation. So that's it. If you like this show, if you like what I'm doing, this is a 100% listener- and viewer-supported institution. Mainenginecutoff.com slash support is where to go to sign up and join the crew there. You can get access to MECO Headlines, which is a podcast I do every single week, running through all the stories in space that are worth keeping track of. I filter out all the news you don't need to know about and fill you in on the ones that you do need to know about. It's a great way to stay up on the news and support the show.
Starting point is 01:02:04 So this show was supported by 30, what do we got? 32 executive producers who made this episode possible. Thank you so much to Joonas, Russell, Donald, Stealth Julian, Pat, Fred, David, Lee, Frank, Josh from Impulse, Steve, Joel, Joakim, Matt, Natasha Tsakos, Tim Dodd the Everyday Astronaut, Kris, Theo and Violet, Heiko, Will and Lars from Agile, Jan, Warren, The Astrogators at SEE, Ryan, Better Every Day Studios, and four anonymous executive producers. Thank you all so much. If you've got any questions or thoughts, hit me up at anthony@mainenginecutoff.com. I guess comment here on YouTube as well.
Starting point is 01:02:40 Maybe we'll keep doing more of this. Let me know if you enjoyed this. Trying it out. We may never do it again. We may do it all the time. Who knows? It's really up to you and me. So let me know what you think.
Starting point is 01:02:49 And until next time, I'll talk to you soon.
