Lex Fridman Podcast - Keoki Jackson: Lockheed Martin
Episode Date: August 19, 2019. Keoki Jackson is the CTO of Lockheed Martin, a company that through its long history has created some of the most incredible engineering marvels that human beings have ever built, including planes that fly fast and undetected, defense systems that intercept threats that could take the lives of millions in the case of nuclear weapons, and spacecraft systems that venture out into space, the moon, Mars, and beyond with and without humans on board. This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on iTunes or support it on Patreon.
Transcript
The following is a conversation with Keoki Jackson.
He's the CTO of Lockheed Martin, a company that through its long history has created some of the most
incredible engineering marvels human beings have ever built, including planes that fly fast and
undetected, defense systems that intercept nuclear threats that could take the lives of millions,
and systems that venture out into space, the moon, Mars,
and beyond.
These days, more and more, artificial intelligence has an assistive role to play in these systems.
I've read several books in preparation for this conversation.
It is a difficult one, because in part, Lockheed Martin builds military systems that operate
in a complicated world
that often does not have easy solutions in the gray area between good and evil.
I hope one day this world will rid itself of war in all its forms.
But the path to achieving that in a world that does have evil is not obvious.
What is obvious is that good engineering and artificial
intelligence research have a role to play on the side of good. Lockheed Martin and the rest of
our community are hard at work on exactly this task. We talk about these and other important
topics in this conversation. Also, most certainly, both Keoki and I have a passion for space, us humans
venturing out toward the stars. We talk about that exciting future as well.
This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, give
it 5 stars on iTunes, support it on Patreon, or simply connect with me on Twitter
at Lex Fridman, spelled F-R-I-D-M-A-N. And now, here's my conversation with Keoki Jackson. I read several books on Lockheed Martin recently.
My favorite in particular is by Ben Rich, called Skunk Works, a personal memoir.
It gets a little edgy at times.
But from that I was reminded that the engineers of Lockheed Martin have created some of the most
incredible engineering marvels human beings have ever built throughout the 20th century and the 21st.
Do you remember a particular project or
system at Lockheed, or before that, say the Space Shuttle Columbia, that you were just in awe at the fact that
us humans could create something like this?
You know, that's a great question.
There's a lot of things that I could draw on there.
When you look at the Skunk Works and Ben Rich's book in particular, of course, it starts off
with basically the start of the jet age and the P-80.
I had the opportunity to sit next to one of the Apollo astronauts, Charlie Duke,
recently at dinner, and I said, Hey, what's your favorite aircraft?
And he said, well, it was by far the F-104 Starfighter, which was another aircraft that came out of
Lockheed there.
It was the first Mach 2 jet fighter aircraft. They called it the missile with a man in it.
So those are the kinds of things I grew up hearing stories about.
You know, of course the SR-71 is incomparable, as you know, kind of the epitome of
speed, altitude, and just the coolest-looking aircraft ever. So that's a plane, an intelligence, surveillance, and reconnaissance aircraft, that was
designed to be able to outrun, basically go faster than, any air defense system.
But I'll tell you, I'm a space junkie. That's why I came to MIT.
That's really what took me ultimately to Lockheed Martin.
And so Lockheed Martin, for example,
has been essentially at the heart of every planetary mission,
like all the Mars missions we've had a part in.
And we've talked a lot about the 50th anniversary
of Apollo here in the last couple of weeks, right?
But remember 1976, July 20th again,
National Space Day, so the landing of the Viking,
the Viking lander on the surface of Mars,
just a huge accomplishment.
And when I was a young engineer at Lockheed Martin,
I got to meet engineers who had designed,
various pieces of that mission as well.
So that's what I grew up on is these planetary missions,
the start of the space shuttle era
and ultimately had the opportunity
to see Lockheed Martin's part,
and we can maybe talk about some of these here,
in all of these space journeys over the years.
Do you dream, and I apologize for getting philosophical at times or sentimental,
I do romanticize the notion of space exploration.
So do you dream of the day when us humans colonize another planet like Mars or a man, a woman,
a human being steps on Mars?
Absolutely.
And that's a personal dream of mine.
I haven't given up yet on my own opportunity
to fly into space. But as you know, from the Lockheed Martin perspective, this is something
that we're working towards every day. And of course, you know, we're building the Orion
spacecraft, which is the most sophisticated human-rated spacecraft ever built. And it's
really designed for these deep space journeys, you know, starting with the moon, but ultimately going to Mars and being the platform, you know,
from a design perspective, we call the Mars Base Camp to be able to take humans to the
surface and then after a mission of a couple of weeks, bring them back up safely.
And so that is something I want to see happen during my time at Lockheed Martin.
So I'm pretty excited about that.
And I think once we prove that's possible, colonization
might be a little bit further out,
but it's something that I'd hope to see.
So maybe you can give a little bit of an overview. Lockheed Martin
partnered a few years ago with Boeing to work with the DOD
and NASA to build launch systems and rockets, the ULA.
What's beyond that?
What's Lockheed's mission, timeline and long-term dream in terms of space?
You mentioned the moon.
I've heard you talk about asteroids and Mars. What's the timeline, what are the engineering challenges, and what's
the dream long term?
Yeah, I think the dream long term is to have a permanent presence in space beyond low Earth
orbit, ultimately with a long-term presence on the moon and then on to the planets, to Mars.
And, sorry to interrupt you,
long-term presence means sustained
and sustainable presence, and an economy,
a space economy, that really goes alongside that.
With human beings and being able to launch perhaps
from those, so like hop.
You know, it's a lot of energy that goes in those hops, right?
So I think the first step is being able to get there and to be able to establish sustained
bases, right, and build from there.
And a lot of that means getting, as you know, things like the cost of launch down and
you mentioned United Launch Alliance.
And so I don't want to speak for ULA, but obviously they're working really hard to,
on their next generation of launch vehicles, to maintain that incredible mission success
record that ULA has, but ultimately continue to drive down the cost and make the flexibility
the speed and the access ever greater.
So what are the missions on the horizon that you can talk about?
Oh, I hope to get to the moon.
Absolutely.
I mean, I think you know this, or you may know this, there's a lot of ways to accomplish
some of these goals.
And so that's a lot of what's in discussion today.
But ultimately, the goal is to be able to establish a base, essentially, in
cislunar space, that would allow for ready transfer from orbit to the lunar
surface and back again. And so that's sort of that near-term, I say near-term, in
the next decade or so, vision, starting off with a stated objective by this
administration to get back to the moon in the 2024, 2025 time frame, which is right
around the corner here.
So what would be the engineering challenges in that?
I think the big challenge is not so much to go but to stay, right? And so we demonstrated in the 60s that you could
send somebody up to a couple of days of mission and bring them home again successfully.
Now we're talking about doing that at, I don't want to say an industrial scale,
but a sustained scale, right? So permanent habitation, you know, regular reuse of vehicles, the infrastructure to get things like fuel, air,
consumables, replacement parts, all the things that you need to sustain that kind of infrastructure.
So those are certainly engineering challenges.
There are budgetary challenges, and those are all things that we're going to
have to work through.
The other thing, and I shouldn't, I don't want to minimize this.
I mean, I'm excited about human exploration, but the reality is our technology and where
we've come over the last 40 years essentially has changed what we can do with robotic exploration as well.
And to me, it's incredibly thrilling. And this seems like old news now, but the fact that we have
rovers driving around the surface of Mars and sending back data is just incredible. The fact that we
have satellites in orbit around Mars that are collecting weather data, you know, they're looking at the
terrain, they're mapping,
all of these kinds of things on a continuous basis.
That's incredible.
And the fact that you got the time lag, of course,
going to the planets.
But you can effectively have virtual human presence there
in a way that we have never been able to do before.
And now, with the advent of even greater processing power,
better AI systems, better cognitive systems
and decision systems, you put that together
with the human piece.
And we really opened up the solar system
in a whole different way.
And I'll give you an example.
We've got OSIRIS-REx, which is a mission
to the asteroid Bennu.
So the spacecraft is out there right now
in basically a year-long mapping activity
to map the entire surface of that asteroid in great detail.
You know, all autonomously piloted, right?
But the idea then that, and this is not too far away,
it's gonna go in, it's got a sort of fancy vacuum cleaner with a bucket.
It's going to collect the sample off the asteroid
and then send it back here to Earth.
And so, you know, we have gone from sort of those
tentative steps in the 70s, you know, early landings,
views of the solar system.
So now we've sent spacecraft to Pluto.
We have gone to
and intercepted comets. We've brought Stardust, you know,
material back. So we've gone far, and there's incredible opportunity to
go even farther. So it seems quite crazy that this is even possible, that can you talk a little bit about
what it means to orbit an asteroid and with a bucket to try to pick up some soil samples.
Yeah, so part of it is, you know, these are the same kinds of techniques we use here on Earth for high-speed,
high-accuracy imagery, stitching these scenes together and creating essentially
high-accuracy world maps, right? And so that's what we're doing, obviously, on a
much smaller scale with an asteroid. But the other thing that's really
interesting, you put together sort of that neat control
and data and imagery problem.
But the stories around how we designed the collection, I mean, this is essentially, you know,
the sort of human ingenuity element, right?
You know, essentially, we had an engineer who one day, he's like, well,
starts messing around with parts,
a vacuum cleaner, a bucket, you know, maybe we could do something like this.
And that was what led to what we call the pogo stick collection, right?
Where basically it comes down, it's only there for seconds,
it does that collection, essentially blows the regolith material into the collection hopper, and off it goes.
It doesn't really land, almost. It's a very short landing.
Wow, that's incredible. So, and we'll talk a little bit more about space, what's the role of the human in all of this?
What are the challenges, what are the opportunities for humans as they
pilot these vehicles in space and for humans that may step foot on either the moon or Mars?
Yeah, it's a great question, because I have just been extolling the virtues of robotics and rovers, autonomous systems,
and those absolutely have a role.
I think the thing that we don't know how to replace today
is the ability to adapt on the fly to new information.
And I believe that will come, but we're not there yet.
There's a ways to go.
And so, you know, you think back to Apollo 13 and the ingenuity of the folks on the ground
and on the spacecraft, who essentially cobbled together a way to get the carbon dioxide scrubbers
to work.
Those are the kinds of things that ultimately, you know, I'd say not just from dealing with
anomalies, but you know, dealing with new information.
You see something, and rather than waiting 20 minutes or half an hour or an hour to try
to get information back and forth, you're able to essentially revector on the fly, collect different samples, take a different approach,
choose different areas to explore.
Those are the kinds of things that human presence enables, that's still a ways ahead of
us on the AI side.
Yeah, there's some interesting stuff we'll talk about on the teaming side here on Earth.
That's pretty cool to explore.
And in space.
So let's not leave the space piece out.
So what does AI and humans working together in space
look like?
Yeah, one of the things we're working on
is a system called Maya, which is, you can think of it,
it's an AI assistant.
An AI assistant in space, exactly.
And you can think of it as the Alexa in space, right? But this goes hand-in-hand
with a lot of other developments. And so in today's world, everything is essentially model-based,
model-based systems engineering to the actual digital tapestry that goes through the design,
the build, the manufacture, the testing, and ultimately the sustainment of these systems.
And so our vision is really that, you know, when our astronauts are there around Mars, you're
going to have that entire digital library of the spacecraft, of its operations, all the
test data and flight data from previous missions, to be able to look
and see if there are anomalous conditions, tell the humans, and potentially deal with that before
it becomes a bad situation, and help the astronauts work through those kinds of things. And it's not just
dealing with problems as they come up, but also offering
up opportunities for additional exploration capability, for example.
So that's the vision, that these are going to take the best of the human, to respond
to changing circumstances and rely on the best of AI capabilities to monitor these,
this almost infinite number of data points and correlations
of data points that humans frankly aren't that good at.
So how do you develop systems in space like
this, whether it's an Alexa in space or, in general, any kind of control systems, any kind of
intelligent systems, when you can't really test stuff too much out in space?
It's very expensive to test stuff. So how do you develop such systems?
Yeah, that's the beauty of this digital twin, if you will. And of course, with Lockheed Martin,
we've over the past five plus decades been refining our knowledge of the space environment, of how materials behave,
dynamics, the controls, the, you know, radiation environments, all of these kinds of things. So we're
able to create very sophisticated models, not perfect, but they're very good. And so you can actually do a lot. I spent part of my career, you know, simulating
communication spacecraft, you know, missile warning spacecraft, GPS spacecraft,
in all kinds of scenarios and all kinds of environments. So this is really just taking that to
the next level. The interesting thing is that now you're bringing into that loop a system, depending on how it's developed, that may be
non-deterministic, it may be learning as it goes. In fact, we anticipate that it will be learning as it
goes. And so that brings a whole new level of interest, I guess, into how do you do verification and
validation of these non-deterministic learning systems in scenarios
that may go out of the bounds or the envelope that you have initially designed to?
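(As an editorial aside: one common way to approach that verification question is scenario-based Monte Carlo testing. The sketch below is purely illustrative, with a made-up, non-deterministic toy controller and a toy safety envelope; it is not Lockheed Martin's actual verification tooling, just a minimal picture of the idea of sampling scenarios inside and outside the design envelope and checking a safety property.)

```python
import random

def controller(state):
    # Hypothetical learned, non-deterministic controller: returns a commanded
    # acceleration with stochastic variation (stands in for a learning system).
    return -0.5 * state["velocity"] + random.gauss(0.0, 0.1)

def run_scenario(initial_velocity, steps=100, dt=0.1, velocity_limit=15.0):
    # Propagate a toy one-dimensional dynamics model and check that the
    # safety envelope (here, a velocity limit) is never violated.
    state = {"velocity": initial_velocity}
    for _ in range(steps):
        state["velocity"] += controller(state) * dt
        if abs(state["velocity"]) > velocity_limit:
            return False  # envelope violated
    return True

# Monte Carlo campaign: sample initial conditions both inside and
# deliberately outside the nominal design envelope of +/- 10 m/s.
results = [run_scenario(random.uniform(-20.0, 20.0)) for _ in range(10000)]
print(f"envelope satisfied in {sum(results)} of {len(results)} runs")
```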
So this system, in its intelligence, has some of the same
complexity as a human does, and it learns over time, it's unpredictable in certain kinds
of ways. So you also have to model that when you're thinking about it.
So in your view, it's possible to model the majority of situations, the important
aspects of situations here on Earth and in space, enough to test stuff?
Yeah, this is really an active area of research.
And we're actually funding university research in a variety of places including MIT.
This is in the realm of trust and verification and validation of, I'd say, autonomous systems
in general and then as a subset of that autonomous systems that incorporate artificial intelligence
capabilities.
And this is not an easy problem. We're working with startup companies. We've
got internal R&D, but our conviction is that autonomy and more and more AI-enabled autonomy
is going to be in everything that Lockheed Martin develops and fields. And it's going to
be retrofit into existing systems. It's going to be part of the
design for all of our future systems. And so maybe I should take a step back and say the way we
define autonomy. So we talk about autonomy as essentially a system that composes, selects, and then executes decisions with varying levels of human intervention.
And so you could think of no autonomy. So this is essentially the human doing the task.
You can think of effectively partial autonomy where the human is in the loop. So making decisions in every case about what the autonomous system can do.
Either in the cockpit or remotely.
Or remotely, exactly, but still in that control loop.
Then there's what you'd call supervisory autonomy.
The autonomous system is doing most of the work.
The human can intervene to stop it or to change the direction.
And then ultimately, full autonomy,
where the human is off the loop all together.
And for different types of missions,
you want to have different levels of autonomy.
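(As an editorial aside, that spectrum can be made concrete with a small sketch. The level names below paraphrase the description in the conversation and are not an official Lockheed Martin taxonomy.)

```python
from enum import Enum

class AutonomyLevel(Enum):
    NONE = "human performs the task directly"
    HUMAN_IN_THE_LOOP = "human approves each decision the system proposes"
    SUPERVISORY = "system acts on its own; human can intervene or redirect"
    FULL = "human is off the loop entirely"

def requires_human_approval(level: AutonomyLevel) -> bool:
    # With no autonomy or partial autonomy, the human stays in the decision
    # loop for every action; supervisory and full autonomy do not require it.
    return level in (AutonomyLevel.NONE, AutonomyLevel.HUMAN_IN_THE_LOOP)
```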
So now take that spectrum and this conviction
that autonomy and more and more AI
are in everything that we develop.
The kinds of things that Lockheed Martin does, a lot of times are safety of life
critical kinds of missions. Think about aircraft, for example. And so we require and our customers
require an extremely high level of confidence. One, that we're going to protect life, two, that these systems will behave in ways that
their operators can understand.
And so this gets into that whole field.
Again, being able to verify and validate that the systems have been, that they will operate
the way they're designed and the way they're expected.
And furthermore, they will do that in ways that can be explained and understood.
And that is an extremely difficult challenge.
Yeah, so here's a difficult question.
I don't mean to bring this up, but I think it's a good case that people are familiar with: the Boeing 737 MAX commercial airplane has had two recent crashes
where their flight control software system failed. And it's software, so I don't mean to speak about Boeing, but broadly speaking,
we have this in the autonomous vehicle space too, say with autonomous vehicles, we have millions of lines of code, software
making decisions.
There is a little bit of a clash of cultures, because software engineers often don't have the same culture
of safety that people who build systems like at Lockheed Martin do, where it has to be
exceptionally safe, you have to test this stuff.
So how do we get this right
when software is making so many decisions?
Yeah, and this, there's a lot of things that have to happen.
And by and large, I think it starts with the culture,
which is not necessarily something that A is taught in school
or B is something that would come, you know, depending
on what kind of software you're developing, it may not be relevant, right? If you're targeting
ads or something like that. So, and by and large, say, not just Lockheed Martin, but certainly
the aerospace industry as a whole has developed a culture that does focus on safety, safety of life, operational
safety, mission success.
But as you note, these systems have gotten incredibly complex.
And so they're to the point where it's almost impossible, you know, the state space has
become so huge that it's impossible, or very difficult, to do a systematic verification across the entire
set of potential ways that an aircraft could be flown, all the conditions that could
happen, all the potential failure scenarios.
Now maybe that's soluble one day, maybe when we have our quantum computers at our fingertips
we'll be able to actually simulate across an entire, you know,
almost infinite state space. But today, you know, there's a lot of work to really try
to bound the system, to make sure that it behaves in predictable ways, and then have this culture of continuous inquiry and skepticism
and questioning to say, did we really consider the right realm of possibilities?
Have we done the right range of testing?
Do we really understand, in this case, human and machine interactions, the human decision
process alongside the machine processes.
And so that's that culture that we call it the culture of mission success at Lockheed
Martin that really needs to be established.
And it's not something, you know, it's something that people learn by living in it.
And it's something that has to be promulgated, you know, and it's done, you know, from
the highest level.
At a company like Lockheed Martin.
Yeah, and the same is being faced in certain autonomous
vehicle companies, where that culture is not there because they were started mostly by software
engineers. So that's what they're struggling with.
Are there lessons that you think we should learn as an industry and a society from the Boeing 737 MAX crashes?
These crashes obviously are tremendous tragedies, tragedies for all of the people,
the crew, the families, the passengers, the people on the ground and all.
And it's also a huge business and economic setback as well.
I mean, we've seen that it's impacting essentially
the trade balance of the US.
So these are important questions.
And these are the kinds of,
we've seen similar kinds of questioning at times.
You go back to the Challenger accident.
And it is, I think, always important to remind ourselves that humans are fallible, that
the systems we create, as perfect as we strive to make them,
we can always make them better.
And so another element of that culture of mission success is really that commitment to
continuous improvement.
If there's something that goes wrong, a real commitment to root cause,
and true root cause understanding to taking the corrective actions and to making the system,
future systems better. And certainly we strive for, you know, no accidents. And if you look at
the record of the commercial airline industry as a whole,
and the commercial aircraft industry as a whole, you know, there's a very nice decaying exponential
to years now where we have no commercial aircraft accidents at all, right? Yeah.
Or fatal accidents at all. So that didn't happen by accident.
It was through the regulatory agencies, FAA,
the airframe manufacturers, really working on a system
to identify root causes and drive them out.
So maybe we can take a step back, and many people
are familiar, but broadly, what kinds of categories of systems
is Lockheed Martin involved in building?
Lockheed Martin, we think of ourselves as a company that solves hard mission problems.
And the output of that might be an airplane or spacecraft or a helicopter or radar or something like that,
but ultimately we're driven by,
you know, what does our customer need?
What is that mission that they need to achieve?
And so that's what drove the SR-71, right?
How do you get pictures of a place
where you've got sophisticated air defense systems
that are capable of handling any aircraft
that was out there at the time, right?
So what you did with the SR-71 was build a nice flying camera.
Exactly, and make sure it gets out and it gets back, right? And that led ultimately to really the start of the space program in the US as well.
So now take a step back to Lockheed Martin of today. And we are on the order of 105 years old now
between Lockheed and Martin, the two big heritage companies, and of course, we're made up of a
whole bunch of other companies that came in as well, General Dynamics, you know, kind of go
down the list. Today, you can think of us in this space of solving mission problems.
So obviously on the aircraft side,
tactical aircraft building the most advanced fighter aircraft
that the world has ever seen.
We're up to now several hundred of those delivered,
building almost a hundred a year.
And of course working on the things that come after that.
On the space side, we are engaged
in pretty much every venue of space utilization
and exploration you can imagine. So I mentioned things like navigation and timing GPS, communication
satellites, missile warning satellites. We've built commercial surveillance satellites,
we've built commercial communication satellites.
We do civil space, so everything from human exploration
to the robotic exploration and the outer planets,
and keep going on the space front.
But a couple other areas I'd like to put out,
we're heavily engaged in building
critical defensive systems.
And so a couple that I'll mention, the Aegis Combat System, this is basically the integrated
air and missile defense system for the U.S. and allied fleets.
And so protects carrier strike groups, for example, from incoming ballistic missile
threats, aircraft threats, cruise missile threats,
and you know, kind of go down the list.
So the carriers, the fleet itself, is the thing that is
being protected? The carriers aren't serving as a protection for something else?
Well, that's a little bit of a different application. We've actually built a version called Aegis Ashore,
which is now deployed in a couple of places around the world.
So that same technology, I mean, basically it can be used to protect either an ocean going
fleet or a land-based activity.
Another one, the THAAD program.
So THAAD, this is the theater high altitude area defense. This is to protect relatively broad areas against sophisticated
ballistic missile threats. Now it's deployed with a lot of US capabilities. Now we have
international customers that are looking to buy that capability as well. And so these are systems that defend,
not just defend military and military capabilities,
but defend population areas.
You know, we saw, you know, maybe the first public use of these
back in the first Gulf War with the Patriot systems.
And these are, these are the kinds of things
that Lockheed Martin delivers.
And there's a lot of stuff that goes with it.
So think about the radar systems and the sensing systems
that queue these, the command and control systems
that decide how you pair a weapon against an incoming threat.
And then all the human and machine interfaces
to make sure that it can be operated successfully
in very strenuous environments.
Yeah, there's some incredible engineering
at every front, like you said.
So maybe if we just take a look at Lockheed history broadly,
maybe even looking at skunk works,
what are the biggest, most impressive milestones of innovation?
So if you look at stealth, I would have called you crazy
if you said that's possible at the time.
And supersonic and hypersonic.
So first of all, traveling at the speed of sound
is pretty damn fast.
And supersonic
and hypersonic, 3, 4, 5 times the speed of sound, that seems, I would also call you crazy
if you said you can do that. So can you tell me how it's possible to do these kinds of
things, and are there other milestones in innovation that's going on that you can talk about?
Yeah. Well, let me start, you know, on the Skunk Works saga, and you kind of alluded to it in the beginning.
I mean, Skunk Works is as much an idea as a place, and so it's driven
really by Kelly Johnson's 14 principles, and I'm not going to list all 14 of them off.
But the idea and this I'm sure will resonate with any engineer who's worked on a highly motivated small team before.
The idea that if you can essentially have a small team of very capable people who want to work on
really hard problems, you can do almost anything. Especially if you kind of shield them from
bureaucratic influences. If you create very tight relationships
with your customer so that you have that team
and shared vision with the customer.
Those are the kinds of things
that enable the skunkworks to do these incredible things.
And we listed off a number, and you brought up stealth.
And I mean, this whole, I wish I could have seen Ben Rich
with a ball bearing rolling across the desk
to a general officer and saying,
would you like to have an aircraft
that has the radar cross section of this ball bearing?
Probably one of the least expensive
and most effective marketing campaigns
in the history of the industry.
So just for people, not familiar,
I mean, the way you detect aircraft,
so I mean, I'm sure there's a lot of ways,
but radar, for the longest time,
there's a big blob that appears in the radar.
How do you make a plane disappear
so it looks as big as a ball bearing?
What's involved technology-wise there? What's broadly sort of
the stuff you could speak about? I'll stick to what's in Ben Rich's book, but obviously the geometry
of how radar gets reflected, and the kinds of materials that either reflect or absorb, are a couple of
the critical elements there. I mean, it's a cat and mouse game, right?
I mean, you know, radars get better,
stealth capabilities get better.
And so it's a really game of continuous improvement
and innovation there.
I'll leave it at that.
Yeah, so the idea that something is essentially invisible
is quite fascinating.
But the other one is flying fast. So the speed
of sound is 750, 760 miles an hour. So supersonic is, you know, Mach 3, something like that.
Yeah, we talk about supersonic, obviously, and we kind of talk about that as that realm from Mach 1 up through about Mach 5. And then hypersonic, so high supersonic speeds would be past Mach 5.
And you got to remember Lockheed Martin and actually other companies have been
involved in hypersonic development since the late 60s.
You know, you think of everything from the X-15 to the space shuttle as
examples of that.
I think the difference now is if you look around the world, particularly the threat environment
that we're in today, you're starting to see, you know, publicly, folks like the Russians and the
Chinese saying they have hypersonic weapons capability that could threaten US and allied capabilities
and also basically the claims are they could get around defensive systems that are out there
today.
And so there's a real sense of urgency.
You hear it from folks like the Undersecretary of Defense for Research and Engineering, Dr.
Mike Griffin, and others in the Department of Defense that hypersonics is something that's
really important to the nation in terms of both parity, but also defensive capabilities.
And so that's something that, you know, we're pleased
Lockheed Martin has had a heritage in. We've invested R&D dollars on our side for many years.
And we have a number of things going on with various US government customers in that field today that we're very excited about.
So I would anticipate we'll be hearing more about that in the future from our customers. And I actually haven't read much about this.
Probably you can't talk about much of it at all, but on the defensive side, it's a fascinating
problem of perception of trying to detect things that are really hard to see.
Can you comment on how hard that problem is and how hard is it to stay ahead, even if
we go back a few decades, stay ahead of the competition?
Well, maybe I'd, again, you got to think of these as ongoing capability developments.
And so think back to the early days of missile defense.
So this would be in the 80s, the SDI program.
And in that time frame, we proved, Lockheed Martin proved, that you could
hit a bullet with a bullet, essentially, which is something that had never been done
before, to take out an incoming ballistic missile. And so that's led to these incredible
hit-to-kill kinds of capabilities. PAC-3, that's the Patriot Advanced Capability-3 that Lockheed Martin builds, the THAAD system that I talked about.
So now hypersonics,
they're different from ballistic systems, and so we got to take the next step in defensive capability.
I'll leave it there.
I can only imagine.
Now let me just comment, sort of as an engineer,
it's sad to know that so much that Lockheed has done
in the past, or today, is classified
and shrouded in secrecy.
It has to be, by the nature of the application.
So, like what we do here at MIT, we'd like to inspire young
engineers, young scientists, and yet in the Lockheed case, some of that engineering has to stay quiet.
How do you think about that? How does that make you feel? Is there a
future where more can be shown, or is it just the nature of this world that it has to remain secret?
It's a good question. I think the public, including students who may be in grade school, high school, college today, can see enough to understand the
kinds of really hard problems that we work on.
And I mean, look at the F-35, right?
And, you know, obviously a lot of the detailed performance levels are sensitive and controlled.
But, you know, we can talk about what an incredible aircraft this is, you know, a supersonic, supercruise
kind of a fighter, with, you know, stealth capabilities.
It's a flying information, you know, system in the sky with data fusion, sensor fusion capabilities
that have never been seen before.
So these are the kinds of things that I believe, you know, these are the kinds of things that
got me excited
when I was a student.
I think these still inspire students today.
And the other thing, I mean, people are inspired by space.
People are inspired by aircraft.
Our employees are also inspired by that sense of mission.
And I'll just give you an example.
I had the privilege
to work on and lead our GPS programs for some time. And that was a case where I actually worked on a
program that touches billions of people every day. And so when I said I worked on GPS, everybody knew
what I was talking about, even though they didn't maybe appreciate the technical challenges that went into that.
But I'll tell you, I got a briefing one time from a major in the Air Force.
And he said, I go by the call sign Gimp: GPS is my passion.
I love GPS. And he was involved in the operational test of the system. He said,
I was in Iraq, and I was on a helicopter, a Black Hawk helicopter, and it was bringing back
a sergeant and a handful of troops from a deployed location. He said, my job is GPS, so I asked that sergeant, who's beaten down and kind of half asleep,
and I said, what do you think about GPS?
And he brightened up, his eyes lit up,
and he said, well, GPS, that brings me and my troops home every day.
I love GPS.
And that's the kind of story where it's like, OK, I'm really
making a difference here in the kind of work.
So that mission piece is really important.
The last thing I'll say is, and this gets to some of these questions around advanced
technologies.
It's not, you know, they're not just airplanes and spacecraft anymore.
For people who are excited about advanced software capabilities, about AI, about bringing
machine learning, these are the things that we're doing to
exponentially increase the mission capabilities that go on those platforms. And those are the
kinds of things that I think are more and more visible to the public. Yeah, I think autonomy,
especially in flight, is super exciting.
Do you see a day, here we go, back into philosophy, in the future when most fighter jets will be highly autonomous to a degree where a human doesn't need to be in the cockpit in almost all cases?
Well, I mean, that's a world that to a certain extent we're in today.
Now, these are remotely piloted aircraft to be sure,
but we have hundreds of thousands of flight hours a year now
in remotely piloted aircraft.
And then if you take the F-35,
there are huge layers, I guess,
and levels of autonomy built into that aircraft
so that the pilot is essentially
more of a mission manager rather than doing the, you know, the second-to-second elements
of flying the aircraft. So in some ways it's the easiest aircraft in the world to fly.
And kind of a funny story on that. So I don't know if you know how aircraft carrier landings work, but basically there's what's
called a tailhook and it catches wires on the deck of the carrier.
That's what brings the aircraft to a screeching halt.
There's typically three of these wires.
If you miss the first one or the second one, you catch the next one. And we got a little criticism, I don't know how
true this story is, but we got a little criticism: the F-35 is so perfect, it always gets the second
wire. We're wearing out the wire because it always hits that one. But that's the kind of autonomy
that just essentially up-levels what the human is doing to more of that mission
manager.
So much of that landing by the F-35 is autonomous?
Well, it's just, you know, the control systems are such that you really have dialed out
the variability that comes with all the environmental conditions.
Sure.
So my point is, to a certain extent, that world is here today.
Do I think that we're going to see a day anytime soon when there are no humans in the cockpit?
I don't believe that, but I do think we're going to see much more human-machine teaming, and we're going to see that much more at the tactical edge.
And we did a demo. You asked about what the Skunk Works is doing these days,
and so this is something I can talk about.
We did a demo with the Air Force Research Laboratory.
We called it Have Raider, using an F-16 as an autonomous wingman, and we demonstrated all kinds of maneuvers and various mission scenarios with the autonomous F-16 being that so-called loyal or trusted wingman.
And so those are the kinds of things that, you know,
we've shown what is possible now,
given that you've upleveled that pilot
to be a mission manager,
now they can control multiple other aircraft
that can, almost as extensions of your own aircraft,
fly alongside with you.
So that's another example of how this is really coming to fruition.
And then I mentioned the landings, but think about just the implications for humans and flight
safety.
And this goes a little bit back to the discussion we were having about how do you continuously
improve the level of safety through automation while working
through the complexities that automation introduces.
So one of the challenges that you have in high performance, fighter aircraft is what's
called G-LOCK.
So this is G-induced loss of consciousness.
So you pull 9 Gs, you're wearing a pressure suit.
That's not enough to keep the blood going to your brain.
You black out. Right.
And of course, that's bad.
If you happen to be flying low, near the deck, in an obstacle or terrain environment.
And so we developed a system at our aeronautics division called AutoGCAS, so autonomous ground
collision avoidance system.
And we built that into the F-16.
It's actually saved seven aircraft, eight pilots already.
in the relatively short time it's been deployed.
It was so successful that the Air Force said,
hey, we need to have this in the F-35 right away.
So we've actually done testing in that now on the F-35.
And we've also integrated an autonomous air collision avoidance system,
so think the air-to-air problem.
Now it's the integrated collision avoidance system,
but these are the kinds of capabilities.
I wouldn't call them AI.
They're very sophisticated models of
the aircraft dynamics, coupled with the terrain models, to be able to predict when,
essentially, the pilot is doing something that is going to take the aircraft into the terrain, or the pilot's not
doing something, in this case. But it just gives you an example of how autonomy can be really a
lifesaver in today's world.
It's like automatic emergency braking in cars.
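(As an editorial aside, the core idea behind an automatic ground collision avoidance check can be sketched roughly as below: predict the aircraft's short-term trajectory against a terrain model and trigger a recovery only as a last resort when the pilot is unresponsive. This is a simplified illustration with made-up names and numbers, not the actual Auto GCAS algorithm.)

```python
def predict_altitudes(altitude_m, vertical_speed_mps, horizon_s=5.0, dt_s=0.5):
    # Very simple constant-rate trajectory prediction over a short horizon.
    steps = int(horizon_s / dt_s)
    return [altitude_m + vertical_speed_mps * dt_s * (i + 1) for i in range(steps)]

def ground_collision_imminent(altitude_m, vertical_speed_mps, terrain_elevation_m,
                              safety_margin_m=150.0):
    # Trigger only if any predicted altitude dips below terrain plus a margin.
    return any(h < terrain_elevation_m + safety_margin_m
               for h in predict_altitudes(altitude_m, vertical_speed_mps))

def last_resort_step(altitude_m, vertical_speed_mps, terrain_elevation_m,
                     pilot_responding):
    # System of last resort: take over only when the pilot is not responding
    # and the predicted trajectory intersects the terrain model.
    if not pilot_responding and ground_collision_imminent(
            altitude_m, vertical_speed_mps, terrain_elevation_m):
        return "execute automatic recovery maneuver"
    return "pilot retains control"

# Example: descending fast toward terrain at 1,000 m with an unresponsive pilot.
print(last_resort_step(1000.0, -200.0, 400.0, pilot_responding=False))
```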
But is there any exploration of perception, of, for example, detecting G-LOC, that the
pilot is out, so as opposed to perceiving the external environment to infer that the
pilot is out, actually perceiving the pilot directly?
Yeah, this is one of those cases where you'd like to not take action if you think the pilot's
there.
It's almost like systems that try to detect if a driver's falling asleep on the road,
with limited success.
So this is what I call the system of last resort, where if the aircraft has determined that
it's going into the terrain
get it out of there. And this is not something that we're just doing in the aircraft world.
I wanted to highlight, we have a technology we call Matrix, and this is developed at
Sikorsky Innovations. The whole idea there is what we call optimal piloting. So not optional piloting or unpiloted, but optimal piloting.
It's an FAA-certified system, so you have a high degree of confidence. It's generally
pretty deterministic, so we know what it'll do in different situations, but it's effectively able to fly a mission with two pilots, one pilot, no pilots.
And you can think of it almost as like a dial of the level of autonomy that you want,
but able, so it's running in the background at all times, and able to pick up tasks,
whether it's, you know, sort of autopilot kinds of tasks or more sophisticated path planning kinds of
activities, to be able to do things like, for example, land on an oil rig in the North Sea
in bad weather, zero-zero conditions. And you can imagine, of course, there's a lot of military
utility to a capability like that. You could have an aircraft that you want to send out for a crewed mission,
but then at night, if you want to use it to deliver supplies in an unmanned mode, that could
be done as well. And so there's clear advantages there. But think about, on the commercial side,
you know, if you've got an aircraft taking people out, you're going to fly out to this oil rig. If you get out there
and you can't land,
then you've got to bring all those people back, reschedule another flight, pay the overtime for the
crew that you just brought back because they didn't get where they were going, pay the overtime
for the folks that are out there on the oil rig. This is real economics. These are dollars and
cents kinds of advantages that we're bringing in the commercial world as well.
So, this is a difficult question from the AI space that I would love it if you're able
to comment on.
So, a lot of this autonomy and AI you've mentioned just now has this empowering effect.
One is the last resort.
It keeps you safe.
The other is, with the teaming and in general, assistive AI.
And I think there's always a race.
So the world is full of, the world is complex.
It's full of bad actors.
So there's often a race to make sure
that we keep this country safe, right?
But with AI, there is a concern about a slightly different race.
There's a lot of people in the AI space
that are concerned about the AI arms race,
that as opposed to the United States, you know,
having the best technology
and therefore keeping us safe,
we lose the ability to keep control of it.
So the AI arms race getting away from all of us humans.
So do you share this worry?
Do you share this concern, when we're talking
about military applications, that too much control
and decision-making capability is given to software, to AI?
Well, I don't see it happening today. In fact, this is something from a policy perspective.
It's obviously a very dynamic space, but the Department of Defense has put quite a bit of thought
into that. And maybe before talking about the policy, I'll just talk about some of the
why. And you alluded to it being sort of a complicated and a little bit scary world out there. But
there's some big things happening today. You hear a lot of talk now about a return to great
powers competition, particularly around China and Russia with the US, but there are some other big players out there as well.
And what we've seen is the deployment of some very, I'd say, concerning new weapon systems,
particularly with Russia and breaching some of the IRBM, intermediate range ballistic
missile treaties. That's been in the news a lot.
You know the building of islands,
artificial islands in the South China Sea by the Chinese and then arming those islands.
The annexation of Crimea
by Russia, the invasion of Ukraine. So there's some pretty scary things. And then you add on top of that,
the North Korean threat has certainly not gone away. There's a lot going on in the Middle East
with Iran in particular. And we see this global terrorism threat has not abated, right? So there are
a lot of reasons to look for technology to assist with those problems, whether it's
AI or other technologies like hypersonics, which we discussed.
So now, let me give just a couple of hypotheticals.
So people react sort of in the second time frame, right?
From a photon hitting your eye to, you know, a movement is, you know, on the order of a few tenths
of a second kinds of processing time.
Roughly speaking, you know, computers are operating
in the nanosecond time scale, right?
So just to bring home what that means,
a nanosecond to a second is like a second to 32 years. So seconds
on the battlefield, in that sense, literally are lifetimes. And so if you can bring an autonomous
or AI-enabled capability that will enable the human to shrink, maybe you've heard the term, the OODA loop. So this whole idea that a typical battlefield decision
is characterized by observe.
So information comes in, orient.
How does that, what does that mean in the context?
Decide, what do I do about it?
And then act, take that action.
If you can use these capabilities
to compress
that OODA loop, to stay inside what your adversary is doing, that's an incredibly powerful force
on the battlefield.
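(As an editorial aside, the arithmetic behind that nanosecond analogy checks out: a second contains a billion nanoseconds, and a billion seconds is roughly 32 years.)

$$
\frac{1\,\mathrm{s}}{1\,\mathrm{ns}} = 10^{9},
\qquad
\frac{10^{9}\,\mathrm{s}}{3.15\times 10^{7}\,\mathrm{s/yr}} \approx 31.7\ \mathrm{years}.
$$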
That's a really nice way to put it. The role of AI and computing in general has a lot to
offer in just decreasing that from 32 years to one second, as opposed to, on the scale of seconds,
minutes, and hours, making decisions that humans are better at making.
And it actually goes the other way too, so that's on the short time scale. So humans kind of work in
the, you know, one second, two seconds, to eight hours range. After eight hours, you get tired, you know,
you gotta go to the bathroom, whatever the case might be. So there's this whole range of other things.
Think about surveillance and guarding facilities.
Think about moving material, logistics, sustainment.
A lot of these, what they call dull, dirty and dangerous things, that you need to have sustained
activity, but it's sort of beyond the length of time that a human can practically do as well.
So there's this range of things that are critical in military and defense applications that
AI and autonomy are particularly well suited to.
Now the interesting question that you brought up is, okay, how do you make sure that stays
within human control?
And that was the context for the policy.
And so there is a DOD directive called 3000.09 because that's the way we name stuff in
this world.
But it's well worth reading.
It's only a couple pages long, but it makes some key points. And it's really around making sure that there's
human agency and control over use of semi-autonomous
and autonomous weapon systems, making sure
that these systems are tested, verified, and evaluated
in realistic, real world type scenarios, making sure that the people
are actually trained on how to use them, making sure that the systems have human machine
interfaces that can show what state they're in and what kinds of decisions they're making,
making sure that you establish doctrine and tactics and techniques and procedures for
the use of these kinds of systems.
And so, and by the way, I mean, none of this is easy, but I'm just trying to lay kind
of the picture of how the U.S. has said, this is the way we're going to treat AI and autonomous
systems, that it's not a free for all.
And like there are rules of war and rules of engagement with other kinds of systems,
think chemical weapons, biological weapons, we need to think about the same sorts of implications.
And this is something that's really important for Lockheed Martin.
And obviously, we are 100% complying with our customer and the policies and regulations.
But I mean, AI is an incredible enabler, say within the walls of Lockheed Martin, in
terms of improving production efficiency, helping engineers do generative design,
improving logistics, driving down energy costs.
I mean, there are so many applications.
But we're also very interested in some of the elements of ethical
application within Lockheed Martin. So we need to make sure that things like privacy
are taken care of, that we do everything we can to drive out bias in AI-enabled
kinds of systems, and that we make sure that humans are involved in decisions, that we're not just delegating accountability to algorithms.
And so for us, you know, it all comes back, I talked about culture before, and it comes back to sort of the Lockheed Martin culture and our core values.
And so it's pretty simple for us: do what's right, respect others, perform with excellence.
And now, how do we tie that back to the ethical principles
that will govern how AI is used within Lockheed Martin?
And we actually have, you might not know this,
but there are actually awards for ethics programs.
Lockheed Martin's had a recognized ethics program
for many years.
And this is one of the things that our ethics team
is working with our engineering team on.
One of the miracles to me, perhaps to the layman, again, I was born in the Soviet Union, so I have echoes, at least in my family history, of World War II and the Cold War. Do you
have a sense of why human civilization has not destroyed itself through nuclear war, so, nuclear deterrence?
And thinking about the future,
does technology have a role to play here,
and what does the long-term future
of nuclear deterrence look like?
Yeah, this is one of those hard questions.
And I should note that Lockheed Martin is both proud and privileged
to play a part in multiple legs of our nuclear and strategic deterrent systems like the
Trident submarine-launched ballistic missiles. You know, you talk about, you know, is there
still a possibility that the human race could destroy itself? I'd say
that possibility is real, but interestingly, in some sense, I think the strategic deterrents
have prevented the kinds of incredibly destructive world wars that we saw in the first half of
the 20th century. Now, things have gotten more complicated since that time and since the Cold War.
It is more of a multipolar, great powers world today.
Just to give you an example, back then, you know, there were, you know, in the Cold War time frame,
just a handful of nations that had ballistic missile capability.
By last count, and this is a few years old, there's over 70 nations today that have that, similar kinds of
numbers in terms of space-based capabilities.
So the world has gotten more complex and more challenging, and the
threats, I think, have proliferated in ways that we didn't expect. You know, the nation today is in the middle of a recapitalization of our strategic deterrent.
I look at that as one of the most important things that our nation can do.
What is involved in deterrence? Is it being ready to attack or is it the defensive systems that catch attacks?
A little bit of both.
So it's a complicated, game-theoretical kind of problem.
But ultimately, we are trying to prevent the use of any of these weapons. And the theory behind prevention is that even if an adversary
uses a weapon against you, you have the capability to essentially strike back and do harm to them
that's unacceptable. And so that will deter them from making use of these weapon systems.
The deterrence calculus has changed, of course, with more nations now having these kinds of weapons.
But I think, from my perspective,
it's very important to maintain a strategic deterrent.
You have to have systems that you know will work when they're required to work.
And you know that they have to be adaptable to a variety of different scenarios in today's world.
And so that's what this recapitalization is about: taking systems that were built
in previous decades and making sure that they are appropriate, not just for today, but for the decades to come.
So the other thing I'd really like to note is strategic deterrence has a very different
character today. We used to think of weapons of mass destruction in terms of nuclear,
chemical, biological. And today we have a cyber threat. We've seen examples of the use
of cyber weaponry. And if you think about the possibilities of using cyber capabilities or an
adversary attacking the US to take out things like critical infrastructure, electrical grids, water systems.
Those are scenarios that are strategic in nature to the survival of a nation as well.
So that is the kind of world that we live in today.
Part of my hope on this is that we can also develop technical or technological
systems, perhaps enabled by AI and autonomy, that will allow us to contain and to fight
back against these kinds of new threats that were not conceived when we first developed
our strategic deterrence.
Yeah, I know that Lockheed is involved in cyber.
So I saw that you mentioned that.
It's incredible, nuclear almost seems easier than cyber, because there's so many attack,
like there's so many ways that cyber can evolve in such an uncertain future.
But talking about engineering with a mission, I mean, in this case, you're engineering systems
that basically save the world.
Like I said, we're privileged to work on some very challenging problems for very
critical customers here in the US and with our allies abroad as well.
Lockheed builds both military and non-military systems.
And perhaps the future Lockheed may be more in non-military applications if you talk about
space and beyond.
I say that as a preface to a difficult question.
So President Eisenhower in 1961, in his farewell address talked about the military industrial complex,
and that it shouldn't grow beyond what is needed. So what are your thoughts on those words
on the military-industrial complex, on the concern of growth of these developments beyond what may be needed?
Well, what may be needed is a critical phrase, of course.
And I think it is worth pointing out, as you noted, that Lockheed Martin, we're in a
number of commercial businesses from energy to space to commercial aircraft.
And so I wouldn't neglect the importance of those parts of our business as well.
I think the world is dynamic, and you know, there was a time, and it doesn't seem that long ago to me,
when I was a graduate student here at MIT, and we were talking about the peace
dividend at the end of the Cold War. If you look at expenditure on military systems
as a fraction of GDP,
we're far below peak levels of the past.
And to me, at least, it looks like a time
where you're seeing global threats changing in a way
that would warrant relevant investments
in defense, defensive capabilities.
The other thing I'd note, for military and defensive systems, it's not quite a free
market, right?
We don't sell to people on the street.
And that warrants a very close partnership
between, I'd say the customers and the people
that design, build, and maintain these systems
because of the very unique nature,
the very difficult requirements,
the very great importance on safety and operating the way
they're intended every time. And so that does create, and it's, frankly, it's one of Lockheed
Martin's great strengths, is that we have this expertise built up over many years in partnership
with our customers to be able to design and build these systems that meet these very unique mission needs.
Yeah, because building those systems is very costly, there's very little room for mistakes.
I mean, Ben Rich's book and so on just tells the story.
It's incredible just reading it.
If you're an engineer, it reads like a thriller.
Okay, let's go back to space for a second.
I'm always happy to go back to space.
So a few quick, maybe out
there, maybe fun questions, maybe a little provocative. What are your thoughts
on the efforts of the new folks, SpaceX and Elon Musk? What are your thoughts about what Elon is doing?
Do you see him as competition? Do you enjoy competition? What are your thoughts?
Yeah, first of all, certainly Elon, I would say SpaceX and some of his other ventures, are definitely a
competitive force in the space industry.
And do we like competition?
Yeah, we do.
And we think we're very strong competitors.
I think, you know, competition is what the US was founded on in a lot of ways.
And always coming up with a better way.
And I think it's really important to continue,
to have fresh eyes coming in, new innovation. I do think it's important to have level playing
fields. And so you want to make sure that you're not giving different requirements to different
players. But, you know, I tell people, you know, I spend a lot of time at places like MIT,
I'm going to be at the MIT Beaver Works Summer Institute
over the weekend here.
And I tell people, this is the most exciting time
to be in the space business in my entire life.
And it is this explosion of new capabilities
that have been driven by things like the massive increase
in computing power,
things like the massive increase in comms capabilities,
advanced and additive manufacturing are really bringing down the barriers to entry in this field
and it's driving just incredible innovation. It's happening at startups, but it's also happening
at Lockheed Martin. You may not realize this, but Lockheed Martin, working with Stanford, actually built the first CubeSat
that was launched here out of the US.
That was called QuakeSat.
We did that with Stellar Solutions.
This was right around just after 2000, I guess.
And so we've been in that from the very beginning.
And I talked about some of these like Maya and Orion,
but we're in the middle of what we call SmartSats
and software-defined satellites
that can essentially restructure and remap their purpose,
their mission on orbit to give you
almost unlimited flexibility for these satellites
over their lifetimes. So those are just a couple
of examples, but yeah, this is a great time to be in space.
Absolutely. So the Wright brothers flew
for the first time 116 years ago. And now we have supersonic stealth planes and all the technology
we've talked about. What innovations, you obviously can't predict the future,
but what do you see Lockheed doing in the next 100 years?
If you take that same leap,
how will the world of technology and engineering change?
I know it's an impossible question,
but nobody could have predicted
that we could even fly 120 years ago.
So what do you think is the edge of
possibility that we're going to be exploring in the next hundred years?
I don't know
that there is an edge. We've been around for almost that entire time, right? The
Lockheed brothers and Glenn L. Martin starting their companies, you know, in the basement of a church and an old, you know, service station.
We're very different companies today than we were back then, right?
And that's because we've continuously reinvented ourselves over all of those decades.
I think it's fair to say, yeah, I know this for sure: the world of the future
is going to move faster, it's going to be more connected,
it's going to be more autonomous, and it's going to be more complex than it is today.
And so this is the world, you know, as the CTO of Lockheed Martin, that I think about: what are the
technologies that we have to invest in, whether it's things like AI and autonomy? You know, you can
think about quantum computing, which is an area that we've invested in to try to stay ahead of these technological changes
and frankly, some of the threats that are out there.
I believe that we're going to be out there in the solar system, that we're going to be
defending and defending well against military threats that nobody has even thought about
today. We are going to be,
we're going to use these capabilities to have far greater knowledge of our own planet,
the depths of the oceans, all the way to the upper reaches of the atmosphere, and everything out
to the sun, to the edge of the solar system. So that's what I look forward to. And I'm excited, I mean, just looking ahead
in the next decade or so to the steps
that I see ahead of us in that time.
I don't think there's a better place to end it.
Keoki, thank you so much.
Lex, it's been a real pleasure, and sorry
it took so long to get up here,
but I'm glad we were able to make it happen.