a16z Podcast: Autonomy in Service
Episode Date: May 25, 2018
with Gregory Allen (@Gregory_C_Allen), Gayle Lemmon (@gaylelemmon), Ryan Tseng, and Hanne Tidnam (@omnivorousread)

We now live in a world where connecting the dots between intel and modeling threats has become infinitely more complex: not only is the surface area to protect larger than ever, but the entry points and issues are more diverse than ever. This conversation, with Gregory Allen, a Fellow at the Center for a New American Security and co-author of the Belfer Center report on AI and National Security; Gayle Tzemach Lemmon, Chief Marketing Officer of Shield AI and the author of The Dressmaker of Khair Khana and Ashley's War; Ryan Tseng, CEO and Co-founder of Shield AI; and a16z’s Hanne Tidnam, considers AI and automation in the context of national security. Given the nature of today's conflict situations — which over the last few decades have moved increasingly into urban environments, counterinsurgency operations, and ‘boots on the ground’ settings where it is very difficult for service members to distinguish between civilians and combatants — how can new autonomous technologies actually improve how we protect the lives of servicemen and women on the ground? How might they enhance critical human decision making moment to moment, to save more lives? And more broadly, how is AI shifting national security power dynamics around the globe?
Transcript
The content here is for informational purposes only, should not be taken as legal, business, tax, or investment advice, or be used to evaluate any investment or security, and is not directed at any investors or potential investors in any a16z fund. For more details, please see a16z.com/disclosures.
Hi and welcome to the a16z podcast. I'm Hanne, and today we're talking about AI and automation in the context of national security, given the nature of today's conflict situations. How do these technologies change how we protect lives in those conflict situations? And also, how is AI shifting power dynamics around the globe? Joining me is Gregory Allen, fellow at the Center for a New American Security and co-author of the Belfer Center report on AI and National Security; Gayle Lemmon, chief marketing officer of
Shield AI and the author of The Dressmaker of Khair Khana and Ashley's War, both of which dealt
with post-9/11 conflicts; and Ryan Tseng, the CEO and co-founder of Shield AI. We're all aware that the
nature of warfare and national security today is no longer hundreds of thousands of men on a field
facing each other, nor is it simply nuclear races. So what are today's conflict situations,
actually? What do they really look like? The United States has been at war continuously now,
basically, since 2001. And those wars, for the majority of their time frame,
have been in counterinsurgency operations, which does involve boots on the ground and is very
people intensive. And we've been doing that for a long time. We're talking about wars that are
increasingly fought in urban environments, settings where it is increasingly difficult for service
members to distinguish between civilians and combatants, and where combatants are basically
using that strategically. Special operations has been asked to do a great deal in these post-9-11
conflicts. 2016 was actually the first year in which special operations combat deaths outnumbered
those of conventional forces. And think about that: special operations is less than 5% of the entire United States military.
Less than 1% of this country has fought 100% of its wars for 17 years. If you think about the numbers,
it really points to the mission set. These are not
nameless, faceless people whose lives are caught up in these conflicts. And I think that technology
has tried, but not necessarily really succeeded in catching up with what is happening on the
battlefield. So if we sort of telescope out, how is that reflected in the overall U.S. national
security strategy for dealing with conflict? What's the relationship between on-the-ground missions
to the overarching military strategy? The most recent U.S. national security strategy anticipated a shift
in posture for the United States military, from the type of wars that we have been fighting,
and we have now a lot of experience fighting, from the types of conflicts that they want to
focus on preparing for. For the first time, in a long time, the United States did not name
terrorism as the top national security threat facing the nation. Instead, great power conflict
is now at the top. Large scale. Exactly. So the primary named competitors or potential
adversaries in the national security strategy would be China and Russia. And if you look towards the
preparations that the military is planning for in its long-term strategy, both for organization
and for acquisition, that's really where it is gearing towards. But at the same time, we are
still in two wars. We're still fighting the first category as well. Exactly. So we are struggling to
try and make this transition without actually transitioning. Even in these new or potential conflicts of
the future, we're seeing the same uncertainty. So if you look at Russia,
moving into the Ukraine in the past couple years, there was a huge amount of deception and uncertainty
in terms of what was actually happening on the ground. Yeah, let's talk about what that really
looks like as it plays out what we know, what we don't know, from both kind of mission control
and also immediately on the ground. Like what's the information flow like currently and what are
the tools that are providing that information like? The number one challenge is getting eyes
and ears in the right places. So we have
very advanced technologies in terms
of satellites, high altitude platforms, and we
also have very brave young
men and women that are willing to get very
close to the information in order to collect it.
But the challenge with these approaches
is that the quality of information is not
at a standard where you can say with certainty
what's happening on the ground, either
because of the way that it's collected or because
of the limited amount that might be available.
And that's because it's from satellites.
It's either because it's from satellites and then the
availability of resources on the ground
is quite limited. Can you give an example of the kind of information that we might have and the other kinds that we do not? So just a very simple example is what's indoors versus what's outdoors, right? People spend the vast majority of their time. First of all, their macro trends for is urbanization. And then on top of that, you know, people live inside. They don't spend their days sitting out in the middle of a field with all of their activities in open view. And so kind of at a very basic level, we can't see the vast majority of the world in terms of where people and things are located.
I would add to that a distinction between the amount of data that we collect and the amount of data that we analyze.
Right now, for instance, on just drone platforms alone, more than 95% of the data that is collected is never viewed by anyone, ever.
And that is simply because we are collecting far more data than we have humans that are able to analyze it.
So there's one sensor in particular that is capable of observing basically an entire city
at one time. But the problem with this sensor is that there are not
enough humans to be watching it all of the time. And so really, its primary use case is as a time
machine. Once an IED, an improvised explosive device, goes off, we then look at that footage and
rewind it to the day before and say, okay, who must have planted that explosive? Because we don't
have enough people to watch the sensor to see the explosive being planted, but once it's gone off,
then we know where to look. It's like security footage at a gas station. Exactly. Once it gets robbed,
you can look back. So this analysis shortage is really a human bottleneck. Within the U.S.
military, there are literally thousands of people whose primary job is to watch drone footage
and analyze it for information that is relevant to the conflict at hand or U.S. national security.
But there's far more data than you could ever hire enough individuals to go after.
There are two ways that I think artificial intelligence can really make a big impact.
One is just helping pre-process that information before it's presented to people.
And there's a great opportunity there, and there's some programs within the DOD that are
focused on doing that.
The second is actually just getting better information in the first place.
So when you don't have somebody on the ground, let's say, for example,
you're looking for a person and you cannot actually see that person because the satellite
is so far away.
Right, you see a rooftop.
Exactly.
And so just getting higher quality information in the first place can dramatically accelerate
or reduce the amount of information that you even need to go through to make a positive
identification of that event or object or thing that you're looking for.
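To make the first opportunity concrete, here is a minimal sketch of how collected footage might be triaged before an analyst ever sees it. The detect_objects function is a hypothetical stand-in for any trained object detector, not any specific DOD program.

```python
# Minimal sketch: triage collected footage so analysts only review the
# small fraction of frames likely to contain activity of interest.
# `detect_objects` is a hypothetical stand-in for any trained detector
# that returns (label, confidence) pairs for a single image.

def triage_footage(frames, detect_objects, threshold=0.8):
    """Yield (timestamp, detections) only for frames worth human review.

    frames: iterable of (timestamp, image) pairs
    """
    for timestamp, image in frames:
        detections = detect_objects(image)           # e.g., people, vehicles
        if any(conf >= threshold for _, conf in detections):
            yield timestamp, detections              # flag for an analyst
```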
You're really talking about providing mission-critical information
at mission-critical moments.
And how do you get the right information
to the right person at the right time
in a moment when it can really make a difference?
So if we're seeing rooftops,
how does that play out right now?
There's some amount of analysis
that is conducted ahead of time
to try to give the best possible picture
to these people
before they go on their operation.
But in many circumstances,
when people are asked to conduct operations,
there is still a good amount of uncertainty.
And so brave men and women
are asked to, for instance,
go into buildings,
not knowing whether or not they're booby-trapped, whether or not they're walking into an ambush,
or a number of other possible risks.
There is a huge opportunity to close that gap of uncertainty.
Yeah.
Yeah, because I just imagine, it's 2018...
It's kind of amazing that you're still walking into a black, dark room
and have no idea what's inside. I mean, that does sort of feel archaic.
It's a shocking gap, right?
I mean, in some ways, when you think about what is possible versus what is,
right now, more often than not, the reality.
So what is AI doing with that specific kind of immediate unknown and relaying information
in the moment?
That's a good question, because clearing buildings of threats has been one of the most costly
missions for U.S. forces in terms of human life, and one of the most costly missions
for civilians in terms of human life, since 9/11.
We've applied artificial intelligence to a drone that's able to fly through buildings
and basically, in a completely autonomous manner, it looks for people inside of those buildings.
And so rather than an 18-year-old or 19-year-old being the first person through a door,
you can throw a robot inside, which will provide a very clear picture about what's going on in the inside.
It's similar, if people listening have seen the movie Prometheus, to the robots in that film that explore caves.
And you really can make a difference in terms of mission effectiveness, right?
I think we have to remember the stakes.
When you think about the consequences of not having that information, we have seen over and over people walking into booby-trapped compounds, or compounds where people understood when U.S. forces were coming, evacuated, and left things behind for them to find, namely explosives.
And what happens when you don't have that information ahead of time is, you know, you can have really tragic loss of life.
And that kind of thing is happening all the time.
Every single night. I do think we're a country that has gotten perilously far away
from what we've asked of people in uniform.
So that's an interesting point,
that it brings it back to a very immediate level:
this is the information that they need right now.
Can you break down the technology
that makes that possible that wasn't before?
So for a long time,
we've had machines that could significantly
outperform people
in their ability to execute mechanical functions,
provided they were repetitive or well-constrained.
What AI represents is now the ability
to allow machines to apply themselves to
a much larger spectrum of activities.
And so for us, the way that we think about the tech stack
is in terms of a decomposition of intelligence
and what does it mean to be what we would consider
a resilient, intelligent machine.
That breaks down into two buckets of things.
So the first is what we call perception, action, and cognition.
And the second we call introspection, adaptation, and evolvement.
So let's start with the first: perception, action, cognition.
Perception is the ability to look around the world and understand what you are seeing.
And it could be through a camera, but it doesn't necessarily need to be looking at the physical world.
It could exist in the cyber domain.
It could exist in the electronic domain.
But kind of at a fundamental level, there are objects in these locations, and I recognize what they are.
And for Shield, is it purely visual?
No, we use a combination of cameras, lidars, radars, and many other sensors, to help machines navigate the world, to get into the areas they need to
in order to collect the information.
The environments that our machines operate in
are relatively challenging
in the sense that there's a lot of dust,
there's a lot of unstructured obstacles.
Battlefields are dynamic at the same time.
There are people moving around and so on.
So you need a lot of complementary sensors
in order to ensure reliability.
How about things that a human would walk in
and notice like smell?
I smell gas.
Robots can carry similar sensors
that don't necessarily smell,
but they can look for things
that you wouldn't be able to see.
Explosive residue, for example.
So beyond human perception there.
How does that work?
There are chemical sensors.
There are hyperspectral cameras
that can see things beyond what our eyes
or normal cameras would be able to see.
And so they actually do have
superhuman sensing capability.
But the key is to turn that from pixels
and data into actual understanding
that the machine can use.
Because for a long time,
we've actually had the ability to apply these sensors,
but it always came back to humans needing
to assess the information in order to determine courses of action.
Which is no good if you're walking through a doorway right then.
Correct.
So perception is this notion of, here's where everything is in the world.
Cognition is, given my prior experiences, and what I want to achieve, this is what I should do.
And then action is just, I affect the world or move myself in the way that I need to
in order to take whatever step I decided to do.
And then we just go through this perception, cognition, action loop over and over and over again
as people and machines do the same thing.
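As a hedged illustration of that loop, here is a minimal abstract sketch in Python; perceive, decide, and act are placeholders for whatever sensors, planner, and actuators a particular machine has, and none of this is Shield AI's actual code.

```python
# Abstract sketch of the perception-cognition-action loop. The three
# callables are placeholders for a machine's sensors, planner, and
# actuators; this is an illustration, not Shield AI's actual system.

def run_agent(perceive, decide, act, goal_reached):
    experience = []                          # prior experiences inform cognition
    while not goal_reached():
        world = perceive()                   # perception: where is everything?
        action = decide(world, experience)   # cognition: what should I do next?
        act(action)                          # action: affect the world, or move
        experience.append((world, action))   # then loop, over and over again
```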
Now, in order to achieve really advanced levels of performance, there's another component,
and we call this loop the introspection, adaptation, and evolvement loop.
So introspection is the notion of what are my capabilities or what is my health.
And so this could be something as simple as what's my battery life,
or it could be something more complex, such as I know that I'm good at doing X,
and therefore I can behave optimally in these circumstances,
and I know that I'm bad at Y,
and therefore I'm going to spin up millions of simulations to become better at Y.
And what are some of those X's and Y's?
It could be:
I know that if I try to fly through a doorway that is 24 inches across, I can do that very well.
If I know that I need to coordinate the exploration of a village with 100 other robots
and the communication network is going to be jammed the whole time, I might not know how to do that well today.
I need to solve this better.
And so then adaptation is, given my awareness of my capabilities,
how can I change what I'm doing or change something about the circumstance
in order to improve the likelihood that I succeed?
And finally, evolvement is that, given enough encounters with a situation,
the machine then becomes very good at it.
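One hedged way to picture that second loop in code, with skill names and confidence numbers invented purely for illustration:

```python
# Sketch of the introspection-adaptation-evolvement loop layered on top.
# Skill names and numbers here are invented for illustration only.

skills = {"fly_through_24in_doorway": 0.98,
          "coordinate_100_robots_while_jammed": 0.40}

def introspect(task, battery_level):
    """Introspection: what am I good at, and what is my health?"""
    health_factor = 1.0 if battery_level > 0.2 else 0.5
    return skills.get(task, 0.0) * health_factor

def adapt(task, confidence, required=0.9):
    """Adaptation: change the plan (or train) to improve the odds of success."""
    return "execute" if confidence >= required else "train_in_simulation"

def evolve(task, succeeded, rate=0.05):
    """Evolvement: enough encounters with a situation make you good at it."""
    current = skills.get(task, 0.0)
    skills[task] = min(1.0, max(0.0, current + (rate if succeeded else -rate)))
```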
It feels like a very granular, immediate level of all this large-scale AI and machine learning
playing out moment by moment: I'm walking through a door, what am I going to see right before I
get there? But you're gathering this incredible amount of information about the spaces, about the
context, about the environments, about all kinds of things that humans aren't even picking up on.
Are there other uses that are less immediate? When you're doing all this information gathering,
can you see this information playing out in longer-term ways, in terms of either
on-the-ground conflict like this or national security,
beyond on-the-ground decision-making?
An awful lot goes on in a conflict zone, and there's a ton of different diverse types of
machines in the environment and sensors in the environment.
The United States military has outfitted itself with an extraordinary diversity of sensors,
and they are collecting an unimaginable amount of data.
But most of that data just goes into cold storage, never to be seen again, because there
aren't enough people to analyze it and derive insights from it. Now, with advanced machine
learning, we are for the first time really seeing an opportunity to make use of data sets that
historically would lie dormant. So the archives are suddenly newly useful. There's two types of
data here. One would be the sort of data that the United States military knows that it wants to
collect, which might be like intelligence or reconnaissance imagery, like satellite imagery or drone-based
imagery. But there's also this whole diversity of data of what is going on within the mission,
within the platforms that we are using. For instance, any kind of flight scenario, what occurred
with the airplane while it was flying, while it was executing the mission. That data is not normally
saved or archived in a way that would be accessible to an algorithm trying to learn about
what happens when we fly a military aircraft in general. So what kind of use would that
information be applied to? What will it actually change? The opportunities there are really
interesting for applying AI to enhance our training and simulation capabilities because we can
learn more about the truth of the types of situations that we encounter and then create
simulations based on that truth upon which to train. And also to think about our strategy
and tactics and our organizational efficiency. That goes from the full spectrum of military
logistics, to enhancing fuel efficiency, all the way down to getting into the nitty-gritty
of combat operations and thinking through how do we reduce casualties and loss of life on our
side? And then how do we also reduce unintentional casualties and loss of life on the other side?
Right now, when we see a building and U.S. troops are receiving fire from that building,
we have to make the decision: do we take the easy way out, which would be to call in an airstrike
and topple the whole building? Or do we recognize that there might be
non-combatants in that building that we don't know about? And do we choose the harder choice of
going in on the ground? And that's often used against the United States. So if you look at Syria,
the last stand of ISIS in the city of Raqqa, you had ISIS really using human shields.
So you cannot leave floor two of this building. We have floor four of this building. And we know
that that will keep you here and that will protect our lives because we're endangering yours.
The shift in tactics from the early days of the Iraq War to the more counterinsurgency strategy that we saw really throughout the second half of that conflict was all about the United States saying that we believe we need to take the higher risk and endure higher casualties because winning the support of the local population and showing them that we absolutely care about their lives and quality of life as we are engaged in this conflict is crucial.
And so what I think is very exciting about artificial intelligence is can we still make the hard choice to not call in an airstrike or not call in artillery, but can we use technologies such as robotics, such as AI enhanced sensors, in order to reduce the risk when we make that hard choice of loss of life on our side, but also, again, unintentional loss of life among the civilian population.
You know, I think tying together this whole idea of current conflicts and future conflicts, you know, as we look at great
power conflict, there is a sense that the post-war rules-based order is facing increasing
threat. Secretary Mattis calls this rules-based order we've all lived in, and now kind of take for granted,
the greatest gift of the greatest generation. I think, you know, we all
think it's free. We think that it is possible to ignore it. And we also think that it's
permanent. Right. And the truth is it's none of those three things.
To the question about, when we're collecting all of this data, what do we do with it? I think that
there's absolutely huge opportunity to improve human understanding to enable the best possible decisions.
And we should view that as kind of the critical use of the data.
So let's fast forward 50 years. Will battlefields be predominantly composed of machines or predominantly people?
I think it's reasonable to believe that certainly there will be far more machines in the future than there are today.
And so therefore this data represents the opportunity to train these machines to reach the level of capability that they need
in order to protect national security
and global stability.
We think a lot about taking that data
not only deriving human insight,
but deriving machine insight
so that it can continually evolve
and advance its performance.
What does that look like?
Our chief science officer,
Nate, took a quadrotor
that didn't know how to fly.
And in a period of a few days,
the quadcopter, just through its own experiences,
learned to reach boundaries of performance
that far exceeded
what could be realized by the most advanced controllers ever designed by humans.
And this was notable for a couple of reasons.
One, the learning was lifelong, and it was doing it unsupervised.
A lot of times the challenges with these machine learning approaches are
you worry about them learning the wrong thing,
and therefore you have people kind of in the loop, cross-checking,
whether or not the machines are learning the right thing.
And it was able to do this and continues to be able to do this
to learn basically forever from its experiences.
And the things that it learns are within performance boundaries.
So the humans are still setting all those boundaries, and it's just continually honing and learning new things within those?
We believe in the role of having humans in the loop, and allowing the machines to learn particular skills within performance boundaries,
but still having people there with final authority over what they actually do, is a key concept.
But also, it's finding its own boundaries in terms of what's physically possible, given environmental constraints, given its own health, and so on.
And it's able to transfer that learning to every other robot in the fleet
immediately, and it's also able to transfer that learning to machines that have different
computation, sensing, and actuation constraints. Each of those machines is able to
introspect, identify the differences between itself and the learning machine, and only
take the lessons that are relevant to it. If you have performance guarantees or boundaries
for the system and you know that all the learning will take place within the performance
guarantees, people can anticipate everything that it will do ultimately. They might not know
how it's going to get there, but they know the behaviors will be bounded.
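A minimal sketch of what bounded learning could look like in practice; the limits here are invented, human-set values, and the learned policy is whatever continually-improving controller the machine has trained.

```python
# Minimal sketch of learning within performance boundaries: whatever the
# learned policy proposes, a fixed, human-specified envelope clamps the
# command before it reaches the actuators. The limits are illustrative.

MAX_SPEED_MPS = 5.0     # human-set boundary, never modified by learning
MAX_TILT_DEG = 30.0     # likewise

def bounded_command(learned_policy, state):
    """Run the (continually improving) policy, then enforce the envelope."""
    speed, tilt = learned_policy(state)
    speed = max(-MAX_SPEED_MPS, min(MAX_SPEED_MPS, speed))
    tilt = max(-MAX_TILT_DEG, min(MAX_TILT_DEG, tilt))
    return speed, tilt  # behavior is bounded no matter what was learned
```

However the policy evolves, people can anticipate the envelope it will stay inside, even if they can't predict exactly how it will get there.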
I mean, you talk about robots fighting wars eventually,
but what you're also describing is a technology for humans to make better decisions.
No, I think that's a critical point.
I think people want to go to what robots are going to be able to do tomorrow,
but I think we really come back to what can we do to protect lives today.
The conversation at Shield AI is really always about the idea of getting the best decisions,
getting the best information, and making sure that you're creating the most knowledge.
I think it's really important when people think
of artificially intelligent systems
that they also think of them
as information and intelligence gathering tools.
And I think that's often lost in the discussion
about AI and national security.
Right, when we go to a place of Terminator,
you're not thinking about the information.
Right, it's the Hollywood version
versus the battlefields of reality.
And we just did that here.
Yeah, we did.
When we're talking about performance guarantees
in the context of collecting information,
then immediately it jumps to,
oh my goodness, AI is this terrible thing.
But these machines are just learning to collect information better, which will save lives.
Historically on the battlefield, new technologies change power dynamics.
So from small, like, I have a bow and arrow, you have a rock, I have a gun, you don't have a gun,
all the way up to, like, I have a nuclear missile and you don't.
I hear the immediate way that AI is changing that power dynamic on the ground going into unfamiliar environments.
Are there bigger ways that it will shift the relationship between developed or undeveloped countries
or different players in conflict?
Advanced AI techniques come from a long history of the military's use of automation on the battlefield.
So the first aircraft autopilot was developed with a use case of military aircraft in mind.
And that was in the 1920s.
So we've had autonomy since the 1920s.
Yes, absolutely.
But I think what's interesting is that this is all with traditional software programming architectures,
with a very long list of if-then statements.
Ultimately, all of which were typed in by some human.
And what's different now is the use of machine learning,
whereby, in a sort of oversimplified sense,
the system is programming itself based on exposure to examples and data.
Right. Humans aren't necessarily labeling the features anymore.
Well, they might be labeling the training data,
but they are not, in as many words, programming the system in the traditional sense.
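As a toy contrast between the two architectures Greg describes, here is a sketch with invented features and labels; the scikit-learn classifier is just one stand-in for any learned model.

```python
# Toy contrast between the two programming architectures, with invented
# (speed, altitude) features and labels purely for illustration.
from sklearn.tree import DecisionTreeClassifier

# Traditional architecture: a human types in the if-then rule.
def is_fast_low_flyer(speed, altitude):
    return speed > 200 and altitude < 100

# Machine learning architecture: a human labels examples; the system
# effectively writes the rule itself by fitting to the labeled data.
X = [[250, 50], [30, 500], [220, 80], [40, 400]]  # (speed, altitude)
y = [1, 0, 1, 0]                                   # human-supplied labels
model = DecisionTreeClassifier().fit(X, y)
print(model.predict([[230, 60]]))  # the rule was learned, not typed in
```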
So the military has this whole series of verification and validation procedures
that it has developed for traditional software.
How do we know that our automatic systems, our autopilots, or our heat-seeking missiles,
or anything else we deploy that uses software, is going to do what we want it to do?
Well, we have evaluation procedures for such software.
But that's for traditional programming architectures, right?
Machine learning is a new programming architecture.
And we are pretty optimistic overall that we think that this can actually enhance safety.
But that's not an inherent feature of the technology.
Today, electricity is by far the safest way to light your home, far safer than using candles.
But that's not an inherent feature of electricity.
It's very easy to start a fire using electricity.
And in the early days of electricity, they started a lot of fires.
And so right now, I would say the military is, in my view, doing the right thing, which is its early use cases of AI are far removed from the use of force and, in fact, are in non-safety-critical applications, such as data
analysis. We're not so certain that all other countries on Earth are going to abide by that.
There was a recent headline in Defense One: "Russia to the United Nations: Don't try to stop us from
building killer robots." And I wish that was a joke headline, but that's actually a pretty
accurate summary of what Russia said at the most recent UN meetings on autonomous weapons.
So we're doing this within a global security context of renewed great power conflict.
And other countries see artificial intelligence as a way to close the gap between their militaries and that of the United States.
And not everybody is functioning under the same conceptual framework there.
So how is the policy community responding to this?
How is it actually perceived right now?
You do see this concern among the policy community.
Will the folks that we are up against have the same ethical framework?
And I do think that is a question that will be facing policymakers in the future.
It's actually pretty easy for policymakers to say, oh, AI is quite interesting.
I think the challenge is persuading them of the scale, of the importance of this technology.
They're hoping that they can mostly do things the same way they've always done them with a few tweaks here to update for the new technology.
No, this is a complete revolution.
It will take decades to unfold, but it will be on the same scale as the invention of aircraft, right?
It's a whole new paradigm.
It's a whole new paradigm for national security.
And if you think you can get by with the old rules and the old approaches that led to success in the Cold War, those just aren't going to apply here.
So it's really the scale of recognizing how much change is required and how much investment is required to realize that change.
There's a kind of pacing question in that, which is do we want people going slowly to try to understand this enormity or do we move ahead quickly?
There's a tension there.
The most recent defense budget basically said, let's buy a ton more
of the weapons that have already been designed.
And politically, that's incredibly popular, right?
Because those weapons are built in congressional districts all over America.
It's very easy to say, let's just spread the money around.
But in my view, that's the equivalent of Kodak in 1991, radically increasing its investment in film cameras.
Right.
You're buying a lot of stuff that is probably going to be obsolete in the not too distant future.
And so what I wish the military was doing was thinking more about the modernization question, investing more
in research and development, and preparing themselves for the AI revolution.
The United States has the software talent to make a difference in this conversation.
So I do think it's important that we keep in mind that it's not all doom and gloom.
And there's actually a real protection that this technology can offer.
One thing that I think is very interesting about AI in contrasting with previous technological
revolutions is that the source is very much in commercial industry.
I am not breaking some, like, security clearance or classification rules to tell
you this here, but there is no super secret government lab with, like, advanced AI way better
than commercial industry. The government, the military, they are behind commercial industry.
And is this the first time we've seen that swap? It's very unfamiliar territory for the U.S.
military. As a commercial startup working with government, how do you see them responding to these
technologies coming from outside instead of within? Yeah, I think there's recognition that this
is a new paradigm. And so I think that's why you've seen DIUx, which is probably the most
predominant example of the DoD attempting to respond to the new dynamic. It makes B2G feel like
B2B. Which is a pretty wonderful thing when you think about it. A lot of the legacy defense
companies, they sort of have two competitive moats. One is their experience in aircraft or boats or
whatever. And the second is their familiarity with the government contracting process,
which historically is incredibly painful. China has just announced that they were investing
$2.1 billion to open up a new AI research center, which is all
consistent with their strategy of military-civil fusion. That's very different than in the United
States, where the Department of Defense and the national security community generally has had a
very hard time persuading big Silicon Valley tech companies that they should devote the time
and effort to help the DOD think through AI. That's why the DOD is so excited to work with
startups because they don't have the legacy that some of these bigger technology companies do.
That's really interesting. Thank you very much for joining us on the a16z podcast.
Thank you.
Great to join you.
Thanks. It was great to be here.