Epicenter - Learn about Crypto, Blockchain, Ethereum, Bitcoin and Distributed Technologies - Solving The AI Black Box: ZK-Proofs in Defence Tech
Episode Date: December 26, 2025

In this episode, host Sebastien Couture is joined by Ismael Hishon-Rezaizadeh, CEO of Lagrange, to explore the intersection of frontier cryptography and national security. Ismael discusses the transition of zero-knowledge (ZK) technology from a "token-centric" crypto tool to a vital component of defense, specifically focusing on its role in securing autonomous drone swarms and closing the hypersonic missile gap.

They delve into DeepProve, Lagrange's ZK machine learning library, which facilitates verifiable AI execution while protecting sensitive model intellectual property and private input data. Ismael introduces the concept of "Accountable Autonomy," arguing that cryptographic proofs are necessary to ensure that lethal "kill chain" decisions are made by the correct models under verified inputs, removing the risks inherent in "black box" AI decision-making. Finally, the conversation touches on the geopolitical competition with China, the importance of domestic chip manufacturing, and why the US market's ability to align private sector innovation with military needs is a decisive strategic advantage.

Topics
00:00 Intro & Context
04:15 ZKML vs. Venice
09:30 Protecting Model IP
15:00 Dual-Use Defense Pivot
21:45 The Palantir Comparison
27:10 US-China Chip Race
35:20 Drone Swarm Consensus
42:15 Accountable Autonomy Explained
49:00 Kill Chain Verifiability
55:30 EU vs. US Defense

Links
Ismael on X: https://x.com/Ismael_H_R
Lagrange Labs: https://www.lagrange.dev/
Anduril Industries: https://www.anduril.com/
Gnosis: https://gnosis.io/

Sponsors:
Gnosis: Gnosis has been building core decentralized infrastructure for the Ethereum ecosystem since 2015. With the launch of Gnosis Pay last year, we introduced the world's first Decentralized Payment Network. Start leveraging its power today at http://gnosis.io
Transcript
Welcome to Epicenter, the show which talks about the technologies, projects, and people driving decentralization and the blockchain revolution.
I'm Sebastien Couture, and I'm here today with Ismael, CEO of Lagrange Labs.
Lagrange believes that there is a gap in cryptography within national security and defense.
We realized there was an opportunity to take things that were developed in crypto that are frontier hard tech,
and to purpose that outside of crypto.
Zero-knowledge proofs provide this way to glimpse inside the black box,
to determine that the output you got back has come from the correct model with the correct set of inputs,
which allows you to determine why the decision was made, under what circumstances the decision was made,
and under what circumstances the decision should be adjusted or things should be changed going forward.
I'm here today with Ismael, CEO of Lagrange Labs.
Hey, how are you?
Thank you so much for having me, Sebastian.
I was on Epicenter recently, and there's been so much exciting progress we've had as a company since then.
I'm so excited to be back and talk about a lot of that today.
This episode is brought to you by Gnosis, building the open internet one block at a time.
Gnosis was founded in 2015, and it's grown from one of Ethereum's earliest projects
into a powerful ecosystem for open, user-owned finance.
Gnosis is also the team behind products that have become core to my business and to so many
others', like Safe and CowSwap.
At the center is Gnosis Chain.
It's a low-fee layer one with zero downtime in seven years, and it's secured
by over 300,000 validators. It's the foundation for real-world financial applications like
Gnosis Pay and Circles. All of this is governed by GnosisDAO, a community-run organization where
anyone with a GNO token can vote on updates, fund new projects, and even run a validator from home.
So if you're building in Web3 or you're just curious about what financial freedom can look like,
start exploring at gnosis.io.
Okay, Ismael, welcome back. You were on just a couple of months ago, but so much has been happening
over at Lagrange that I want to get you back on
to talk about some of the interesting
military and industrial applications for ZKML
that you guys have been working on.
A lot of exciting partnerships have been announced.
But I guess like before we start
kind of bouncing off of that conversation from July
where we talked about DeepProve,
for folks who have not listened to that episode,
you should go back and listen to it.
Let's maybe, you know, set the context here:
what is Lagrange building, what is DeepProve,
and what are you building in the space around ZK machine learning?
Yeah.
So Lagrange likes to think of itself as the preeminent company for frontier research
in applied cryptography across both commercial and national security and defense applications.
And what that means is we are uniquely positioned to build zero knowledge proofs,
things like fully homomorphic encryption, and then consensus,
in a way that no other commercial company has the capacity to do,
really, anywhere across crypto or defense right now.
And so the tech we build, the core of this, is something called DeepProve, which is a zero-knowledge
machine learning library.
Effectively, it takes a model: think of an arbitrary machine learning model, all the way
from an LLM, to a computer vision model used in a drone, to a simple MLP used to decide the
movement of assets on chain.
And we're able to do two things.
We're able to firstly prove that that model has executed
correctly, by effectively generating a zero-knowledge proof of the correctness of the execution
of that model.
The same way that a ZK roll-up proves that all the transactions executed correctly, ZK machine
learning proves that AI executes correctly.
And secondly, we can do this over a configurable set of private inputs.
The model can be private or the input data can be private.
And this is a very powerful feature because it bakes privacy into how AI can be used across
both commercial and then obviously government defense and national security settings.
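The two guarantees described here, proving correct execution and keeping the model or inputs private, can be sketched as a prover/verifier interface. To be clear, this is a toy illustration, not the actual library's API: every name below is made up, and the hash-commitment "proof" is neither succinct nor zero-knowledge (the verifier must re-run the model with the weights), which is exactly the part a real ZKML proof system replaces.

```python
import hashlib
import json

# Toy "model": a single linear layer. The weights are the prover's private IP.
WEIGHTS = [0.5, -1.0, 2.0]

def commit(obj) -> str:
    """Hash commitment to an object. A real ZKML system commits to the
    weights inside the proof system itself, not with a bare hash."""
    return hashlib.sha256(json.dumps(obj).encode()).hexdigest()

# Published once by the model developer; reveals nothing about the weights.
MODEL_COMMITMENT = commit(WEIGHTS)

def prove_inference(weights, x):
    """Prover side: run the model and emit (output, proof).
    Here the 'proof' is just a hash of the execution transcript. A real
    zero-knowledge proof is succinct and checkable WITHOUT re-execution."""
    y = sum(w * xi for w, xi in zip(weights, x))
    transcript = {"model": commit(weights), "input": x, "output": y}
    return y, commit(transcript)

def verify(proof, x, y, model_commitment, weights):
    """Verifier side. In this toy the verifier needs the weights to re-run
    the model, which is what ZK removes; it only shows the shape of the
    check: right model (commitment match) + right execution (output match)."""
    if commit(weights) != model_commitment:
        return False  # a different model produced this answer
    y2, proof2 = prove_inference(weights, x)
    return y2 == y and proof2 == proof

x = [1.0, 2.0, 3.0]
y, proof = prove_inference(WEIGHTS, x)
assert verify(proof, x, y, MODEL_COMMITMENT, WEIGHTS)          # honest run passes
assert not verify(proof, x, y + 1, MODEL_COMMITMENT, WEIGHTS)  # forged output fails
```

The rollup analogy in the passage is the point of the real construction: `verify` becomes cheap and weight-free, so a client holding only the published commitment can check that the advertised model, not a swapped or degraded one, produced the answer.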
Privacy of AI is a very, very big question.
And a lot of the incumbent attempts to add privacy to AI are based on, you know, air-gapped systems,
sequestering data, sequestering cloud environments, or hardware-based security,
but they aren't based on first-principles innovation in cryptography.
Lagrange takes the approach that it's going to go to first principles.
It's going to rebuild the
mathematical fundamentals of how we can bake privacy into AI, and then we're going to purpose
it, as we do, across dual-use applications.
Let's maybe just kind of talk about one of the use cases that people are most familiar
with, which would be like chatbots, right?
Talking to ChatGPT or Gemini, and how the inference is generated in those cases versus, say,
some of the other more private
or privacy-preserving chatbots like Venice, for instance,
and then contrast that with a first-principles approach.
Because it sounds like what you're saying is
that those systems don't necessarily use a first-principles approach, is that correct?
Yeah, I think Venice uses TEEs.
I mean, Venice is like, they take open source models,
they deploy them on the network and they make it private,
which that's great.
I think that's fine for some people.
You're not going to be having a drone swarm that needs
to maintain privacy over computer vision inputs that's using Venice, right?
You're not going to have an LLM used, you know, by a field operator to determine a plan
of action for some type of defense purpose that uses Venice.
These are interesting curiosities that I think are built by crypto companies, but they tend
not to be commercial applications.
I would argue that for something like OpenAI or Anthropic or DeepMind to actually take one of their closed source proprietary models and add privacy on top of that, it requires a solution that is low touch, doesn't degrade performance, and doesn't result in the underlying model IP being leaked, right?
If you deploy OpenAI's model on Gemma, sorry, on Venice, you're effectively just going to lose all the IP
on that. The weights, the biases, the architecture, all become public. It's a very different
question, right? So if you want to use frontier closed source models, you have to be able
to bake cryptographic security in at the level of the model developer. And that's what
zero-knowledge machine learning does. It lets you add privacy on top of models and lets you add
verifiability on top of models wherever they're deployed.
So just so we understand this correctly, a model that has
ZK properties has to be trained,
so from the ground up, it has to be trained as a ZK model?
Or can you take an existing open source model and make it ZK?
No, that's not what I'm saying.
What I'm contending is that you don't want to use an existing open source model because
the performance of existing open source models is not good.
The majority of commercially interesting models are not existing open source models.
Most people, when they use a chatbot, they're using Claude, or they're using,
you know, GPT-5, or they're using, you know, Gemini.
They're not using Gemma and GPT-2.
Right.
Those are interesting applications.
If you really want to ask your LLM something you don't want the world to know,
and you, you know, you care so much about the privacy of what you're asking, then you probably
shouldn't be asking those questions to begin with.
Generally, where I see ZK machine learning being useful is in applications where these are frontier proprietary
models, and you need to protect both aspects of privacy: the privacy of the input data the person
sends in, and the privacy of the model.
Okay, interesting.
So what you're saying is that for any model, right,
any model, the efficiency of that model and its ability to be a private model are very much tied
to how it's trained? Or are those two things uncorrelated?
Yeah, I think for any AI model in the world, right, if you develop the best AI model to do
something, you're not going to open source it, right, unless you run a charity. But, you know,
if you develop a great generative model for video, right, Veo 3
or Nano Banana 2,
You're not going to open source that.
You're not going to send it over to China.
You're not going to give it to your competitor.
You're going to deploy a website,
and you're going to have an API,
and behind that API, your model is going to sit on a server,
and people are going to come in,
they're going to ping it,
and you're going to charge them per credit.
You're not going to give it to them for free.
So you effectively need to protect privacy of two things
when you're talking about AI.
You need to protect privacy of IP, intellectual property
for the actual model.
And you also, in many cases, want to protect consumer privacy.
So the data that someone is sending to that model. A lot of the private AI solutions are just taking open source models that are publicly available and deploying those in trusted execution environments or in a cloud somewhere that they call air-gapped.
And they are not actually adding privacy on top of commercially relevant models.
Right.
Okay.
They're not taking the frontier models, Nano Banana 2, Veo 3, Claude.
They're not adding privacy on those.
They're taking Gemma 3, with a billion parameters,
that nobody's going to use for anything,
and they're peddling it as a curiosity.
How much of ZK machine learning
and the ability to do privacy within models
is hardware dependent?
None of it.
None of it.
It's not, I mean, you get faster inference on GPU
and you get faster proving on GPU, right?
So there is obviously some questions of hardware.
But what I meant by hardware dependent is, like,
how much of that is
dependent on the ability of TEEs?
None of it.
Not at all.
ZK machine learning is entirely just mathematics.
Got it.
Okay.
Which is arguably why it's more interesting
because it doesn't require any specific hardware
to have provable AI that's also private.
Precisely.
Interesting.
Okay.
So a lot of the focus around Lagrange,
which you guys have sort of been focused on
in the last couple months,
and certainly the announcements coming from the company,
have been around military and industrial applications.
Why is there such an importance here within these very highly sensitive applications for AI and privacy?
What's the kind of edge that you guys have there?
I love this question.
So Lagrange, for the majority of its history, has been a commercial-focused company.
We've developed privacy solutions and cryptographic assurance for AI usage as well,
other types of computation that were relevant database computation earlier in our history,
for solely commercial applications.
There's a belief that I have, though, and this is something that I've seen talked about
by Alex Karp in The Technological Republic.
It's talked about publicly by Palmer Luckey a lot: businesses pursuing incremental and
trivial applications of technology that marginally improve consumer life instead of materially
influencing national security and defense, right?
This is a generation of businesses that have been developing chat apps.
It's a generation of food delivery and marginal convenience improvements in the Bay Area.
It's the generation of engineers who are brought up to build a slightly better delivery
app instead of trying to close the hypersonic missile gap with China.
It effectively has separated arguably the most competitive and technologically advanced area
of this country from all areas of national security and defense.
And this has been like what U.S. tech was for the better part of 15 years, right?
Before Anduril and Shield AI and some of the more American dynamism and reindustrialization
companies started raising very large rounds the last three, four years, there was very little
venture financing that deployed into anything in traditional sectors that was doing aerospace,
defense, or national security.
And that's not a successful state of a country.
What we require as a nation is for the most innovative people of that country, the people
who build the best technology, who push the state of the art, who compete to develop the things
that make our nation special, to purpose that, not just to improve the quality of life of
people in Silicon Valley who want their food delivered on a drone, but to purpose it to increase
freedom, to increase the hegemony of the United States globally.
And to do that, you require dual-use applications of frontier tech.
And so when I look at crypto, I see very much the state of Silicon Valley 10 years ago, right?
We've had an entire industry that put a billion dollars into frontier cryptography development
for zero-knowledge proofs, fully homomorphic encryption and consensus.
Well over a billion if you include consensus: probably a billion for
ZK, a couple hundred million for FHE in funding, and maybe, you know, five, six billion for
things that were raised with novel consensus, maybe three, four billion.
But all of that was purposed for one thing in crypto:
How can I launch a more efficient token?
And so we've effectively put America's best in cryptography, consensus, mechanism design,
distributed systems, all of this, not into building things that make America safer, make
America more secure, allow us to dominate internationally; we've instead purposed
it at things that just flip a quick buck for investors wanting a token.
When we looked at this, we realized there was an opportunity to take things that were developed
in crypto that are frontier hard tech and to purpose that outside of crypto, not just to commercial,
but for dual use, for defense, national security, military, and government.
And that's why Lagrange started doing this.
We have three professors in applied cryptography in the U.S.; our research team is led by Babis Papamanthou,
who chairs the cryptography department at Yale.
We have eight PhDs in applied cryptography on our team.
We have multiple patents we've filed in the U.S. on novel proof systems.
We have five or six papers we've authored over the company's history.
And yet, like many other companies, all we were purposing this for was launching
tokens. Now, I think there's a tremendous commercial opportunity to instead purpose that,
not just us, but the whole industry, to things that are a lot more important. Yeah, I mean,
I think it resonates with me this idea that a lot of our, a lot of the innovation in tech
in the last 20 years has been around increasing convenience. Maybe this is also sort of a
reflection of just like how cheap capital has been in the last, you know, 20, 30 years.
And also like a relatively peaceful state of the world in the last 20 years.
I mean, certainly since the Cold War.
Debatable.
Yeah.
Right.
I mean, debatable.
But I mean, people's everyday sort of, like, considerations were
no longer whether or not, you know, a nuclear bomb would be dropped on their heads,
which arguably was the case, you know, for the generation before us.
Now the world dynamic is definitely different.
And I think that is reflected in the fact that there's more capital for
military and industrial applications.
But the return profile on those, I think, is much different than the, you know,
the delivery-app type of things for investors.
How do you think that's going to change?
I mean, already I think we're seeing that changing in capital funding in crypto.
How do you think that's going to change the entire sort of VC landscape, where, you know, the time to return is probably a lot longer in a defense company that's building, you know, drone swarm technology than in, you know, a delivery app that's going to be sold to private equity in, like, a couple of years or maybe
sold off in, like, the next funding round? Yeah, I mean, I would actually disagree. I think that the
reforms in procurement have allowed defense and national security focused companies to see
substantial growth at rates that are comparable to even the delivery apps, right? You look at public
markets, Palantir has been one of the fastest growing companies on the equity markets. I mean,
Palantir is a comp of, you know, what you want your public company to do. And you can
compare Palantir's performance over the last, you know, let's say 48 months to Uber,
DoorDash, and other more convenience-oriented companies; there's no question that
you would want your money parked in Palantir. It's a hard tech problem, right? You look at
another dual-use company, SpaceX, right? SpaceX is IPO-ing, and it's going to be one of the largest
IPOs of all time, right? I'm hearing people saying one and a half trillion has to be the
valuation, right? I mean, these are incredibly hard-to-fathom outcomes, right? I mean, Uber's what,
like a 50 billion market cap business? And, you know, Palantir, I think, is over a trillion.
SpaceX could be over a trillion, and Anduril's going to come out probably within 24 months.
And once again, it's going to be a fantastically large business. Their growth has just been
tremendous. I think the fact that the prime defense contractors,
the Lockheed, the Raytheons, the Northrop Grummans, the General Dynamics, the General Atomics,
they have much more competition now, and there is a much larger willingness within the Department of War and other agencies
to purchase technology and to solicit bids for technology from non-traditional defense contractors.
And that is going to mean significantly more innovation, significantly more alignment between U.S. private sector innovation and U.S.
military and U.S. defense capabilities. I think that just starts with the willingness of any
administration to purchase defense-relevant technology from non-prime contractors. We've seen that
that is a trend. So I do think the return profile there is going to be very, very strong.
And I think that's why we've seen so much investment in this, right? I think last week it was,
I can't remember the name, a hypersonic missile company that was closing the gap raised, you know,
three, four hundred million.
And you don't see rounds like that for things outside of American Dynamism that are just,
you know, foundation models in Web2 venture right now.
As you were talking, I was looking at the market cap for Lockheed Martin and Raytheon.
I had no idea what these were.
But, yeah, Raytheon is like 240 billion.
Lockheed's 100 billion.
Kind of interesting, though, that software
companies in the space like Palantir are, you know, chasing valuations that are over a trillion.
Or, I think, 450. I looked it up. I was wrong.
I was wrong.
Oh, okay.
$450 billion for...
Yeah, $450 billion.
Yeah.
For Palantir.
Yeah.
While the, you know, the guys building jets are far below that.
Like, I guess, like, in terms of the, you know, the geopolitical aspect here, you wrote somewhere,
I think it was in a CoinMarketCap article.
I've got the quote here.
That was interesting.
So superiority in software has become the most efficient way
to sustain U.S. leadership worldwide.
I mean, I think that's true.
I mean, to a large extent,
a lot of that software superiority has gone
into things that are maybe more convenience driven
and now that the world is changing.
There's a reaction to that
and that is going down into more military
and industrial applications.
To the extent that that has been possible because of the availability of chips due to somewhat friendly commercial relationships with China and by extension Taiwan,
the ability for the U.S. to be superior in software has been dependent on access to chips.
And that access is now possibly compromised in 2027.
Yeah.
I would say broadly that the ability to be superior in software does come out of chips,
but it also comes down to energy independence, which China also has risks if they move on Taiwan, right?
The majority of Chinese oil moves through the Malacca Strait, which effectively, if Chinese energy was significantly impaired,
having large amounts of chip access
would not necessarily improve
their global position in software or AI development.
But yes, it would substantially harm
the United States position in AI development
if chip access from Taiwan was significantly reduced
or cut off.
And chip manufacturing in Taiwan was cut off.
And I think there's a recognition of this
and there's efforts that both private companies
and then obviously the US government is taking
to improve
domestic chip manufacturing and build a foundation for U.S. electronics chip manufacturing.
But it's very unlikely that this will catch up to the capacity that Taiwan can produce
within a reasonable period of time.
There is, I mean, there is like the time it takes to reach capacity, which I think,
you know, we're probably talking about, you know, in the decades.
Decades, yeah.
But there's also access to the rare minerals that make those chips possible to produce.
and I think those are mostly Chinese.
Is the West, kind of by that fact, doomed long term unless there are, you know,
sustainable business relationships with China, commercial relationships with China, that allow them to
purchase these rare metals and also manufacture chips at a scale that allows us all to grow?
No, I would go the other way. And I would say that if China is not able to effect a military action against Taiwan by 2030, and they've publicly said 2027 is the point that they're going to do this, and most intelligence analysts expect 2027, China has a substantial risk of never being able to, and thereby significantly reducing its
prominence internationally. The Chinese navy is effectively bottled up in the South China Sea. They're
blocked off by the Ring of Islands consisting of Japan, Taiwan, and the Philippines.
All of their oil moves through the Malacca Strait. Their population is aging. Within 15 years,
the average age in China is going to be the same as the average age in Japan,
which is not very good. Chinese equities have not been doing well. The problem with the SOE system,
with state-owned enterprises that China has is that they are not as competitive at implementing
and developing new technology as U.S. counterparts.
They don't have an open and free market economy.
There is brain drain from China.
There's a lot of intellectual talent in China that would rather study and work in the United
States in a free country than they would within the hierarchy of a state-owned enterprise in China.
I actually don't think that China's position is necessarily very strong.
We also have seen Chinese military equipment fail.
Pakistan was mostly using Chinese military equipment during the recent conflict with India,
and most of the equipment didn't work.
With the recent strikes by Israel on Iran, most of the anti-aircraft systems that were being used by Iran were Chinese,
and most of them didn't work.
So there's levels of grift and corruption and inefficiencies in the Chinese system that are intrinsic to how the Chinese government
structures control and enterprise. And I think a lot of that boils down to historical issues,
issues over freedom, and issues over free markets. And so I think that the reindustrialization
narrative that the U.S. private sector has been financing very aggressively, so building
U.S. domestic manufacturing capacity, building AI manufacturing, building the capacity
to mine and to refine rare minerals in the United States, building chip production in the U.S.,
building drone production in the U.S., building decentralized manufacturing so that in a
contested time,
there's a very hard-to-knock-out manufacturing base, plus energy independence, small modular
reactors.
The U.S. is very far ahead, I would argue, in the economic structures that will allow it, over
significant periods of time, to innovate and outperform China.
One area that I think is kind of interesting, well, I mean, I guess a military
development that has really accelerated since the war in Ukraine, is drone technology,
and particularly the use of low-cost drones to achieve, like, all of the sort of
positive advancements we've seen in Ukraine there.
And it seems like, I mean, from where I'm sitting,
the U.S. approach has been to kind of like throw lots of money at this problem.
And with like no boots on the ground, right?
Whereas like Ukraine has been actively pursuing this problem and
demonstrating that they're able to like utilize cheap commercial drones for all
sorts of military applications.
Yeah.
What's the, like, to what extent is the bottom-up approach that we're seeing in Ukraine
a more effective approach to generating outcomes than, like, spending billions of dollars
on, like, new drone technology that may or may not be, I mean, may be used in conflict,
but, like, by the time that technology gets used in conflict, there's been, like, billions of
dollars spent on that technology?
We'll only know if it's effective, right,
like, when it's been used?
That's the other thing.
Yeah, I would say it's slightly different than that, right?
So a lot of the drones being used in Ukraine right now
are repurposed civilian drones, right?
So DJI drones, like the DJI Mavic.
And a lot of these are FPV drones that are being, you know,
flown by an operator, like the racing drones, in an FPV view,
that they strap the RPG on top of.
And in contested environments,
they strap a fiber optic cable on.
And so these are like very unsophisticated but very effective
improvised munitions, right?
Which are, I think, in many ways,
a great piece of evidence on how warfare is evolving
and how battlefield strategy is evolving.
And I would argue that U.S. drone doctrine
has historically prioritized very large platforms, right?
The U.S. predator drones and the Reaper drones
have been highly effective at pinpoint strikes on targets
in the Middle East, in the war on terror:
the General Atomics Reaper platforms and Predator platforms.
Now, we're seeing both the Air Force and Navy prioritize the new CCA initiative,
the collaborative combat aircraft, which are effectively unmanned UAVs that pair with
existing U.S. fighters.
So you'll have an F-35 and you'll have five CCA drones, which are very large combat aircraft,
flying alongside it, which is going to hopefully increase the capacity of U.S. aerial programs
to defend and to effect missions in highly adversarial environments, both in terms of combat
with other aircraft and then combat on ground strikes. But so it is true that a lot of the
U.S. platforms are much larger than the platforms that are being used in Ukraine right now, right?
The majority of the programs, the Shield AI X-Bat and V-Bat, the Anduril Fury, the Anduril Omen,
these are like Group 3, Group 5 UAVs.
You know, the U.S. has groupings for what they consider UAVs.
Most of the ones used in Ukraine right now are the Group 1 and the Group 2 UAVs.
So very small, managed by a single operator.
Now, it is true that the U.S. has less manufacturing capacity for small drones
than China does, and also has less competitive swarm technology than China does, which is actually
one of the problems that Lagrange is currently working on, which is how can you have swarms
that operate in highly contested environments when data links are knocked out? How do they effectively
form consensus and reorganize themselves to be able to effect a mission outcome, even if half the
drones get shot down, lit on fire, jammed, etc. But I would say that U.S. battlefield doctrine is
changing, and there's going to be a prioritization of both very large platforms and small
platforms. Hegseth has talked publicly a lot about that. I think there needs to be catching
up that the U.S. has to do here. But I wouldn't say the U.S. isn't learning from the Ukraine
war. I think it's probably one of the areas that the U.S. is studying most aggressively.
It's also worth mentioning that probably one of the most effective and undermentioned pieces
of battlefield technology in the war in Ukraine is electronic warfare, jamming things that knock out
drones. And the U.S. has substantial work that it's doing
in this area, right? So how can you effectively knock down large amounts, large swarms of unmanned
vehicles as they try to effect some action on the battlefield? So let's talk about verifiable AI for
defense and some of the work you've been doing with Anduril. You announced recently that
Lagrange was building within their Lattice SDK, which I'm not super familiar with. Maybe
we can talk about what that SDK is, and also how Lagrange is integrating with the Lattice SDK,
and how relevant ZKML is in that context.
Yeah.
So to take a step back, one of Lagrange's core theses is that there is a gap in cryptography
within national security and defense.
Effectively, the modern paradigms for implementing cryptographic security in defense systems
are based on air-gapping the
system. So just private clouds, private networks, private links, and the assumption that
these communication links can't be broken since they're encrypted, which broadly is true.
Where we believe this type of framing falls down is in highly decentralized combat environments,
which is what we see in modern drone warfare. When you have hundreds, if not thousands
of different drones across potentially different manufacturers that need the capacity to form
consensus or form agreements on the state of a battlefield, and then be able to effect action
subject to the onboard AI or whatever command and control system provided them with a mission
objective.
And so when we look at that problem, it's very analogous in our mind to the problems we see
in crypto, where you have 10,000, 100,000 nodes globally that need to reach consensus
on the state of a shared ledger very, very quickly based on a set of transactions that could
be submitted at any single node at any point in time.
It's very similar in our view to the question of how drones reach coordination in
contested environments.
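The analogy described here, drones agreeing on battlefield state the way blockchain nodes agree on a ledger, can be illustrated with a minimal Byzantine-fault-tolerant vote. This is an editorial sketch in Python, not Lagrange code, and every name in it is illustrative:

```python
# Editorial sketch (not Lagrange code): drones agreeing on a shared battlefield
# state, analogous to Byzantine-fault-tolerant consensus among blockchain nodes.
from collections import Counter

def swarm_consensus(observations, n, f):
    """Accept a state only if at least n - f of n drones report it.

    f is the maximum number of drones assumed faulty or compromised;
    classical BFT safety requires n > 3f.
    observations: mapping of drone_id -> reported state.
    """
    if n <= 3 * f:
        raise ValueError("BFT requires n > 3f")
    tally = Counter(observations.values())
    state, votes = tally.most_common(1)[0]
    # A quorum of n - f matching reports tolerates up to f Byzantine
    # (arbitrarily wrong or malicious) reports.
    return state if votes >= n - f else None
```

With four drones tolerating one fault, three matching reports yield agreement; a two-two split yields no decision, which is the safe outcome.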
And so a lot of the technology that Lagrange is building orients itself around adding enhanced
cryptographic security to the use of autonomous weapons and drones in highly adversarial
environments. And so that's things as simple as proving the computer vision models that run on top
of drones on edge devices, or proving the correctness of command and control systems and the AI that
runs on command and control systems. So Anduril's Lattice SDK is an open SDK developed by Anduril
to coordinate across assets and drones that are built by Anduril as well as by other companies,
wherein drones can interface, coordinate, and allow an operator to control those drones and complete downstream mission objectives.
This is everything from their Seabed Sentry and UUVs all the way through Omen and Fury and a lot of their larger airborne platforms.
And so what Lagrange built was a proof of concept demonstrating how a command and control system that made relevant battlefield decisions with AI could be made accountable and provably correct.
We've open-sourced it, we've released it publicly, and we think this is hopefully the start
of enhancing how cryptographic security is going to be used in defense relevant programs
that involve autonomous systems.
You guys have written about accountable autonomy and why that's important.
What does that mean?
And why is it relevant to the context of AI and defense?
Yeah.
There's a few places where accountable autonomy really matters, right?
If your baseline assumption is that in a contested environment, you only have crash fault tolerant requirements,
so there are no Byzantine actors,
then you can assume that the drone's onboard AI can't be compromised.
But if the drone's onboard AI can be compromised, and in theory someone can jam the thing,
spoof communication with it, take it over, update the software, then it's very,
very important that you can prove the correctness of the model running there, such that the
outputs that you're getting back, if another UAV is communicating and interacting with the first
asset, are provably correct, right?
It's the same thing as crypto, right?
How do you verify that an AI model was run correctly on the blockchain?
We can't have everybody rerun it.
So you have to have the person who ran it prove it.
That's very true if you're assuming Byzantine assumptions.
Now, it's debatable if you should assume Byzantine assumptions in a war zone or not.
We're not going to get into that.
Some people think you should.
Some people think you shouldn't.
Someone once said to me, we can't trust the hardware, the hardware blows up.
So, there you go.
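The "prove it, don't rerun it" pattern described here can be sketched as follows. This is an editorial sketch: the hash below merely stands in for a succinct ZK proof, and unlike a real proof it does not cryptographically bind the output to the model's execution. A real ZKML system such as the one Lagrange describes replaces it with actual proof generation and cheap verification. All names are illustrative:

```python
# Editorial sketch of the "prove it, don't rerun it" pattern. The hash here is
# only a stand-in for a succinct ZK proof; a real ZKML prover emits a proof
# that binds the output to the model's execution, which a plain hash of the
# claimed triple does not. All function and variable names are illustrative.
import hashlib
import json

def run_and_prove(model_fn, model_id, x):
    """Prover side: run the model once and emit (output, proof)."""
    y = model_fn(x)
    claim = json.dumps([model_id, x, y], sort_keys=True).encode()
    proof = hashlib.sha256(claim).hexdigest()  # stand-in for a ZK proof
    return y, proof

def verify(model_id, x, y, proof):
    """Verifier side: cheap check of the claim, with no model re-execution."""
    claim = json.dumps([model_id, x, y], sort_keys=True).encode()
    return proof == hashlib.sha256(claim).hexdigest()
```

The point of the pattern is the asymmetry: only the prover runs the model; every verifier performs a cheap check against the claimed (model, input, output) triple.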
But now, if you have a command and control system and you want to ensure that the outputs that are being consumed by the end device or the edge device are correct, that's another place where you need zero knowledge machine learning.
Effectively, how can you ensure that one system that's controlling a bunch of downstream assets or one system that an operator is using and one AI model that operator is using to control a bunch of downstream assets is in fact only making, for example, kill chain relevant decisions,
as a result of the correct AI models being used in the command and control system.
That's one of the use cases that we think is the most relevant, right?
So assume you have an LLM whose job is to coordinate some mission objective.
It's supposed to take in a bunch of sensor input from a bunch of UAVs
and then coordinate some subset of UAVs to go and, you know, effect some battlefield action.
In this case, you would require that the UAVs effecting that
battlefield action would only do so if there is accountability that the
correct model using the correct inputs has in fact recommended that action. Otherwise, kill chain
decisions shouldn't be made unless the correct chain of command, the correct chain of both data
custody and inference custody, has been followed. This is what we mean by accountable autonomy.
That autonomous systems are not just operating as these black boxes making random battlefield
decisions, but that those decisions are made entirely by an operator or entirely by a
model wherein there are hard constraints put in place by an operator.
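The accountability chain described here, correct model, verified inputs, operator in the loop, can be sketched as a gating function. This is an editorial illustration, not Lagrange or Anduril code: `verify_proof` stands in for succinct ZK verification, and the model allow-list and field names are invented:

```python
# Editorial sketch of "accountable autonomy" gating: an asset acts on a model's
# recommendation only when every link in the chain of custody checks out.
# verify_proof stands in for succinct ZK verification; all names are invented.

APPROVED_MODELS = {"targeting-v3"}  # hypothetical allow-list set by doctrine

def authorize_action(recommendation, proof, verify_proof, operator_ack):
    """Return True only if the full accountability chain holds."""
    model_id = recommendation["model_id"]
    # 1. The recommendation must come from an approved model.
    if model_id not in APPROVED_MODELS:
        return False
    # 2. The proof must show this model, on these inputs, produced this
    #    output (verified cheaply, without re-running the model).
    if not verify_proof(model_id, recommendation["inputs"],
                        recommendation["output"], proof):
        return False
    # 3. Per current doctrine, a human operator stays in the loop.
    return operator_ack
```

Any failed link, unapproved model, invalid proof, or missing operator sign-off, blocks the action; the kill chain decision cannot be made by an unaccountable black box.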
Sorry, what's a kill chain decision?
A decision to effect, let's say, a kill action, to fire a missile, to use some armament
for a battlefield purpose.
Okay.
And in this context, I guess the assumption is that the AI is providing that kill chain
decision and then an operator is deciding to execute that decision or not.
That's often how it works, right?
So it's a structured, multi-stage process describing how an attack is conducted.
So, you know, under what circumstance, under what assumptions, under what sensor feed
inputs should an autonomous system engage in an attack?
Right.
Should an operator be in the loop?
Should it just entirely be AI?
If it's entirely AI, this is a place where we highly believe that verifiability is paramount.
Right.
I guess, like, so I'm not super familiar with this, but I suspect there are some sort of Geneva Convention laws that relate to the use of AI in military applications.
I'll have to check.
I don't believe so, yeah.
Let's assume that, you know, there are or there may be, like, in the future, some Geneva Convention rules around, like, the use of AI in war.
To what extent are provability and accountable autonomy important then in ensuring that decisions made by AI are, in fact, lawful?
And I guess this could also extend maybe to domestic use of AI in policing, for example.
It's not only in the military context, but also, like, national policing.
Yeah, this is a great question.
So I would say that the use of autonomous weapons is very, very contentious right now,
especially autonomous weapons that don't have an operator in the loop.
So effectively a drone that can make a kill chain decision without an operator saying, hey, yes, I approve this to occur.
The U.S., I believe, doesn't have autonomous weapons that can make kill chain decisions without operators being in the loop.
That's part of U.S. military doctrine: there always must be an operator in the loop.
That's my current understanding.
I don't know if that's changed.
But, you know, to be honest, and Anduril has talked about this, Shield AI has talked about this, the use of autonomous weapons is not a new thing.
This feels new, right?
You know, when Caesar was fighting Vercingetorix,
he dug spike pits surrounding the Gallic army.
That is nothing more than an autonomous weapon.
You walk and you fall into the spikes.
Right?
They don't have to be there.
You can go anywhere you want in the middle of the night.
There's a spike that you can step on.
World War I, World War II, the use of mines.
All of these are autonomous weapons.
You get too close to a mine.
You could be a school bus.
you could be a family of refugees, and there are horrific implications.
So the idea that, you know, an armament now has a computer vision model that says, hey,
if this is a Russian tank, yes, explode.
If this is a school bus, don't.
That's a net positive.
That's substantially better than the state of the art of a mine that doesn't have any type of AI usage, right?
But just the question of, hey, we're going to outright ban the use of AI in weapons,
that's actually much worse than the alternative.
I'd much rather that a missile has onboard guidance to reroute and avoid hitting the wrong target, right?
If it's firing at a target and the comms link is jammed, I'd much rather it correctly re-aim and hit the correct target instead of hitting a school.
All of this stuff I think is super important.
The more precise weapons can be,
and AI is an incredible way to increase precision of weapons,
the less collateral damage and casualties you'll see.
And so that's something I do firmly believe.
But to your point, I do believe that there have to be conventions
and organization around how these decisions can be made such that they are lawful.
How is auditability conducted?
How can we quickly audit to make sure that all uses of
autonomous weapons are following a predefined set of rules, right?
Is the AI operating correctly, right?
If a weapon missed, is it the fault of an operator,
the fault of the onboard AI, or the fault of an adversary taking over the system?
Is it the fault of the system being jammed or suffering kinetic interference?
All of this stuff is super important.
And when you add AI, you open up a bunch of questions over what verifiability and auditability
of AI mean. And that's where zero knowledge machine learning is super, super important. And that's been
one of our very large pitches that we've made to almost everyone we can talk to in defense,
that zero knowledge proofs provide this way to glimpse inside the black box to determine that
the output that you got back that indicated that an action should be taken has come from
the correct model with the correct set of inputs, which allows you to determine why the decision
was made, under what circumstances the decision was made, and under what circumstances the
decision should be adjusted, the model should be adjusted, or things should be changed
going forward. Yeah, I was thinking about this. Like, as with a lot of things with
AI, I think it quickly becomes a race to the bottom where, you know, if we have increasingly
more and more AI in military decision making, then, like, at the bottom, it just becomes
about taking out the other person's AI, right?
If we're narrowing every military decision to, like, is this a military target?
And at the same time, you have, so it's like increasing use of AI, then at the end,
you're just trying to take out the other person's AI or the other country's AI.
And you're no longer trying to take out anyone's, like, military or, you know, infrastructure.
It's all about just taking out the AI.
But I would say taking out AI has a lot to do with infrastructure, right?
I mean, I think that's the whole
reindustrialization story.
It's the energy grid.
It's where you train the AI.
It's where you run the inference in data centers.
It's where you, I mean, I'm naming a whole area of infrastructure, but.
Oh, yeah.
I got you.
Of course, like, yeah,
there are, you know, electricity infrastructure,
IT infrastructure,
but, like, yeah, at the end of the day, it
comes down to that, where then I guess,
like, you know, having decentralized AI
that's able to function autonomously
in more of a mesh system
becomes more interesting.
It does.
I want to talk about the token here because, you know,
I know you guys still have some applications and products that are geared more
towards the crypto applications, the crypto use cases.
But, you know, when you're talking to Anduril or Lockheed about implementing
Lagrange technology, and the token has a lot to do with the
decentralized nature of the ZK verification aspect of the product.
How does that conversation go?
And I mean, does the token have anything to do with the military applications?
Or is it strictly for the crypto-related use cases?
Yeah, so the token is staked into Lagrange's Prover Network,
which is a decentralized network of operators that can generate proofs
within an auction mechanism that has been designed and developed by our team.
Work is assigned into the network, the network generates proofs,
and sends those proofs back to whoever requested them. The decision of whether or not you want to deploy DeepProve or any of
Lagrange's technology in a decentralized or centralized capacity just depends on whether or not
the person in question would prefer one or the other. I would say for all of our crypto use
cases, which is a lot, we work with probably every AI crypto company in the space right now.
We've announced partnerships with Sentient, Gaia, Mira, ZeroG, OpenLedger, really all of these
crypto x AI companies. It all runs through our Prover Network,
which powers all of the ZK machine learning that we do in crypto.
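The staked-operator auction described here can be sketched as a simple reverse auction: sufficiently staked operators bid to generate a proof, and the cheapest eligible bid wins. This is an editorial illustration; the actual Lagrange mechanism is more involved, and every name below is invented:

```python
# Editorial sketch of proof-task assignment in a staked prover network via a
# simple reverse auction. The real Lagrange auction mechanism is more involved;
# all names here are illustrative.

def assign_task(task_id, bids, stakes, min_stake):
    """Pick the cheapest sufficiently-staked operator to prove task_id.

    bids: operator -> price quoted to generate the proof.
    stakes: operator -> tokens staked (slashable on misbehavior).
    """
    eligible = {op: price for op, price in bids.items()
                if stakes.get(op, 0) >= min_stake}
    if not eligible:
        return None  # no operator meets the stake requirement
    winner = min(eligible, key=eligible.get)
    return winner, eligible[winner]
```

The stake requirement is what ties the token to proof generation: operators with skin in the game compete on price, and misbehavior is economically punishable.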
Now the defense applications, it'll just depend on the decisions that our defense counterparties
want to make.
I can't promise one way or the other.
We have decentralized deployments, and our software also works in centralized settings as well.
The Lagrange Foundation is the foundation that issues the token, and Lagrange Labs is the development entity
for the Lagrange Prover Network, as well as DeepProve and a bunch of other technology in cryptography.
And we are very excited to see growth of our
technology across dual-use purposes, right? Think of Palantir. There are very different deployments
of Palantir for the public sector and for the private sector. You know, Gotham, for example, is a platform
that Palantir develops entirely for police forces. Something like Foundry and AIP are really
strongly commercial, but also have government applications as well. And I think that Lagrange is
committed to being a dual-use company, which means that we will continue to expand our adoption
in crypto, will continue to expand our adoption in enterprise, and we will continue to expand our
adoption in government defense. What are your thoughts on Europe in the sort of military geopolitical
landscape, specifically when it comes to AI-driven military technology? Yeah, I mean, look, I think
there are some great military and defense contractors out of the EU, out of Europe: Rafael,
Thales in France, BAE Systems in the UK. I would say that these are more legacy-style
primes more so than they are, you know, up-and-coming, highly competitive manufacturers
of new drone platforms, new AI platforms.
But, you know, we know the teams, we met the teams at these companies and we, you know,
think they're fantastic, frankly.
But I would say that, you know, where I think a lot of the innovation is happening in Europe
in modern military doctrine around drones is obviously in Ukraine, which we touched on before, right?
Ukraine uses AI in drones.
One of the very useful things you can use for AI in drones is allowing those drones to operate in jammed environments.
So, you know, assume you have a bunch of electronic warfare jammers that are trying to knock out an operator link.
Right.
The one way you can do that, the very naive way, is just attach fiber optic cable to the drone.
This is what a lot of the FPV drones are doing.
And, you know, now Russia has launched these, like, big mothership drones that have, like, six kilometers
of fiber optic cable attached to each of the smaller drones.
You know, I personally don't think that that's a very future-proof strategy.
There's, you know, kinetic issues when you have a bunch of fiber optics attached.
But AI provides another alternative, which is you can still jam the operator link.
You can have a bunch of electronic warfare jamming it.
But if the drone can complete its objective in the last mile without needing an operator link
because it has onboard AI, then it can do this.
This is, if you remember, the Ukraine operation where they attacked the Russian jets,
that took out the Russian jets.
All of those drones, I believe, were using onboard AI,
since the Russian air base was obviously jamming, and so these models were trained to recognize weak points in the Russian jets.
They got close enough, and then the drone was able to kind of loiter as required,
and then be able to effect that detonation on top of the Russian jet at the weak spots.
And that wouldn't have been possible without onboard AI.
So I think that there is substantial military innovation in Europe right now.
And I think a lot of it is driven by the war in Ukraine.
But I'd say like the world leader right now, I'd argue in drone innovation is the U.S. and China.
With the advent of AI in military technologies, it feels like a huge turning point from most of the military technologies that we've had up until now, which have been mostly kinetic.
To what extent do you think there's a sort of runaway phenomenon where, when one country or military alliance really takes off, the other ones can no longer catch up just because of their superiority?
That AI superiority just makes them so much more powerful.
Is there a new sort of dynamic at play here that really changes the trajectory of military advancement in a way that we haven't seen previously?
I think warfare continually evolves, right?
And I think the strategies that are employed by nations to counter the strengths of other
nations' militaries just continually change, right?
I mean, the entire U.S. power projection platform is often predicated on U.S. aircraft carriers,
wherein, you know, a U.S. carrier group can effectively go into very deep blue waters and project
U.S. authority and U.S. power anywhere in the world.
And so what has China done to try to combat that?
Well, they've developed, you know, cutting edge hypersonic missile technology.
And in a conflict with China, and Pete Hegseth has talked about this, so it's not
me making it up, it would be very, very hard for U.S. carrier groups to be able to combat
just a large barrage of hypersonic missiles fired by China at U.S. aircraft carriers.
And so if the U.S. aircraft carriers could be knocked out in the first 20 minutes of a conflict
with China, it would substantially harm the U.S. position, right?
And so there's obviously a lot of work being done in the U.S. to improve the U.S. domestic
hypersonic missile manufacturing capacity as well.
Right?
You look at, for example, the role of tanks in the war in Ukraine, which I would argue is analogous
to this, right?
A lot of these have effectively been relegated to become artillery more so than these forward-moving, heavily armored platforms,
since FPV drones that cost $1,000 with an RPG attached can disable and take out these tanks at will.
So a lot of these tanks have now, both on the Ukrainian side and the Russian side, been set back under effectively, like, mosquito netting to stop drones from getting through, where they can lob
effectively long-range artillery shells, but they aren't kind of these forward-advancing units.
And it's an entirely different paradigm.
I think this stuff continually changes.
So you look at, like, what I would argue makes the U.S. so special, and what has resulted in the
U.S. being the strongest country in the world for, you know, a couple hundred years,
is effectively the market.
It's the fact the U.S. can commercially compete by having a tight alignment between the public sector and private sector.
This is the same story of the U.S. military manufacturing base, U.S. manufacturing base during World War II.
It's the same story of DARPA funding research across Massachusetts and San Francisco areas during the Cold War.
And it's what we're seeing now with the manufacturing of weapons and autonomous weapons
happening in these kinds of reindustrialization and American dynamism companies.
When you align these things and you have what is the largest, most productive,
and most competitive free market in the world putting resources behind it, you're
able to produce stuff that other countries cannot.
This is why I tend to be very short on the whole state-owned enterprise model's durability long
term.
I don't think you can centrally plan an economy long term.
I agree.
And we'll have to leave it there.
Thank you so much, Ismael, for coming back on
and sharing your thoughts on this really interesting topic, which I think is only going to get
more interesting as time goes on. Unfortunately, I mean, fortunately or unfortunately, you know,
war continues to be something that happens across the world, and I don't think that's
going to go away anytime soon. And if the Western world can continue to project its
values through these technologies, then I guess that's a better state of affairs than the alternative.
Yeah. I don't think anyone in crypto wants to live in a world where the West doesn't win.
It just is not amenable to the quality of life, the standard of living and the freedoms
and civil liberties that we've grown accustomed to. And I think there's a moment where there's an
opportunity for companies in crypto like Lagrange, and hopefully others, to assist
with ensuring that all of us, as well as all of our descendants, can benefit from the same liberties that we have.
Thank you.
