Epicenter - Learn about Crypto, Blockchain, Ethereum, Bitcoin and Distributed Technologies - Lagrange: ZK-Proving AI Alignment - Ismael Hishon-Rezaizadeh
Episode Date: August 16, 2025In an age when AI models are becoming exponentially more sophisticated and powerful, how does one ensure that proper results are being generated and that the AI model functions in desired parameters? ...This pressing concern of AI alignment could be solved through cryptographic verification, using zero knowledge proofs. ZKPs not only allow for verifying computation at scale, but they also confer data privacy. Lagrange’s DeepProve zkML is the fastest in existence, making it easy to prove that AI inferences are correct, scaling verifiable computation as the demand for AI grows.Topics covered in this episode:Ismael’s background and founding LagrangeAI x crypto convergenceZKML use casesAI inference verifiabilityAI safety regulationsRevenue accruing tokensPitching Lagrange to enterprise clientsAssembling a dedicated teamCryptography researchEpisode links:Ismael Hishon-Rezaizadeh on XLagrange on XSponsors:Gnosis: Gnosis builds decentralized infrastructure for the Ethereum ecosystem, since 2015. This year marks the launch of Gnosis Pay— the world's first Decentralized Payment Network. Get started today at - gnosis.ioChorus One: one of the largest node operators worldwide, trusted by 175,000+ accounts across more than 60 networks, Chorus One combines institutional-grade security with the highest yields at - chorus.oneThis episode is hosted by Sebastien Couture.
Transcript
I think AI x crypto, by and large, is a scam.
The majority of businesses I see building in AI x crypto are doing nothing more than trying to launch and sell a token to unsophisticated retail market participants.
When I started Lagrange, the cost of generating a proof for a ZK EVM was like a dollar, or in the range of tens of cents, per transaction.
Ridiculous. Now it's about a hundredth of a cent. It allows you to ensure
that the correct model is being used for the inference that a system is receiving.
And it also lets you ensure that there are properties of privacy over the use of that AI.
There is a subset of public market participants in crypto who trade charts,
behaving the same way when they're trading Bonk versus when they're trading Pengu,
versus when they're trading WIF, versus when they're trading Doge, versus when they're trading LA.
All they care about is trading on price action
and trying to catch a runner.
And if the only participants in your market
are trading on those characteristics,
whether you are an infra protocol
or you are a meme coin,
you have effectively converged
on the same market dynamics as a meme coin.
If you're looking to stake your crypto with confidence,
look no further than Chorus One.
More than 150,000 delegators,
including institutions like BitGo,
Pantera Capital, and Ledger, trust Chorus One with their assets.
They support over 50 blockchains
and are leaders in governance
on networks like Cosmos, ensuring your stake is responsibly managed.
Thanks to their advanced MEV research, you can also enjoy the highest staking rewards.
You can stake directly from your preferred wallet,
set up a white-label node,
restake your assets on EigenLayer or Symbiotic, or use their SDK for multi-chain staking in your app.
Learn more at chorus.one and start staking today.
Hey guys, I want to tell you about Gnosis,
a collective of builders creating real tools for real people on the open internet.
Gnosis has been around since 2015.
In fact, it started as one of Ethereum's very first projects.
And today, it's grown into a whole ecosystem designed to make open finance actually work for everyday people.
At the center of it all is Gnosis Chain.
It's a low-cost, highly decentralized layer one that's compatible with Ethereum and secured by over 300,000 validators.
So whether you're building a dapp, experimenting with DeFi, or working on autonomous agents,
Gnosis Chain gives you a solid, neutral foundation to build on.
But Gnosis is more than just infrastructure.
It's also tools that people can actually use.
Like Circles, for example,
which lets anyone issue their own digital currency
through networks of trust, not banks.
And then there's Metri.
It's their smart contract wallet
that makes it easy to access Circles,
manage group currencies,
and even spend anywhere Visa is accepted,
thanks to their integration with Gnosis Pay.
All this is governed by GnosisDAO,
where anyone can propose, vote,
and help guide the network.
And if you want to get involved,
running a validator is super easy.
All you need is one GNO and some basic hardware.
To learn more and start building on the open internet, head to gnosis.io.
Gnosis: building the open internet, one block at a time.
Welcome to Epicenter, the show which talks about the technologies, projects, and people driving decentralization and the blockchain revolution.
I'm Sebastien Couture, and today I'm here with Ismael from Lagrange Labs.
How's it going, man?
Doing well.
Thanks so much for having me, Sebastien.
Yeah, so I mean, we've known each other for a while.
This is actually your first time on Epicenter, though.
I think you were probably on the interop at some point, like a while back.
We've done some podcasts before.
I was, yeah, a few times.
First Epicenter episode.
Disclaimer: I'm an angel investor in Lagrange Labs.
I like to get that out of the way early on.
But, you know, I'm going to grill you anyway.
But, yeah, let's dive into it.
I wouldn't expect anything different.
Yeah, let's dive right into it. So Lagrange has been around for a while. You guys started as this highly research-driven ZK project that has now evolved into all sorts of verticals, including AI, DeFi, and scaling. But first, let's talk a little bit about your journey: what sparked the idea for Lagrange, and how does your background
fit into this verifiable computation narrative?
Yeah, that's a fantastic question. So Lagrange
has from day one been hyper-focused as a zero-knowledge-proof company. And throughout our history,
we have targeted a variety of problems that we solve with zero-knowledge proofs. In the very early days of
Lagrange, this was things like interop or DeFi co-processing, and over time we have scaled the
business, scaled the go-to-market motion, and tackled increasingly large problem spaces and
increasingly large TAMs. The current version of Lagrange, the business that we are today,
has a very large part of our go-to-market focus oriented towards AI, both the application of
zero-knowledge proofs to improve the trust and safety of AI in crypto, as well as to improve
the trust and safety of AI in traditional sectors using advanced cryptography and zero knowledge
proofs. But as a team, we've always been laser focused on ZK, and that's been in our DNA.
Our chief scientist, Babis, chairs the cryptography department at Yale. And under him, we have a very
large research team in applied cryptography with a bunch of world-class researchers, people like
Dimitrios Papadopoulos, who's a professor at HKUST, Shravan Srinivasan, Nicolas Gailly, and a bunch of
great, great people on the team.
And so, unfortunately, I'm not a ZK researcher myself.
I was a venture investor before, and then I worked in financial services before then,
leading digital asset strategy for a large insurance company.
But, you know, the bread and butter, the DNA of the business has always been ZK,
and that's embodied in the research team that we've built at Lagrange.
And how much of the team now, if you were to look at the different people on the team,
how much comes from the ZK research background versus the AI component?
Yeah. So I would say that everyone at the
company is a ZK person. We're a cryptography company. So we don't build new foundation models. We don't
build LLMs or agents or really anything besides zero knowledge proofs. What we do is we apply
advanced cryptography and zero knowledge proofs to AI.
And so we do have people on the team who have familiarity with AI.
We have people on the team who've worked at companies doing AI engineering.
But as a business, what differentiates us isn't the AI talent and the AI skill.
It's the cryptography.
And it is how do we take cryptography and apply that cryptography to companies that have
AI expertise?
The same way that you don't have to be an insurance expert or a consumer
expert to build AI that's used for consumer or financial-services purposes, we don't think you have
to be, or shouldn't have to be, an AI expert to build cryptography that can be
very impactful to AI. It allows AI companies not to have to deal with cryptography when they
work with us. All they have to do is use our technology. We don't necessarily have to deal with
AI either. They just have to be able to very simply use a technology that adds a real zero-to-one
improvement in the security and trust properties of what they've built.
So there's always been an AI-crypto convergence narrative, going as far back as
2015, 2016. Guys like Trent McConaghy, who's been on the podcast
multiple times, have been pushing different narratives around AI and crypto,
whether it's for private AI, user-owned data, or
provable AI, and you guys are now at the forefront of that. How much of this is hype,
right? And what's the genuine signal versus noise sort of thing that people should look at
when analyzing or sort of observing projects building in the AI space within crypto?
Yeah. So, you know, you started this by saying you were going to ask some tough questions, so I'm going to
give some tough answers. I think
AI x crypto, by and large, is a scam. The majority of businesses I see building in AI x
crypto are doing nothing more than trying to launch and sell a token to unsophisticated
retail market participants. The reality is that there are some things that have been financed
and built in crypto that are actually very, very relevant for AI. And so one of those we believe
is zero knowledge proofs, right? It's a technology that comes from academia, but has been
productionized in crypto because of the demand for scalable,
provable block space, right? ZK rollups. And because of that,
private capital has flowed into ZK in crypto, and about a
billion dollars of venture money has been spent on R&D for
zero-knowledge proofs in crypto. Now, great, we can scale
blockchains more. But what are the other applications of this
technology that we as an industry have plugged a billion dollars into?
And AI, adding trust and safety to AI, we would argue is one of the largest markets.
So it's not the AI x crypto where we're doing decentralized agents to, you know,
rebalance your yield aggregator on chain.
It is how do we take a fundamental piece of infrastructure that's useful in crypto and that
is a technical breakthrough financed by crypto and apply that to other sectors and other areas.
That's how we think of where our business is positioned.
Crypto is a capital formation mechanism.
Cryptography is just mathematics.
It isn't a crypto thing, right?
Cryptography has existed well before crypto, and it will exist well after crypto.
It secured the internet, and now it secures AI.
Now, where I don't think crypto x AI is a scam is in some very specific applications.
For example, sourcing GPUs from very large subsets of users
who may have latent compute sitting around, I think, is a very interesting thing.
The Aethirs of the world, the Prime Intellects of the world.
I also think there's a fantastic market in AI x crypto for certain types of agentic things, right?
Like natural-language-based wallets,
which I think are very interesting.
They improve the user experience of using crypto.
I think those things are very cool.
But by and large, of the hundred companies you see announcing
stuff every week building in AI x crypto, maybe two of them are not scams.
Yeah, I think that resonates with me. And I think one of the things that stands out from
what you just said is that AI and crypto is not just one vertical. There are different types of
problems being solved, whether that is scaling access to
GPUs, applying LLMs to user interactions, or, in your case, providing provability
and verifiability for AI inference.
Those are all like very different problems that use crypto and cryptography in very different
ways.
Let's maybe dive into the ZKML use case a little bit.
For people who are not familiar with this particular technology and how you guys
are solving some really tough problems there, what is ZKML, and how does
Lagrange fit into this use case?
Yeah.
So what ZKML lets you do is effectively two things.
It allows you to ensure that the correct model is being used for the inference that a system is
receiving.
And it also lets you ensure that there are properties of privacy over the use of that AI.
Now, where is this valuable?
Well, most places that use AI have a remote system where that model is running
that is communicating with something else that is dependent on it.
You can think of this in aerospace defense as command and control systems.
You can think of this in healthcare as a user who's interacting with a diagnostic LLM
or a doctor that's interacting with a diagnostic LLM run by a third-party company.
Or you can think about this in crypto as a user who wants to generate a bunch of transactions
from a natural-language prompt: buy an asset on Ethereum,
bridge it onto BNB, and then swap it into something else.
In all of these situations, you have someone with a material amount of financial value
tied to the correctness of an AI output.
What zero-knowledge proofs let you do is generate a proof that effectively says: with this model
and this input, this is the output, and you can be 100% sure of that.
That's the first property you get from ZKML.
It's a very, very powerful one in the field of applying security to AI and safety to AI.
Now, the second property is privacy, right?
There's a lot of conversations about AI ethics, and privacy is generally central to all of those
conversations.
But what privacy is, is twofold.
First: how do I ensure that the model being used, or the person
running the model, doesn't have access to the underlying user data?
So you say, hey, I have this weird chest pain and I want to interact with a diagnostic model,
but I don't want the megacorp running this diagnostic model to be like, hey, he has chest pain,
let me serve Sebastien ads for chest-pain medication.
That's kind of a dystopian future, where all of your health data becomes
dispersed across the internet to whoever's running these models.
And so that's where privacy is like very, very important in AI.
And the second place where privacy is very important is in keeping the models private, right?
So closed-sourcing of models, and closed-sourcing of weights generally, is significant in fields like
healthcare and financial services, where the weights of fine-tuned models
would be considered PII or client information.
And so being able to keep the model private actually allows you to use it in some interesting ways.
And so the two things you get from ZKML are being able to keep a lot of information private that otherwise would have to be public, and being able to add security on top of the use of AI.
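To make those two properties concrete, here is a minimal Python sketch of the verifiable-inference interface. Everything here is illustrative: the function names are made up, the "model" is a single linear layer, and hash digests stand in for the real cryptographic commitments and SNARK proofs a zkML system would produce (a hash-based "proof" like this is not actually sound or private; it only shows the shape of the API).

```python
import hashlib
import json

def commit(model_weights):
    # Commitment to the model. In real zkML this would be a binding
    # cryptographic commitment to the weights; a hash stands in here.
    blob = json.dumps(model_weights, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

def infer(model_weights, x):
    # Toy "model": a single linear layer, w*x + b.
    return model_weights["w"] * x + model_weights["b"]

def prove_inference(model_weights, x):
    # A real prover would emit a SNARK showing that `output` is the
    # result of running the committed model on `x`, without revealing
    # the weights. Here the "proof" is just a transcript hash.
    output = infer(model_weights, x)
    transcript = f"{commit(model_weights)}|{x}|{output}"
    return output, hashlib.sha256(transcript.encode()).hexdigest()

def verify_inference(model_commitment, x, output, proof):
    # The verifier sees only the commitment, the input, and the claimed
    # output -- never the weights themselves.
    transcript = f"{model_commitment}|{x}|{output}"
    return proof == hashlib.sha256(transcript.encode()).hexdigest()

weights = {"w": 3, "b": 1}
c = commit(weights)
y, pi = prove_inference(weights, 5)
assert verify_inference(c, 5, y, pi)          # correct model and output
assert not verify_inference(c, 5, y + 1, pi)  # tampered output fails
```

The point of the interface is that the verifier holds only `c`, so the weights can stay closed source while the output is still checkable.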
Okay, right. So we have inference verifiability, which is a very important use case in military and industrial settings, where you want assurance that this query, this prompt, has been sent to a
particular model and that the inference comes from that particular model.
And then you have privacy.
I want to maybe zoom in on the inference
verifiability part, because I think for most people who think about ZK and
zero-knowledge circuits, what comes to mind is a computation environment that's
very limited, right? The types of computations one can do inside a ZK proof are quite
simple and rudimentary, while AI inference is this very complex
compute problem. So can you clear up a little bit of that misconception? How do we actually
get to do inference in a ZK proof? How does that actually work?
Yeah, this is really a very good question. So I would say that part of the hard thing about staying on top
of ZK for the broader market is how fast ZK changes. Right. So since I started
Lagrange about four years ago, we've seen an order of magnitude improvement
per year in the performance of ZK. And that's consistently been every single year, all the way
from improvements in the core cryptography, improvements in tricks and circuit writing that make
things faster, improvements in hardware acceleration. All of this has just drastically improved
the performance of the space. When I started Lagrange, the cost of generating a proof for
a ZK EVM was like a dollar, or in the range of tens of cents, per transaction. Ridiculous. Now
it's about a hundredth of a cent, as I saw from ZKsync's newest benchmark for Boojum 2. Fantastic
improvements in speed. Now, AI would have been a pipe dream to prove
in ZK four years ago.
Today, it is actually quite performant.
So our library DeepProve can actually generate proofs of GPT-2,
Llama, and Gemma, which are three open-source LLMs.
Obviously I'm not going to claim the performance is anywhere
near real time, but we can do those with relatively reasonable performance.
And for a lot of much smaller model architectures, we can generate proofs on the order of seconds.
And that's without the specialized hardware that we expect to be available in the next year, which should add a one to two order of magnitude improvement in the proving times of these systems as well.
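As a back-of-the-envelope check on the trajectory described here, using the rough figures quoted in the conversation (about $1 per transaction four years ago, about a hundredth of a cent today), the claimed order-of-magnitude-per-year improvement works out:

```python
import math

# Figures as quoted in the conversation, not official benchmarks.
cost_then = 1.00    # ~$1 per transaction when Lagrange started (~4 years ago)
cost_now = 0.0001   # ~a hundredth of a cent per transaction today
years = 4

total_oom = math.log10(cost_then / cost_now)
print(total_oom)          # 4.0 orders of magnitude overall
print(total_oom / years)  # 1.0 order of magnitude per year
```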
Right.
And so, but you're saying GPT-2 and Llama, those are fairly old models, right?
I mean, yeah.
So is it expected that we'll be able
to do verifiability on large, very performant models like Gemini 2.5 Pro
or Grok 4 Heavy?
Is it reasonable to think that ZK can also verify inference on these very
complex models?
Right.
So GPT-2, Llama, Gemma, those are, let's say, ten-figure-parameter models, right?
There are versions of Gemma that are
1 billion, 5 billion parameters.
There are versions of Llama that are 6, 7 billion parameters.
GPT-2, I think, is sub-1-billion.
It's like 600, 700 million.
But you're talking about how many orders of magnitude of performance improvement
you need to run those models efficiently.
Getting from a 5-billion-parameter model to a 50-billion-parameter model is one order
of magnitude of improvement in memory optimizations and proving time.
Getting from a 10-million-parameter model to a 5-billion-parameter model
is what, three orders of magnitude?
So we're closer to being able to run a frontier model with 50, 60, 70, 100 billion parameters
than we were to being able to run Gemma or Llama or GPT-2 a year and a half ago.
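The orders-of-magnitude argument can be written out explicitly. The parameter counts below are the approximate figures from the conversation, not exact model sizes:

```python
import math

def oom_gap(from_params, to_params):
    """Orders of magnitude between two model sizes."""
    return math.log10(to_params / from_params)

# Ground already covered: ~10M-parameter models -> ~5B-parameter models.
print(round(oom_gap(10e6, 5e9), 2))  # 2.7
# Ground remaining to a ~50B-parameter frontier model.
print(round(oom_gap(5e9, 50e9), 2))  # 1.0
```

So the gap already closed (~2.7 orders of magnitude) is larger than the gap remaining to frontier scale (~1 order of magnitude), which is the substance of the claim.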
So in the current version of DeepProve, we can run a variety of LLMs that are transparently
smaller in size than what would be used for a lot of chat apps today.
But we're probably about a year, 18 months from being able to run the frontier models
that people are familiar with.
Generally, from what I'm seeing, we're actually not seeing an increase in parameter count
proportionate to the improvements we're seeing in ZK performance every year.
Right? We did not go from 50- or 60-billion-parameter models last year to 500-billion- or trillion-parameter models this year. And we did go from being able to prove eight-figure-parameter models to being able to prove low ten-figure-parameter models in a year. So the rate of ZK improvement is a lot faster than the rate at which models are growing.
Interesting. So you think that ZK will be able to
continuously catch up with the speed at which AI models are also improving.
At least inference, right?
I think there's a question of whether or not, if you're training a model,
like, I think it was the newest Grok,
they're training on that giant data center that they did
debt financing for, in the range of several billion dollars.
I don't think you'll be able to efficiently generate a proof of the training of Grok 4 or Grok 5 in those types of environments for a very long time.
But I do believe you will be able to generate proofs of inference in the next 12, 18 months for any model that you want, with reasonable performance.
And I actually don't think it's a bold prediction.
I think it's a rather conservative prediction.
So what is the incentive for closed-source model
providers to implement ZK proving of their models?
Do you think this is something that
can at some point be included in all models,
or will it remain some sort of premium feature
that only enterprise, government, and military clients would have access to?
Yeah.
I mean, I think it depends on who wants to pay for it, right?
The number one deciding factor in what people integrate, I generally find, is the economics of it, right?
And if you are a user who is very privacy-concerned and verifiability-concerned, and there's obviously a subset of users who are, there are always going to be open-source models that you can run your ZK on top of yourself, right?
You can use DeepSeek with ZK proofs at some point in the reasonable future and have privacy guarantees afterwards. And that's great. That's exciting. And then there are applications
where you're like, okay, I want Grok to be used for defense purposes. And how do I know that a remote
system that's communicating with xAI's servers hasn't been tampered with? How do I know that nobody in
the back end at xAI has pushed a change to the code that's going to take down an entire
fleet of U.S. defense drones, right? That's a situation where you really, really do need verifiability,
and there's no shortage of money
that will be willing to be paid for that.
The great thing about ZK,
and we touched on this with privacy earlier,
is that it's actually very well suited
for closed-source models,
because you can keep the model private.
So I can prove to you
that a commitment to the correct model
was used to generate this inference output for you
without actually having to ever show you the model.
So you can have a commitment to Grok
that just says, hey, this is Grok,
and here's a proof it came from Grok.
And you never
have to actually see the weights, the biases, the model architecture, anything of the closed-source
model. And so it's very, very relevant to use ZK in enterprise applications, because you actually can
have guarantees of correctness over AI output and privacy over the underlying models that are kept
closed source. Yeah, this just made me think: I recently finished reading Nexus, the
Yuval Noah Harari book.
And, you know, part of his thesis is that AI poses a risk to democracy in its current form,
and that the way AI is being used on social media could create
misinformation and be used adversarially by our enemies to create social unrest.
And, you know, I think ZK, in this context, could be used to curb some of that,
but it would have to be a regulatory requirement for AI companies to also include
ZK proofs for all of their inference, so that when you're looking at a social media post,
you know whether it's an AI-generated thing or not.
Have you guys given any thought to that?
And what's your view on ZK being part of the AI
stack from a regulatory perspective?
Yeah.
I mean, I think there will be increasing regulation surrounding AI trust and safety,
as well as the trust and safety of data used to train AI.
Some of those problems can be addressed with ZK, and I'd like to see them addressed
with ZK, and some of those problems can't be, right?
Things like preventing AI providers from scraping private user data and
using user chats to train the next generation of models,
models with generative capacities predicated on non-public or sensitive
information they shouldn't have had access to in training. Those things are always going to be
concerns, and they require regulation. ZK proofs, maybe in some architecture, could solve it,
but it would be very, very complex. And the simplest answer is just having somebody with a
clipboard run after the ten companies that are actually doing this, point at them, and say,
stop doing that. That's probably the cheapest way to solve it. Maybe not the
most durable long term, but probably in the short term, the cheapest. Where I think ZK is uniquely
positioned is in applications that actually have an imperative that is not established by a government,
but is established by an economic motivation to use ZK. And this is where I think the most value in
technology comes from, right? You know, why do we have the centralized systems and blockchains? It wasn't
because, you know, some government bureaucrats said you have it. You have to have it. It's because
there was an economic motivation to build the centralized blockchains to protect, you know,
non-custodial user assets. And that was the entire basis of our industry. What was the basis
of financing for ZK? It wasn't a government bureaucrat saying, hey,
you should build private and verifiable scalability. It's because there were massive hacks in crypto,
and people went, hey, maybe we should scale blockchains in a more secure way, so we stop losing
our money. Right. And so where there is a market for ZK in AI is applications that cannot
actually even use AI in the current form because there's lack of safety and there's lack
of privacy over it, right? Healthcare is an example of that. Aerospace defense is an example of that.
Institutional finance is an example of that, right? There are a bunch of companies that
can't use Grok because they can't just pass insurance-participant data over to xAI.
And there's a team of lawyers there who say, no, you can't do that.
We're going to go to jail.
We're going to get sued into oblivion.
So these are the places where there actually is a very large market for ZK in AI, as well as in crypto, right?
How do you ensure that the agentic LLM you're using to construct your transactions won't rug you?
These are the places where it's very, very valuable, in my view.
And I hope there's regulation that also pushes things in our favor.
But I don't think those are the driving motivations that are going to transform this industry.
Yeah. Can you talk a little bit about your collaboration with Nvidia?
Yeah. So, you know, we recently announced some really big collaborations, one of them with
NVIDIA, one of them with Intel, and one of them with a very large hyperscaler cloud provider.
And in all of these, the central point is very simple: there is an imperative on
the use of AI and confidential AI within a bunch of sectors these companies sell to.
And there has not been a company before Lagrange that has had a commercially viable product with the capacity to actually start addressing these problems.
I wouldn't claim that the version of DeepProve we have now is the version of DeepProve that we'll have in 12, 18, 24 months.
But directionally, it is moving faster than anything has been able to move previously to address these problems.
And that's opened up a lot of opportunities for us commercially to work with some very, very large AI
companies, to start exploring what it looks like to use ZK to improve the trust and safety of
deployments that they have. All of these AI companies have healthcare-, defense-, and institutional-finance-relevant contracts. They have international contracts with
very complex legal requirements surrounding how data can be transited between countries and how
AI can be used between countries. And what we have is a technology that's uniquely positioned
to address many of those problems.
Lagrange recently launched its token, the LA token. So yeah, it's got the Binance listing, Coinbase. What's the role of
the LA token, and what's planned here for staking and governance, et cetera? Yeah.
So, you know, we were very, very excited to finally be able to unveil and launch the LA token.
The Lagrange Foundation did a fantastic job orchestrating and coordinating that whole process.
And we were very lucky as well to be listed on a variety of top liquidity venues: Binance, Coinbase, Upbit, and many others.
And we were very excited to see overwhelming community support behind the launch of the token.
The utility of the token, as designed by the Lagrange Foundation, is as fuel for the cryptographic
engine that Lagrange builds. Effectively, there is a network of provers that generate proofs
for DeepProve, our zk machine learning library, as well as a bunch of other commercial applications we target,
ranging from rollups to co-processing to verifiable database infrastructure.
And at the end of the day, the token is used and staked into individual provers in the network,
who have an economic motivation to generate proofs correctly.
If, for example, they don't generate a proof on time or they fail to participate in an auction
the way they were supposed to, they can face a penalty in the form of slashing or non-payment.
In the current version, it's possible to stake the LA token into provers,
and there are programs designed by the Lagrange Foundation to incentivize the staking of LA tokens
based on fees that the network collects from rendering inference,
and proofs of inference, to many of our counterparties.
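The incentive structure described above can be sketched as a toy model. The function names, fee amounts, and the 5% slash rate are all hypothetical, chosen purely to illustrate the reward-versus-slashing logic, not the network's actual parameters:

```python
from dataclasses import dataclass

@dataclass
class Prover:
    stake: float         # LA tokens staked into this prover
    rewards: float = 0.0 # fees accrued from correctly delivered proofs

def settle_job(prover, fee, delivered_on_time, slash_rate=0.05):
    # Hedged sketch of the incentive logic: deliver the proof on time
    # and the fee accrues; miss the deadline and the stake is slashed.
    if delivered_on_time:
        prover.rewards += fee
    else:
        prover.stake *= (1 - slash_rate)
    return prover

p = Prover(stake=1000.0)
settle_job(p, fee=10.0, delivered_on_time=True)   # earns the fee
settle_job(p, fee=10.0, delivered_on_time=False)  # stake slashed 5%
print(p.stake, p.rewards)  # 950.0 10.0
```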
And you tweeted something a little while back
which I thought was kind of interesting:
if your infra protocol has no revenue, it's just a meme coin.
Can you unpack this thought? I mean, I think it's obvious,
right, that crypto needs to move towards a more revenue-generating model rather than a simply
up-only model. How will revenues flow back to token holders in the case of the LA token?
Yeah, this is a great question, and I'm glad you asked it, because that was one of my favorite tweets.
but there's a subset of public market participants in crypto who trade charts.
And all they do is they trade listings and charts.
And those listing and chart traders are behaving the same way when they're trading bank
versus when they're trading Pengu versus when they're trading with versus when they're trading
Doge versus when they're trading LA.
all they care about is trading on price action and trying to catch a runner or momentum in the chart.
And if the only participants in your market are trading on those characteristics,
whether you are an infra protocol or a meme coin,
you have effectively converged to the same market dynamics as a meme coin.
People buy, it goes up; people sell, it goes down.
And that is really not an inspiring or long-term durable way to build an infrastructure protocol.
The objective of an infrastructure
protocol should be to create
net new value such that the
economics broadly of that infrastructure
protocol are accretive to
the network dynamics that
include the underlying asset.
And that is, for example, why HYPE has done so well in the market. That is, for example, why
many of the other L1s that have high demand, Solana, Ethereum, have done well by and large in the market.
It is the hope that many investors have brought to something like PUMP in the last, you know, 30 days.
But anyway, to get back to the point, if you do not have revenue and you do not have traction,
your token is nothing more than a meme coin. And so Lagrange, as, you know, we've talked
a lot about today, has material traction, both outside of crypto and within crypto in the adoption
of our technology in both enterprise AI, financial services, aerospace and defense,
and crypto asset sectors.
And because of that, we've tried to design our network in a way where the fees that accrue
from the generation of proofs and from agreements that we have for the generation of proofs,
agreements the foundation has for the generation of proofs, accrue back to people who have
staked and who are generating proofs within our network.
And so this is why we, you know, we're very excited with many of the traction numbers
that we have right now that are publicly verifiable on chain,
wherein you can see the movement of fees for the generation of proofs to provers in the network
and very strong demand for the generation of proofs that's visible in the network.
And so long term, we think that the majority of fees that accrue for staking the LA token
will come from fees that are paid directly for the generation of proofs, such that there is a positive
economic market wherein there is an incentive to hold and stake the LA token into the network
that isn't simply just trading chart action and isn't simply just trading on a meme coin.
So when operating in the enterprise space and selling Lagrange products to enterprise customers,
how is the crypto component perceived, and how do you get over some of
the objections that people might have simply by virtue of, like, Lagrange having a crypto component?
You know, some companies or, like, clients might see that as a risk. And, you know, I know that,
like, working with past clients, it could be complicated to disassociate, you know,
the technology from a lot of the negative press that crypto gets. Yeah. Yeah, that's a great question.
So DeepProve is a library. Our ZK machine learning technology
is a library. You could run it on top of our Prover Network with the same security guarantees
as you running it on top of an edge device used in a battlefield. There is no difference in
where you choose to operate that library. The library will operate with the same safety guarantees
over proof generation anywhere. And so some people really like decentralized proof generation,
and they go and they seek that out. But when we work with enterprise clients, we don't sell them
on decentralized proof generation.
We sell them on core cryptography.
The entirety of the internet
has been secured with cryptography, right?
So TLS on top of HTTP is what enables
online banking. It's what enables payments infrastructure.
It's what enables everything you do on your phone.
It's what enables social media.
It's what enables online dating.
The modern society that we have today
is predicated on the use of cryptography
to add safety and privacy
on top of web connections.
What Lagrange does with ZKML
is adding those same two properties,
safety and privacy on top of AI.
And that is what we sell
when we interact with enterprise clients,
web two customers, etc.
It is two properties that
unambiguously need to be included
on top of AI
for us to have a robust and functioning
and safe economy
that predicates itself on top of AI,
the same way those two properties had to be added on top of, you know,
ICT, information and communications technology,
to be able to add those properties on top of the web.
And so that is what DeepProve and our zkML work is sold as.
Now, there is a subset of customers, right, wallet providers, for example,
or people who really like decentralized proof generation
because of the properties you get over liveness guarantees:
it removes dependencies on cloud providers who might shut you off, right?
And in that case, you know, we have a Prover Network that's fantastic and can be used for that.
But when we sell to Web 2, we don't sell the crypto token.
We don't sell, you know, people having to use or interact in any way with the crypto token.
We sell a core technology and impactful technology.
And as a business, we have also our crypto network, which we think probably is the best way
to generate proofs long term.
We think the whole world is going to use
decentralized proof generation long term.
But we want to see people using proof generation first.
And then they'll eventually, in our view,
start moving to decentralized deployments.
So far we've talked a lot about AI,
but you guys are also doing a lot of interesting work
on the scaling side,
particularly, you guys recently announced
that you're working with Matter Labs
to handle a lot of the proofs
on ZK Sync.
Can you talk a little bit more about
the co-processor and some of the other products
that are in the Lagrange product line?
Yeah. So it's a really good question.
So as I said a little bit earlier today,
we're a ZK company.
And where we see the largest TAM for ZK today
is on adding trust and safety on top of the use of AI.
But that's not the only thing that effectively
we sell that uses ZK.
So we sell verifiable database
infrastructure, where you effectively are able to have a database that is represented by
a commitment to that data, so like a hash of all of that data, that we can prove correct
queries on top of, which is very useful for a lot of contexts where you want the correct
provenance of data. It's very useful if you want to introspect into a chain and query over the
history of a chain. And we have a very large market for that that we sell to within DeFi and
NFT protocols. Many of those we've announced, like Gearbox, Azuki, et cetera. We also
have work that we've done on using our prover network to generate proofs for roll-ups, right?
And so we have a very large deal that we've signed and announced with Matter Labs:
up to 75% of Matter Labs' proof generation for the next two years will be done on Lagrange.
And for us, that's a very exciting market opportunity.
We think the Matterlabs team and the ZK Sync ecosystem is, you know, one of the largest
and one of the most important ZK roll-up ecosystems in crypto.
And we love being a part of supporting them in their growth ambitions.
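The verifiable database idea described a moment ago (a database represented by a commitment, like a hash of all the data, with queries that can be proven correct against that commitment) can be illustrated with the simplest such commitment, a Merkle tree. This is a hedged sketch of the general pattern only; Lagrange's actual system uses far richer ZK machinery and query types:

```python
import hashlib

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def build_levels(leaves):
    """Build all levels of a Merkle tree over the raw rows."""
    levels = [[h(x) for x in leaves]]
    while len(levels[-1]) > 1:
        lvl = levels[-1]
        if len(lvl) % 2:                 # duplicate last node on odd levels
            lvl = lvl + [lvl[-1]]
        levels.append([h(lvl[i] + lvl[i + 1]) for i in range(0, len(lvl), 2)])
    return levels

def prove(levels, idx):
    """Collect the sibling path for leaf `idx` (the membership proof)."""
    path = []
    for lvl in levels[:-1]:
        if len(lvl) % 2:
            lvl = lvl + [lvl[-1]]
        path.append((lvl[idx ^ 1], idx % 2 == 0))  # (sibling, leaf-is-left?)
        idx //= 2
    return path

def verify(root, leaf, path):
    """Recompute the root from the leaf and sibling path; compare to commitment."""
    acc = h(leaf)
    for sibling, leaf_is_left in path:
        acc = h(acc + sibling) if leaf_is_left else h(sibling + acc)
    return acc == root

rows = [b"alice,100", b"bob,250", b"carol,75", b"dave,400"]
levels = build_levels(rows)
root = levels[-1][0]                     # the short commitment to the whole table
print(verify(root, rows[2], prove(levels, 2)))  # True
```

Here the root plays the role of the database commitment: anyone holding only those 32 bytes can check that a claimed row really is in the committed data, which is the provenance property being described.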
So just switching gears a little bit, I want to ask you some questions about your personal journey as a founder.
And, you know, what's the thing that you're the most proud of at LaGrange, but that most people either don't know or don't care about?
Yeah.
I think that there is a fallacy in founding companies that the journey to being successful is linear,
and that you catch lightning in a bottle, you become successful,
and then all of a sudden you're off to the races and everything goes great.
The truth is that at Lagrange,
we've had very many periods where things were going our way
and very many periods where things weren't going our way.
And the resilience of the company and the team
is the thing that has allowed us to continue to excel as a business.
And that's the thing that I'm the most proud of about our business.
You see a lot of companies in crypto that, you know,
come up with a cool idea, they raise a big round, they launch a token,
things don't go their way, and they go to zero.
And then the team goes on to the next thing.
Or you see companies that, you know, they come up with a great idea.
They raise a first round.
They, you know, everything's very exciting for them.
They never end up catching that momentum again.
They never raise a second round.
Nothing ever happens.
I had to spend significant time raising my first round.
People don't know this.
I actually failed to raise my first seed round twice before the third time that I succeeded on it.
We had many periods in the history of Lagrange where, you know, the market was swinging away from ZK.
People weren't excited about co-processing. People weren't excited about roll-up proving.
And consistently, what the research team at Lagrange has done, the engineering team, the business team, everyone, was stick to the fundamentals that we believe work, which is building technology that our customers love, and then aggressively
commercializing it into large-TAM verticals. And through that strategy, we have been able to weather
very many bad periods and get to very many very positive periods. And that's a resilience that I
think too few companies in crypto prioritize. They prioritize the fast exit, the hot trade, the cool
narrative. And they don't build aggressively on fundamentals that carry them through both
bear and into bull markets. Yeah, I think fundamentals are highly, highly
underrated in crypto. I mean, it seems so obvious, right, by like more and more, I'm finding that
the thing that sets high-performing teams and successful teams apart from the rest is just
fundamentals and first principles thinking. You talked about the team and how it's grown
and everything. What is a piece of advice that you would give to aspiring
crypto founders that want to build a great team for the long term?
Yeah.
I mean, the only way I think to be successful is to hire the best people, especially in a very
research-oriented sector.
You need to go out preemptively and find the best people to work with, hire them, and then
be able to retain them.
And so a lot of the early hiring at Lagrange, and not even just
early hiring, a lot of the hiring until today, is done by me for a lot of the research sectors,
right? I've run all of the interview processes. For anyone who interviewed with Lagrange
in the first three years of our history, the first person they met was me. I took a very high
amount of ownership in trying to run the interview process the way that I thought had to be run
to attract the best talent. And because of that, we were able to get a lot of very, very, very good
talent. Now as we've grown, we've changed processes: some roles I interview for, some roles
I delegate. But for anyone who's starting out, I would recommend that they take as much
ownership as possible on trying to run their interview and hiring process and then do as much work
as possible in one-on-ones. I have one-on-ones with everyone on the team at Lagrange still at a very
regular cadence. We like to keep our team small. We like to make sure that we have offer packages
that are competitive with the best companies in the space.
We've done that since day one.
And we make sure that people who join Lagrange
have a very, very, very high retention rate
when they're at the company as well.
What stands out in your interview process, do you think,
from other teams?
Like, what's the one thing in your interview process
that stands out as a great way to find the best talent?
So I'll give away these secrets because, obviously,
I think founders should know this.
But early on, I wouldn't have shared these secrets.
But the one that really was helpful was I was the first person who everyone would talk to.
So when someone is interviewing with a company and the founder comes on for the first interview and says,
this is a one-on-one interview with me and this role is so important to us and the role that you're interviewing for is so important to us,
you will directly interact with me throughout this entire process and I'll guide you through it.
Generally, people who are top of the line and are trying to take a bet on an earlier-stage company
enjoy that level and appreciate that level of attention.
Secondly, when we were competing against larger companies for very, very good talent,
I would fly out to the city of that person who we made the offer to to meet them in person
as part of making that offer.
And we would spend time, we would take them to dinner, we would get to know them,
we would get to know them personally,
we'd make it very clear that if they were to join Lagrange,
they were joining a company that prioritized them and prioritized winning.
And very few founders, even today, I see, are willing to fly out and meet
someone in person who they make an offer to.
This is one of the best ways I used to try to close deals when I was a venture investor, right?
If it's a hot company and a hot founder we're trying to invest in, then I would fly out to meet them in person.
And I see no difference: if, you know, one of the main differentiators you have as a founder
is the talent that you're able to hire, why shouldn't you be doing the same thing?
And I've told this to dozens of founders in secret, and none of them have done it.
And so maybe I'll say it publicly and people will start doing it.
But after all this time, I rarely see any founders doing it.
Still.
Yeah.
It seems like such a simple thing, right?
It's all about relationship building.
And if you're competing against whether it's other VCs for a deal or other companies for talent,
having, building that sort of personal relationship with that person
early on can sort of make or break the deal, like flying out to meet that
person. It's definitely, you know, yeah, it's an effort thing, too. Yeah, it's an effort thing,
totally. Yeah. And then another question: starting a company, you know, and
scaling that company to, you know, tens of people can be challenging for some people. And there's a lot
of founders, I think, that have a hard time getting past that scale.
So they may be able to run their company when there's a handful of people,
but then it gets harder and they might kind of cap out. And then you have another
CEO sort of come in and take that company to the next level. Like, how do you as a
founder think about operating at all of those different levels? Like if Lagrange, you know,
scales now to over 100 people, you know, sometime in the future. How do you think about your
role as a CEO operating at those different levels of scale? Yeah. So I have two answers for this.
Firstly, I think it ties into what we talked about before. It's an effort thing, right? Nobody wants to
spend in the first year of their company six hours to eight hours a day interviewing candidates,
right? Nobody wants to. It's a lot of work. And, you know, if you also have
to run your business and you want to win the best talent, you're interviewing everyone
personally and you're flying out everywhere. It's a lot of work. And it's like it's kind of a pain
in the butt. But if you want to win, it's a decision you have to make if you're going to do it.
And so I think there's a subset of founders who accept what being a founder is, which is doing what's
required at any stage in the business for that business to succeed, right, even if you don't like it.
Right. Your job as a founder isn't to be an engineer. It isn't to be a salesperson. It isn't to be a
Twitter personality. It isn't to be a head of HR. It's to win. And at every point in the business there is
something you have to do to win. The most important thing is going to be different at every
stage. And early on, it requires a lot of effort along the things we just talked about in my
view. And later on, it requires a lot of effort along different axes. And so, you know,
I think as you scale your business, you have to accept that things change. And, you know,
there have been a bunch of bumpy roads in my journey as a founder getting to the company
scale that we are at now.
And there are going to be a bunch of other bumpy ones
getting to the next scale as well,
I'm sure.
And it just, you know,
it will require a significant amount of effort from me,
from the management team,
from everyone at the company to continually hit the milestones
we need to grow to these scales.
And, you know, I think teams should be cognizant of that.
They should accept the reality ahead of time
so that they're equipped to be able to tackle it
when they face it in the moment.
How do you juggle sort of remote versus in-person?
Are you guys mostly a remote team, or do most people sort of come to an office?
Yeah, we are a fully remote team, which is a decision that we made because of our requirement to optimize for talent quality because we're a research organization.
You know, a lot of our researchers are based all over the world.
They're some of the top people who've authored and published papers in applied cryptography and computer science, specifically in things like ZKML and
verifiable database design.
Our chief scientist, Babis Papamanthou,
chairs the cryptography group at Yale;
Dimitrios Papadopoulos, one of our distinguished researchers,
is a professor at HKUST in Hong Kong.
Obviously, it would be very hard to base the whole company at Yale and in Hong Kong.
You know, Nicolas Gailly and Franklin Delehelle, two of our fantastic people:
Nicolas is a fantastic senior researcher on the team,
and Franklin's our head of engineering.
They're both based in Paris.
So we have clusters of people all over the world,
but it would be very, very hard to force everyone to move to one city.
There's personal things.
It just would be very hard.
So we are a remote team.
I think there's something very special about being an in-person team,
something very special about the in-person time you get as a team,
especially if you're a remote team that gets very little in-person time.
But, you know, it's just something you have to work around.
We've always hired very, very, very high-agency people.
Everyone at the team has a tremendous amount of autonomy.
And that has just, you know, been very positive for some people who really, really enjoy that.
And some people don't, but we've been very lucky that the ones we've hired really do enjoy that level of autonomy.
They like to be able to not have someone, you know, breathing down their neck in an office.
They like to be able to execute at the highest level on their work on their own time and then be able to contribute to a team also doing the same thing.
And for us, we've been able to make it work.
How often do you guys get together as a team?
Do you have sort of quarterly retreats and how do you structure those so that you guys can get the most kind of out of that face time during those moments when you see each other in person?
Yeah, I think we always try to do off-sites.
I think companies should do off-sites.
It's very important.
We also have smaller team off-sites where people, you know, coalesce around a conference. A subset of the team, generally the business-facing people and the more go-to-market-facing people,
are often at more of the commercial crypto conferences.
The research people are generally more at research conferences
and have, you know, kind of smaller meetings there around publications.
The engineering team has, you know, kind of meetups that they sometimes do
to do in-person hacking together.
And yeah, I mean, just very broadly, we also do off-sites as a team as well.
I think meeting people in person and, you know, having a team meet in person at regular cadence,
there's no replacement for that.
Yeah.
So before we wrap up, I wanted to ask you about some of the cryptography research that you guys are working on.
You mentioned that there's a paper coming out this year.
Yeah.
The paper came out last year: Dynamic zk-SNARKs.
It's a fantastic work authored by our research team on a new paradigm for zero-knowledge proofs, fully updatable zero-knowledge proofs.
And this work was accepted into SBC, the Science of Blockchain Conference, one of the top academic conferences
in crypto for
a ZK and consensus
and generally science
of blockchain designs,
hence the name of the conference.
It was hosted at Stanford for a bunch of years.
Last year it was at Columbia.
This year it's at Berkeley.
And so we're actually the only team in crypto
that's had work accepted two of the last three years.
We were waitlisted, unfortunately, on the third year.
But the dynamic SNARK work is
going to be presented next week
at the Science of Blockchain Conference.
Weijie Wang, one of our PhD interns last summer,
is the lead author,
and then Shravan, Dimitrios, and Babis are three of the other authors on the team.
And so, yeah,
anyone who's going to be at Berkeley next week for Science of Blockchain
or whenever this airs,
if you were at Berkeley for Science of Blockchain,
hopefully you saw the talk.
And there's a bunch of other fantastic work
coming from our research team this summer as well,
all the way from things like new SNARK constructions
to things like privacy-preserving inference and privacy-preserving
MPC-based inference, like coSNARK work.
And then some other stuff I can't talk about
that is kind of more foundational to ML,
and applications of ZK within
some more foundational constructs of ML.
And so across the board,
you know, I think one of the things we prioritize as a company
is to actually do fundamental ZK research
alongside our commercialization aims
for Deep Prove and other ZK technologies.
You know, I'd like to think that
what DeepMind or OpenAI were to AI research,
we are to ZK research.
We hire the best people.
We retain the best people.
We have fantastic groups of people who are, you know, active professors or are active PhD
students who are joining the company for different periods of time on sabbatical or on
continuous, you know, part-time basis to be able to construct systems and research into
improvements in systems that are, you know, a standard deviation more advanced than what you
would get from a purely commercially minded team.
Cool. Well, where can people go to learn more about Lagrange?
What's your CTA, your call to action, for the audience?
Yeah. So I would suggest anyone who wants to build with DeepProve: go on GitHub, go to
Lagrange, and look at DeepProve. It's up there. Anyone who wants to follow us and, you know, learn about
some exciting partnership updates and research updates,
follow us on Twitter at @lagrangedev.
Or if you are interested in kind of having a more personal relationship with the team,
I'd recommend you join our Telegram, sorry, our Discord channel,
which is linked on the website and is a great way to interact with our community team
and to interact with the founders and the team as a whole.
And so I also would say anyone who really wants to,
you're welcome to reach out to me on Twitter or on Telegram,
and anyone who's really, really motivated will be able to find me.
Cool. Ismael, thank you so much for coming on.
Likewise. Thank you so much for having me.
