Epicenter - Learn about Crypto, Blockchain, Ethereum, Bitcoin and Distributed Technologies - Sreeram Kannan: EigenLayer – The Ethereum Restaking Protocol
Episode Date: November 17, 2022

EigenLayer is a 'programmable slashing' layer2 protocol built on Ethereum which leverages security through the method of restaking. This is a bootstrapping mechanism which allows existing Eth2 stakers... to access the collateral in the staking system to provide additional services, for additional yield, while taking on additional risk. We were joined by founder Sreeram Kannan, who explained to us the concept of restaking, how this works in the EigenLayer protocol, use cases, and the roadmap ahead.

Topics covered in this episode:
- Sreeram's background and how he got into crypto
- His role at the University of Washington Blockchain Lab
- EigenLayer and the concept of restaking
- The risk with 'programmable slashing'
- The participants in the EigenLayer economy
- Impact on decentralization and home stakers
- EigenDA and other EigenLayer use cases
- Combatting malicious slashing
- The EigenLayer roadmap

Episode links:
- Episode B007 - Devcon Panel – Censorship Resistance and Credible Neutrality
- EigenLayer
- EigenLayer on Twitter
- Sreeram on Twitter

Sponsors:
- Omni: Access all of Web3 in one easy-to-use wallet! Earn and manage assets at once with Omni's built-in staking, yield vaults, bridges, swaps and NFT support. https://omni.app/

This episode is hosted by Friederike Ernst & Felix Lutsch. Show notes and listening options: epicenter.tv/470
Transcript
This is Epicenter, episode 470, with guest Sreeram Kannan from EigenLayer.
Welcome to Epicenter, the show which talks about the technologies, projects, and people driving
decentralization and the blockchain revolution. I'm Friederike Ernst, and I'm joined by Felix Lutsch
as my co-host today, and we are speaking with Sreeram Kannan, the founder of EigenLayer, which
is a restaking protocol. And what that means, we will get to in just a minute. But before we speak
with Sreeram, let me tell you about our sponsor this week.
Omni is your new favorite multi-chain mobile wallet
that puts the power of Web 3 at your fingertips.
In just three tabs, you can stake and manage your assets
on over 22 built-in protocols, including all major EVMs,
layer 2s, and non-EVM chains like Cosmos, Solana, Near, and more.
Omni abstracts away all the complexity while being fully self-custodial,
meaning getting yield on your crypto has never been this easy.
Omni also has multi-chain NFT support,
so you can view all your NFTs in one place,
and you can flex your cleanest NFT
by setting it as your app background.
Don't forget to check out the Explore section in the app
for your daily fix of the hottest dapps,
yields, and news across chains.
And there's a lot of news recently.
Omni recently upgraded its app
to provide more functionality
than tens of different DeFi apps and wallets combined.
To highlight their transformation,
they renamed from Steakwallet to Omni,
the next-generation super wallet.
Join thousands of users on this next generation wallet
by downloading it today on iOS or Android at Omni.app.
So, Sreeram.
It's a pleasure to have you on.
I recently moderated a panel at Devcon
that you were on; it was very contentious.
I guess it was on credible neutrality.
It was so contentious that we released it
as a special episode on Epicenter.
We'll talk about credible neutrality
and kind of centralization and decentralization,
I mean, a little bit later in the show.
But before we kind of kick it off properly, tell us, how did you get into crypto?
Thank you so much, Friederike.
I enjoyed that panel as well.
And, you know, we could use more frank, candid conversations in this space.
And telling a little bit about myself, I got into
blockchains per se in 2018, around January, but my interest in peer-to-peer systems dates far back.
My PhD thesis was basically in peer-to-peer and wireless networks; my master's and PhD
spanned from 2006 to 2011-12, so I'd been thinking about these systems at that time. My interest
in wireless was primarily peer-to-peer wireless and ad hoc mesh networks, mainly
thinking through and imagining a world where we don't need centralized
intermediation for me to talk to somebody else. And we were thinking one of the
big use cases for something like this would be last-mile coverage, right, like in
developing countries where there may not be enough wireless infrastructure. But
actually we were pleasantly surprised with the scale of infrastructure
deployment in developing countries around the world, so the need for something
like that didn't emerge. So after my PhD, I switched out and was working on computational
genomics for several years. And we were mainly working on things like how do we analyze data
coming from DNA, RNA sequencing, how do genes regulate each other, all this kind of stuff,
totally different from what I do right now. But in January 2018, around that time,
my PhD advisor, Pramod, who's now at the Princeton Blockchain Center,
called me and he said, Sreeram, did you hear about this thing called Bitcoin?
I said, I heard about it, but I don't know much more.
And he's like, oh, the things that we used to think about a lot, you know,
how to maximize throughput and minimize latency in peer-to-peer type systems, are what, you know,
Bitcoin is facing.
There's a whole bunch of problems.
Do you want to come and work on it?
And as interesting as it was technically, I had already been burnt once trying to do something in the peer-to-peer space.
And also I had a kind of intrinsic distrust of financial speculation.
And I looked at it at that time, and I wanted to see if there is some more fundamental reason for me to commit to working on it
for, let's say, a 10-year time scale.
And initially, I was not convinced,
and it took me three, four months of, like, wandering around.
And one of my, like, basic paradigms
comes from, like, evolution.
And you can ask, like, what is the kind of evolutionary advantage
of the species Homo sapiens?
And, you know, an immediate guess would be something like,
oh, we are intelligent,
and therefore we've kind of taken over this planet.
But I think a moment's examination would suggest that that's probably not true,
because if you take one intelligent person and take a gorilla and then give them an island,
you know, who has better survival there?
So the thesis, and this is actually most clearly articulated by Yuval Noah Harari in his book Sapiens,
is that the reason humans have taken over this planet is because we cooperate
flexibly in large numbers.
I really like this thesis and it's kind of a foundational mental model for me.
And as I think through the thesis and looked at something like Bitcoin,
it became clear that the ability to cooperate is limited by trust, right?
Like I'm going to cooperate with you if I know I can trust you.
And if I can remove the barriers to trust, then there will be more cooperation flexibility in large numbers.
So I saw this as kind of like upgrading societal infrastructure and even providing basically an evolutionary advantage for our species in our ability to cooperate flexibly in large numbers.
So that became like once I saw that paradigm that essentially crypto can play.
a role in upgrading our cooperation infrastructure, I became fascinated with it. And over the next,
you know, few months, I started diving in deeper. And so I would say that I've gotten deeper and
deeper down that rabbit hole in the last five years. So that's how I got into crypto.
Nice. So yeah, before we talk about the project we're here to talk about, I guess,
one step before that, or something you're still part of, is the University of Washington
Blockchain Lab.
Can you talk a little bit about what the goal of this lab is and what you've been researching
there, or are still researching there, if you are?
Absolutely.
Yeah.
At the time, you know, I was working on, like I was saying, computational genomics, but I was
an assistant professor at the University of Washington, Seattle.
and what ended up happening was as I got more interested in this,
I found that there is a bunch of basic questions which are unanswered, right?
So just looping it back to my thesis,
cooperate flexibly in large numbers.
You know, Bitcoin already showed us that, you know,
you can do trustless cooperation.
I would characterize Ethereum as having showed us that we can cooperate flexibly.
You know, Harari uses the word flexibly to differentiate from other species,
which only cooperate genetically, right?
Like army ants cooperate in large numbers, but only genetically.
And, you know, the way I think about Ethereum is it enabled more flexible cooperation
because you can programmatize, you know, applications that can then build on top of this common trust structure.
And so the thing that I thought was missing at that time is the in-large numbers part.
And in large numbers, basically to me meant we need a much more.
scalable substrate. So most of our research ended up being thinking about how do we do scalable
blockchains and what are the core features of consensus protocols? How do we get scalability? How do we get
the game theory around this right and so on? So this is this became the agenda and this became big enough
you know, to start the UW Blockchain Lab. So earlier it was part of my other research, just one strand.
And the goal was basically to understand and create enough primitives so that we can have scalable blockchains.
So that was kind of the agenda for the lab.
And while doing this, one of the things that I ran into was when you're thinking about how to build new, say, consensus protocols or scalability or any of these things, there are very few avenues for you to deploy them into production.
So what do I mean by that?
You know, imagine you had a great idea for a distributed application, like a smart contract.
Then you could then take it and run it on top of any major smart contract blockchain, say Ethereum or any of these other chains that came after that.
But if you had a great idea for how to improve the consensus protocol or scalability or adding new features like how to build better oracles or how to build better data availability or any of these things,
there is no place for you to go and deploy any of these innovations. In fact, every new innovation
requires you having to create yourself a decentralized trust network. The way I think about
decentralized trust is it's like a unicorn. It's so rare and it barely exists. And so it is
completely untenable to ask like a good distributed systems engineer to also create
decentralized trust along with each of their innovations. It
appears rather insane to me that this is the expectation that we have,
that each person who has a good idea for a consensus protocol,
a good idea for a virtual machine,
must also go and create a social revolution to create decentralized trust.
It's just simply untenable.
So initially, I didn't understand all of this.
My initial thinking was, hey, we have these cool ideas for how to improve consensus protocols.
Maybe somebody will take up and use it.
And what I saw was there is a lot of governance bottleneck and pressure
in big systems, which rightfully should exist, because the next upgrade needs to be
made sure that, you know, it's really, really accurate, correct, and safe, and it's a long
process to get there, and there is no place to do rapid experimentation.
So I saw that, and I was a little bit frustrated with this state of the ecosystem, that basically
the idea that smart contract developers had a variety of options for experimentation, whereas
infrastructure builders, which I thought was the core bottleneck, which was limiting the scope of
blockchains, did not have the same playground for innovation.
And so this became a little bit of an obsession trying to figure out how do we borrow existing
large trust networks to then go and create and let anybody innovate on top of a common substrate.
it. And I would say that was kind of the seeds for what became EigenLayer later, is this
obsession in trying to figure out how do we leverage existing trust networks to do new things
that they were not designed to do. So that's a little bit of prelude on what we were doing
before and how it led to this project.
Fantastic. So EigenLayer lets you piggyback off an existing decentralized trust
network, it does that through a mechanism called restaking. Can you explain to us what that is?
Yeah, absolutely, Friederike. When you ask this question, how can we leverage an existing
trust network to do other things? It begs the following, you know, secondary question, which is,
what is the root of trust of these existing networks? And we can take Ethereum as an example,
or Bitcoin as another example.
So what is the root of trust of Bitcoin?
The root of trust of Bitcoin is the proof of work
which powers, you know, security for this network.
And similarly, what is the root of trust for Ethereum
is the proof of stake that powers the security for this network.
What do we mean by proof of stake powers the security for this network?
People are taking their stake, their ETH,
and then locking it in a contract,
and then saying that I am abiding by the conditions of block production
of this network, of Ethereum.
If I follow it, give me rewards.
If I deviate from it, I have liability,
in that I may lose my ETH.
And this constrains the set of possible behaviors
that participants in the block production system
can exert.
They have to make valid blocks.
If they make invalid blocks or double-sign blocks and so on,
they are liable to lose their ETH.
So this is the root of trust.
So the root of trust of Ethereum is people
putting down ETH and committing to both positive and negative incentives for actually making
these blocks. In comparison, take Bitcoin's proof of work, where people are buying and investing in
mining equipment and using that to mine Bitcoin blocks. There is a positive incentive
for them: the positive incentive is that if you continue mining on the longest chain, you'll get
rewards. And there is a negative incentive, which is that if you try to attack the
system or whatever, there's no programmatic negative incentive, but there's a subjective negative
incentive, which is that the Bitcoin-to-USD price may go down, and that adversely impacts the value
of your investment, which is this mining equipment. So that's the economics underpinning the root of
trust in Bitcoin. And comparing it to the root of trust in Ethereum, what
we found was the Ethereum root of trust, or in general the proof-of-stake root of trust, is much more
programmable. And what I mean by that is you can take the same stake and opt in to additional
conditions, additional positive incentives and negative incentives, because it's stake. And stake is
programmatically controlled by the blockchain. Whereas you cannot do the same thing on Bitcoin.
You cannot take the Bitcoin root of trust and then say that I can also opt into additional negative incentives,
because any additional negative incentives have to translate to Bitcoin-to-USD price movements,
and that's not possible for us to modulate.
So we found that, and it is more fundamental than that because, you know, the core idea is
the Bitcoin blockchain cannot go and burn your mining equipment if you do something wrong,
and that's because there is a digital-to-analog translation barrier.
Whereas, you know, your ETH is stored in a contract and you can basically, you know, burn, lose, or transfer that ETH.
So now, having got into the root of trust of proof of stake, we can ask, how can we leverage the same ETH to then also secure other things?
Imagine a world where all stakers opt in to restaking, which is basically committing their
stake to additional services. They say that, hey, I'm going to also run this other data storage
network, this oracle, or this new chain, any of these other things. And they say that if I do
not behave correctly on any of these services, I'm liable to lose my ETH.
If it turns out that all of them, 100% of ETH stakers, opt in, then you have kind
of gotten the same root of trust that is underpinning the Ethereum blockchain to also
opt in to your service.
So you can think of it as, once you have everybody opted in, everybody restaked, then it's almost
as if the Ethereum protocol upgraded itself, and this other service
is being run by everybody who's also running the Ethereum staking.
And you can think of two different ways in which trust transfers.
One is trust arising due to decentralization.
ETH stake is widely distributed across stakers.
So that is one dimension.
To just access that dimension, it is sufficient
if the same set of ETH stakers also go and run this other service.
But to transfer the economic incentives,
we need the stake to be committed programmatically to additional slashing conditions relevant
to each of these services. So EigenLayer basically is a platform that lets stakers
opt in both their decentralized trust and their economic security to validating
all of these other services. To give a bit of context on the name EigenLayer: eigen is German
for 'your own'. We envision a world where anybody can
come and build any new service without having to go find or build their own decentralized trust
network. They can leverage this existing massive trust network of Ethereum and then build
this on top. We find this is the right time to build something like
EigenLayer because we just went through the Merge and we are in a fully proof-of-stake world
right now in the Ethereum landscape. Also, the layer-two landscape
has significantly driven technology forward, for example by having sophisticated fraud
proofs and validity proofs. And these may be needed for services to exert slashing. So, for example,
if I'm running a chain, and in that chain, if you double-signed a block or if you
signed an invalid block, it should be transparent on the Ethereum blockchain whether that was
correct or not. And to do that, you need to have either fraud proofs or validity proofs,
and the emergence of layer twos actually helps us in basically writing these slashing conditions
sharply. So that's a kind of quick overview of EigenLayer. There's tons to unpack here. Maybe kind of
let's roll up historically a little bit before we kind of dive into the weeds here. There used to be
the concept of merge mining. I mean, the concept is still around. It
just doesn't really get talked about anymore.
So basically the idea behind merge mining was that if you, in proof of work,
obtain the right to build a block, you would be permitted to build the block on a different
chain as well.
So basically, so that several chains can use the same proof of work.
Is this the proof of stake equivalent to merge mining?
Yeah, absolutely.
I think this is, so another way of thinking about restaking is just merge staking, right?
And I think this is probably one of the reasons why nobody else came up with it and we came up with it: because, you know, all the OGs who have been around in crypto, like you all here, have looked at this paradigm of merge mining.
And merge mining is the idea, like Friederike just explained, that basically if you mine a block in Bitcoin,
you also kind of mine a block in this altcoin or this new ecosystem.
And merge mining had very limited crypto-economic transfer.
What I mean by that is, if you merge-mine Bitcoin and some other chain, some other altcoin, in parallel,
well, firstly, you need a lot of Bitcoin miners to opt in to this other system in order
to have any security transfer, because even if you had a majority honest in Bitcoin, if only
5% is merge-mining your other coin, then basically you don't have any measurable security,
because we know 51% of Bitcoin miners are honest, but that doesn't mean 51% of that 5% of Bitcoin miners
are honest.
So you need all the Bitcoin miners to opt in.
So that's number one.
But even if every Bitcoin miner opts in to this other system, there is still a massive limitation.
And the limitation is that even if 100% of Bitcoin miners are mining this alt-coin,
if they all coordinated to attack this alt-coin, what happens is the alt-coin system doesn't work,
but that does not translate into any kind of meaningful loss for them in Bitcoin,
because, you know, they can continue to use their Bitcoin mining equipment to mine bitcoins.
So at the end of the day, the transfer of economic security from Bitcoin to any of these
altcoins which were merge-mined is very weak. And because it's very weak, these systems
did not evolve to the extent that people anticipated early on, and
once these flaws were identified, people kind of gave up on this paradigm.
And I think this is the reason why nobody really revisited this paradigm with the new lens of staking.
What we did, you know, as kind of not having lived through all that experience, is we said, oh, yeah, okay, staking means there is slashing.
And what this means is, if you had, you know, $20 billion staked in Ethereum, even if only 5% of it, like $1 billion, gets restaked in your alternative network, if you misbehave, you can lose all your ETH, right,
the $1 billion of ETH.
That is a concrete negative penalty.
So the transfer of economic security
is nearly perfect in merge staking,
whereas the transfer of economic security
in merge mining is very limited or non-existent.
So once we kind of grokked this,
we saw it really needs two things.
It needs proof of stake.
It also needs powerful general-purpose programmability,
because otherwise you cannot
have strong programmatic conditions which
slash the money when you're misbehaving in all these protocols.
And both of them were kind of nicely getting well developed
in Ethereum.
So that's the context for why I think many others who
may have come up with it missed it,
because of the trajectory of merge mining, which
basically didn't work.
And I think merge staking works perfectly.
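To make the contrast concrete, here is a minimal Python sketch of the economic-security-transfer argument; the dollar figures are the illustrative ones from the conversation, and the functions themselves are hypothetical, not part of any protocol.

```python
# Toy comparison of economic-security transfer (figures are the illustrative
# ones from the conversation; the functions themselves are hypothetical).

def merge_mining_attack_loss() -> float:
    """Loss a protocol can programmatically impose on colluding merge-miners:
    none. The rigs keep mining Bitcoin; any loss is only an indirect,
    subjective hit via the BTC price."""
    return 0.0

def merge_staking_attack_loss(restaked_stake: float, slash_fraction: float = 1.0) -> float:
    """Loss a protocol can programmatically impose on colluding restakers:
    the slashed portion of their restaked stake."""
    return restaked_stake * slash_fraction

# $1B of a $20B staked pool restaked on a new network:
restaked = 1_000_000_000
print(merge_mining_attack_loss())            # 0.0 -> weak security transfer
print(merge_staking_attack_loss(restaked))   # 1000000000.0 -> concrete penalty
```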
Yeah, I think I've seen you describe
EigenLayer as this programmable slashing protocol, right?
And I think one thing's interesting, I guess to some degree, if you think about sharding,
too, right?
Like, I guess if you double sign on another shard, you kind of also get slashed on the main chain,
which essentially is similar, just that in this case, maybe in EigenLayer,
you somehow basically have, like, shards that do, like, different tasks.
It's permissionless sharding, basically,
if you want to think about it that way.
Yeah.
So it's a very interesting
concept for sure.
I have this one question that I definitely want to ask while we're already on the
kind of meat of it with the slashing, which is: essentially on Ethereum right now,
if you are a big enough staker, you have this concept of correlated slashing, right?
That if like 33% of the network double-sign at a certain point, you can have 100% slashing.
Now, I guess I'm wondering, since, of course, if you add more
risks or more services on top that can be slashed, more conditions, and this case might be one of
them, you might potentially go above 100% slashing. How do you deal with that?
Is there some kind of systemic risk there that the system generates, and if there is, like,
how do you think about it and how do you limit that this happens, which obviously we don't like to
see. No, no, we don't like to see that at all, and we try to minimize the chances for something
like that. But let's zoom out a little bit and see what we envision the main use case of
EigenLayer to be. Where we envision EigenLayer playing the biggest role is building services for the
Ethereum ecosystem. So if you look at what an application, a dapp, today needs,
it is paying Ethereum for, you know, decentralized trust on block building, right?
Like on block making and block validation.
But there is a whole domain of other things that are all needed in order to make this
application usable.
This could be things like I need oracles, I need bridges, I need MEV management, I need
data availability, I need, you know, faster settlement, whatever the set of other services
in a modular world that you may want in order to actually
have the application be trustless.
And if you look at this, and when we talk about systemic risk,
I think there is a kind of natural thing
that people freak out when they see, like,
oh, you're leveraging trust and that is going to trigger systemic risk.
But actually, I think EigenLayer significantly
reduces systemic risk.
And I'm going to kind of make a case for that.
And why do I say that?
So what is the systemic risk that we see today?
You're an application and you consume, you know, trust services from Ethereum,
but you also consume trust services from oracles, from bridges, from, you know, other things that you all depend on.
And in some sense, trust is naturally based on the minimum bottleneck, right?
Like whichever is the bottleneck trust, that determines how much trust you're getting.
So you have, you know, an ecosystem service from Ethereum that basically makes blocks,
but you also need all these other things.
So imagine if you had like $20 billion staked in ETH on L1,
but you have like $1 billion staked in your oracle, $1 billion in your bridge,
and $1 billion in your data availability and other things.
Now, the trust that you're getting as an application:
what is the cost of attacking all these applications?
Find the least trusted one and take over that thing.
That is all that is needed to actually attack the whole system of applications.
And, okay, now imagine in an EigenLayer world, you are fully over-leveraged,
which is what we are kind of worried about.
What does over-leverage mean? Every staker is doing everything, okay?
Every staker is participating in the oracle, every staker is participating in the bridge,
every staker is participating in the data availability.
In fact, I'm going to argue that this is the best possible world we can build.
Why is this?
Because now to attack any one of the applications, you have to attack either the core L1 or one of these services.
And if you attack any one of these services, you're going to slash 51% of that service.
So that's the mental model for building on EigenLayer: if the safety of that service failed, then you should be able to slash a majority of the stake backing it.
So if you take this mental model, then essentially, to now attack any one of these services,
you need to attack this $20 billion stake.
You cannot attack a $1 billion stake and take over the system.
So the cost of corrupting the ecosystem actually increases under perfect restaking.
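A rough way to see that argument is as a toy Python model; the stake figures are the illustrative ones from the conversation, and nothing here is a real protocol calculation.

```python
# Toy model of the cost-of-corruption argument (stake figures are the
# illustrative ones from the conversation).

def cost_to_corrupt_fragmented(l1_stake: float, service_stakes: list[float]) -> float:
    # Without restaking, an attacker just takes over the cheapest dependency.
    return min([l1_stake] + service_stakes)

def cost_to_corrupt_restaked(l1_stake: float) -> float:
    # Under perfect restaking, every service is run (and slashable) by the
    # whole staker set, so attacking any of them means attacking the full stake.
    return l1_stake

l1 = 20e9                   # $20B staked on the L1
services = [1e9, 1e9, 1e9]  # oracle, bridge, data availability
print(cost_to_corrupt_fragmented(l1, services))  # 1000000000.0
print(cost_to_corrupt_restaked(l1))              # 20000000000.0
```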
But so does the reward, no?
So does the reward.
The reward doesn't increase, actually.
Why?
Because you could have already taken
that gain by just attacking the L1.
So the reason the reward doesn't increase
is it's the same set of dapps.
So there is a set of dapps already living on Ethereum.
They trust the Ethereum L1, right?
And they were dependent on all these external
non-ecosystem services, like oracles, bridges,
which were not done by the L1.
They were dependent on all of these.
And by attacking the L1 anyway, you could have attacked these dapps.
Because trust is the min of many things:
if I trust you and Felix and somebody else, and anyway I'm trusting, let's say, Friederike with my life,
I might as well trust her with my bag and, like, other things.
If instead I give those other things to other people, then, you know, my trust exposure
actually increases.
But the cost of corruption doesn't increase, the profit from corruption doesn't increase,
because Friederike could have already taken my life.
The same thing is happening with the L1: the dapps are already trusting the L1 for
block production. If you double sign, if you do any basic attack at the L1, you could have
taken away value from the dapps. I'm with you for part of the argument. So I mean,
obviously if you have a system that is composed out of individual components, the weakest link
is what you need to break in order to break the system. Totally with you on this. But if you have
different chains or different applications, you know, secured by, you know,
the same stake, to me, that's not the same thing. So basically, if you have, say, the
oracle that would have been incorporated anyway, yeah, that's still the same system. So if I look
at Ethereum now, say, you know, ether is worth, and I'm making this up now,
150 billion, give or take, but the ETH ecosystem is worth 500 billion or 800 billion, whatever,
I haven't done the maths, but probably ballpark correct.
And that's probably fine.
But if you were to secure the entire global economy with 150 billion,
this would probably not be fine, right?
So basically, to me, the question is, is there a way to mathematically deduce
how much you can actually secure economically with the stake that
we're talking about here, namely the 150 billion of ETH? Yes. So the first point, I think,
you know, what you're saying is, well, if there were applications that already existed on
Ethereum today and only they are being served by these additional services, then yes, the profit
from corrupting doesn't increase, because they're already there. But if it does increase
the economy significantly, because of like, oh, actually the same money is now securing
a much bigger economy, then, you know, you'd be in serious trouble. But let me make a
counterpoint to this. Imagine we were four years back, okay? And, you know, somebody comes and
tells me, ETH is worth whatever, $100 million, $200 million today. And, you know, there's no way
that ETH can secure like a $500 billion worth of an ecosystem. And that didn't turn out to be the case;
it is now securing a $500 billion worth of ecosystem.
And you can ask, why is that?
Because ETH grew in value relative to the ecosystem it is securing.
And this is for the good, right?
Because actually, you know, the value flow from the fees actually sustains enough value
to actually make sure that ETH is valuable enough.
And one could make the same argument that by making what Ethereum secures a little bit more meta,
by making it not only what is run on the Ethereum Virtual Machine,
but basically anything that the Ethereum trust network can secure,
you're actually just increasing the scope of applications:
what could be built as applications on the EVM versus applications
which are natively new distributed systems that can then be programmed on top.
And you could make the argument that basically it will grow, because fees
from not only block making, but also from all these other services,
accrue back to Ethereum.
But I think there's a separate part to your question,
which is a more mathematical kind of a question.
And the question is, what are the exact equations
that calculate and let us understand this over-leveraging?
So to do that, we have to actually look at what's
happening in the ecosystem today already.
And you look at it, you have 20 billion
dollars. You said Ethereum's worth $150 billion. Yes, that's true, but ETH stake is worth
maybe less than $20 billion today. But it is securing this $500 billion ecosystem.
Why are we not worried about this over-leverage? What is going on, right? Like, what's the
underpinning of not worrying about this over-leverage? And I think there are two things,
two ways to look at it. One is a practical way, and then I'll go to the mathematical way. The practical way is
you look at it and you say, oh, you know, you have $20 billion
at stake. And if you're an attacker,
firstly, you have to acquire $20 billion
or whatever, two-thirds of $20 billion.
And because there is a slashing protocol at work,
you're guaranteed to lose that $6 billion, $7 billion,
$13 billion, whatever, depending on what type of attack
you're pulling off.
So you're guaranteed to lose a whole bunch of money.
Whether you are able to make away with a whole bunch of money
is anybody's guess, right? Like whether you will be able to run away with more than
$10 billion from this ecosystem is questionable. Because of what? Because
you don't have exchange liquidity at that scale. You don't have, you know, exit points,
and there are frictions, and society will fork you out, and all these other things that essentially
constrain the profit from corruption. So the cost of corruption, the cost to the attacker, is guaranteed,
in that they have to take this $6 billion or $13 billion risk.
And the profit that they can make is potentially limited.
And these two together constrain the system enough
that practically we don't see these attacks.
But you're evading my question, right?
So basically because I'm asking what's the upper bound
and you say, no, no, no, but it's lower than the upper bound,
which I am sure it is.
I mean, so basically I totally concede that, say,
you break Ethereum, the Maker token probably will crash.
I mean, totally with you on that.
But I mean, yeah, is there a way to calculate a number?
Yeah, that's the next part to the question.
So I just answered the practical way.
We can also actually understand these more mathematically.
And to do this, actually, you need to redesign Ethereum a little bit.
And so, for example, what you need is, okay,
how do we mathematically understand what are the limits of leverage and why we are not over-leveraged today on Ethereum, right?
Before we extend leverage even more to other things that I'm talking about, we have to understand why we are not over-leveraged today.
So the first part was a practical argument, arguing that there is a kind of hardening of security at a certain scale, and that's the argument.
But the second part is a more mathematical argument, and what you can do is look at the total
amount of transactions. So whenever there is some kind of an event, there is either a social response
or some other response which happens within a certain time. And you can bound the total value flow
within that amount of time. So if you can bound that, you know, you will not be able to move
more than whatever, you know, six billion or seven billion or something within that time,
which is the incident response time. The incident response time may be, like, the time to detect a
double-signed block and then shut down transfers. It may be the time for a community
to fork out an obvious, you know, fraudulent fork. So essentially what you want to do
is you want to bound the total volume of transactions within the event horizon. Right? Like, so
there's an event horizon and then there is a total transaction volume. And right now
there is no nice way to do it, because we don't know how much Binance transferred.
We don't know how much Kraken transferred.
We don't know how much some other, like, mom-and-pop exchange transferred.
We don't know how much somebody sold a Mercedes-Benz for, and so on.
Right.
Like there's all this activity happening outside and there is no protocol level monitoring.
And why am I focusing only on exit paths? Because you can kind of categorize transactions
into two types.
One is transactions that are internally atomic.
I'm selling my ETH and getting a Bored Ape,
right, like I'm selling my ETH and getting a Bored Ape.
This is an atomic transfer inside the blockchain.
If it reverts, both legs revert.
I either have the ETH or I have the Bored Ape,
and I'm kind of fine both ways.
But there are transactions which are not atomic inside the blockchain,
right?
One leg of the transaction is happening on the blockchain
and another leg is happening in the real world, right?
These are the ones that get screwed
if you actually have, like, you know,
blockchain reversals and reorgs and things like that.
And what you can start doing is, if you can bound the total value of non-atomic transactions
per unit time, then you can actually start saying that the $20 billion, whatever the amount staked is,
does not need to bound the total value at risk.
It only needs to bound the total volume transacted within the event horizon.
So that is, you can actually, okay, so this is a whole other discussion.
I have a full design for it:
how to actually modulate the Ethereum protocol so that the slashed funds are used as insurance
against reorgs.
So you can actually start selling insurance bonds against the slashed funds, or at least a portion
of the slashed funds.
Even if half of the funds are burnt, the other half is used for insurance.
And anybody who's transacting and wants protection against, you know, these bad events actually
takes out insurance from the Ethereum protocol.
And by doing this, what happens is the Ethereum protocol creates common information on the total volume transacted within a unit of time, because you wouldn't transact huge volume without having commensurate insurance.
So anyway, this is a whole other, like, rabbit hole
I'm happy to talk more about.
But I think you need more sophisticated systems to actually mathematically bound the over-leverage on Ethereum today.
And we are building some of these into part of our protocol, at least the roadmap of our protocol,
but we do need more native support from Ethereum.
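As a hedged sketch of the bounding idea being described (the names and numbers below are hypothetical; this is not the actual insurance design), the condition is roughly:

```python
# Hedged sketch of the "bounded value at risk" argument. A finality attack
# only pays if the non-atomic value that can exit within the incident-response
# window exceeds the stake the attacker is guaranteed to lose to slashing.

def attack_profitable(non_atomic_value_in_window: float,
                      guaranteed_slash: float) -> bool:
    return non_atomic_value_in_window > guaranteed_slash

def insurance_capacity_ok(requested_cover: float,
                          slashed_funds_reserved: float) -> bool:
    """Hypothetical reorg-insurance check: cover sold per window cannot
    exceed the slashable funds reserved to back it, which makes the
    per-window non-atomic volume observable and bounded."""
    return requested_cover <= slashed_funds_reserved

print(attack_profitable(non_atomic_value_in_window=5e9, guaranteed_slash=13e9))  # False
```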
What kind of heuristics are you using for the event horizon and the volume?
I mean, how would you define the event horizon?
How long do you think that is on Eth2?
Okay, so the event horizon depends on the type of bad event you're worried about.
And I think one important kind of bad event we should be worried about is short-term reorgs.
I claim a block is finalized, right, and I make a lot of transactions.
And then, like, the stakers go and create another, like, fork
with another finalized block, right, which should not happen and is slashed by the slashing
protocol.
But, you know, if there is more money to be made, it could happen.
But with Eth2, that's much more different.
No? I mean, so basically reorging, I mean, you can miss a block and you can produce a network split by submitting a block deliberately late, but you kind of eliminate most of the reorg mess that we had with Ethereum 1, no?
I mean, eliminate to the extent that it creates an economic damage to the attacker. But if the economic profit to the attacker
is 500 billion...
Okay, that's true.
So that's the thing, absolutely, right?
Like, proof of stake created slashing, and slashing creates an economic damage.
But I may take the economic damage because my economic profit is higher than the economic damage.
And so I think that's the thing, because, you know, producing invalid blocks is not a valid attack on full nodes, which can validate the blocks.
So really the only major safety attack is reorging finalized blocks, right?
And you can reorg finalized blocks if you have majority stake and you're
willing to lose it. And so the question is, like, suppose somebody reorgs an Eth2 block, right?
And at what time scale? So if they reorg a block which is 10 days old, likely we will all,
even if we slash it, we will continue with the other fork, right? Like the fork in which,
you know, which was not 10 days old. But if it was 12 minutes
old, which one will we continue with? I don't know, right? And so the event horizon is basically,
you know, the time to detect a reorg attack, or the time within which you can effectively
make a reorg attack which will not be rejected. And so that's roughly what I would say.
I mean, it's on the order of basically minutes or hours rather than on the order of, like,
months or weeks, right? And so essentially you have to just bound the economic
volume traded within that period. And I think that's why Ethereum is safe today, because of these
reasons. And we can imbue something like EigenLayer with the exact same set of conditions.
But we can even make it more programmatic because we are building it. We can say that you cannot
transact more than a certain value per unit time and so on. And that's enough to basically
make it very difficult to execute these attacks.
Right, yeah, thanks for this excursion, and we would love to see the design of the insurance thing.
That actually sounded super cool.
I hope we'll learn more about that soon.
I wanted to take it back, obviously, to the topic we also want to talk about, like, EigenLayer itself, right?
A little bit kind of the economics, the participants.
We were talking a little bit about the validators basically opting into these other services.
Probably you could also see maybe liquid staking protocols
forcing their validators essentially to opt into certain services.
And then on the other hand, you have people paying for these,
these EigenLayer services that are being provided.
Is that correct?
And then I guess how, or who, pays? First of all, is it the applications?
How do they pay for it?
What are kind of the models, and then who then receives the payments?
I guess that's the validators again.
But maybe you could talk a little bit just about how this works.
So, the economy that is underpinning this.
One way we think about the economy underlying EigenLayer
is to start with first principles.
And the first principle is that the core value proposition of blockchains
is decentralized trust.
And the way we think of EigenLayer is as a marketplace for decentralized trust.
If decentralized trust is such an important thing
in this blockchain economy,
we need a marketplace where decentralized trust is bought and sold.
And people had recognized this in other ways, and one way of thinking about it was block space as a kind of like a unit of decentralized trust.
But we think that's not the right level of abstraction.
Block space is not the right level of abstraction for the generic nature of decentralized trust.
You may want to run a new distributed system.
You may want to run like a secure multi-party computation, whatever, that was not natively in the protocol.
So the right unit is, you have this decentralized trust network and you are basically committing to do additional validation.
And the question is, how much value are you willing to take for it?
And so to elucidate this economy a bit, so there are two sides to this market.
One side is stakers who are then offering their decentralized trust services to others, right?
And the other side of the market is,
we think of them as middlewares, but they could be generic distributed systems, right?
Services that are built on top of this.
These services, you know, just to make it concrete, let's think of it, think of the data availability service that we're building.
You can think of it just simply as a data storage service.
It's not a data storage, it's data availability.
But just for simplicity, let's think of it as, hey, I'm going to take some blob and throw it into this network,
and they have to store this blob for this amount of time.
And now you want to do this.
Who is paying for this?
So somebody creates this service.
Let's say, you know, Friederike and Felix wrote the service.
They want to create a data storage service.
They wrote it.
And they are tired of, like, pumping and shilling a new token.
So they say, no, I'm not going to do it.
I'm just going to run it on EigenLayer.
Just throw it on this distributed network.
And they do it.
And they say, oh, you know, there's a Friederike-and-Felix wallet.
So you have a wallet, and you also create an economy around it:
you say anybody who wants to store data on this distributed network needs to pay, say, one ETH per gigabyte or whatever, right?
One ETH per gigabyte.
Okay.
So you have this one ETH per gigabyte.
And now somebody else who wants to use this service, they come, and they have some interfaces.
They store the data on this decentralized network and they pay that one ETH per gigabyte
to store that data.
And, you know, when Friederike and Felix created it,
they also created a distribution economy,
which said that, yeah, we will take 30% of this,
like, one ETH per gigabyte,
and the remaining 70% goes to the validators.
And the validators look at it, and they say,
does this make sense for me or not?
And they opt in if it makes sense,
if that economy makes sense for them.
And so every time somebody comes and
stores data, they collect that one ETH, and 0.3 ETH goes into your wallet, and then the
remaining 0.7 ETH gets redistributed among all the stakers. So there is really a third party:
there was originally a service, and then there were the stakers. Then there are users of that
service, which could be roll-ups, which could be applications, like distributed applications,
which could be end users who just want, like, a Dropbox-type thing to be built on a blockchain.
So that's the economy.
The economy is basically: the creator of the service decides how value is apportioned between the creator, the innovator, and the service providers, the stakers, right?
So how this economy is distributed.
And once they set forth these conditions, what they actually do is, you know, as a service creator,
and to delve into this a little bit more, the service creator also creates a container, which
the stakers should be able to download and run, which does this particular service:
downloading and storing the data for this period of time if the fee has been paid.
And they also create a smart contract.
The service, or middleware, creates a smart contract which talks to the EigenLayer smart contracts
and establishes who can participate in the system: ETH stakers with, whatever, you know,
at least so much stake, or do you allow staked-ETH holders, what is the
entry condition to participate in your particular service. That's the first part. The second part is,
what is the payment condition? Oh, it is actually one ETH per gigabyte, and 0.3 goes to the creators
and 0.7 goes to the stakers. Those payment conditions are encoded in the smart
contract. And finally, negative incentives like slashing are also encoded in the smart contract.
It says, oh, there is a recall game, and I say that randomly I'm going to recall some bytes
and you have to produce them, and if you don't produce them, you'll get slashed.
So some kind of negative incentives.
Those are encoded into the smart contract.
Now, if you're a staker and you're participating in EigenLayer,
you can go and express your preference
whether you want to opt into this particular service or not.
And you go in and sign a thing and say that, yes,
I want to opt in to this particular service.
Then you're registered for that service,
which means you're supposed to be providing it.
And if you violate some conditions stated in the smart contract, then you will get slashed.
But if you don't violate any of those things, you will continue to receive payments at the encoded rate.
So that's the core economy of EigenLayer.
So to build an EigenLayer service, you have to write an off-chain container that stakers can download and run.
It can be in an arbitrary language, as of now.
And there is an on-chain slashing contract, or on-chain service contract, you have to write,
which controls gating who participates, positive incentives, and negative incentives.
These are all encoded into the smart contract.
And so when somebody is opting into a particular EigenLayer service,
they know exactly what set of things they are opting into.
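To summarize the service model just described, here is a minimal, hypothetical Python sketch; the rates, entry condition, and names are illustrative placeholders (the one-ETH-per-gigabyte fee and 30/70 split come from the example in the conversation), not EigenLayer's actual contracts.

```python
# Minimal, hypothetical sketch of a service's on-chain terms: who may opt in,
# how fees are split between creator and restakers, and what gets slashed.

from dataclasses import dataclass, field

@dataclass
class Service:
    min_stake: float                 # entry condition: restaked ETH required
    fee_per_gb: float                # e.g. 1 ETH per gigabyte stored
    creator_share: float             # e.g. 0.3 -> 30% of fees to the creator
    operators: dict[str, float] = field(default_factory=dict)

    def opt_in(self, operator: str, restaked_eth: float) -> None:
        if restaked_eth < self.min_stake:
            raise ValueError("entry condition not met")
        self.operators[operator] = restaked_eth     # registered for the service

    def pay_for_storage(self, gigabytes: float) -> tuple[float, float]:
        fee = gigabytes * self.fee_per_gb
        to_creator = fee * self.creator_share
        to_stakers = fee - to_creator                # shared among operators
        return to_creator, to_stakers

    def slash(self, operator: str, fraction: float) -> float:
        """Negative incentive, e.g. a failed random data-recall challenge."""
        penalty = self.operators[operator] * fraction
        self.operators[operator] -= penalty
        return penalty

svc = Service(min_stake=32, fee_per_gb=1.0, creator_share=0.3)
svc.opt_in("staker-1", restaked_eth=32)
print(svc.pay_for_storage(1))      # (0.3, 0.7) ETH split per gigabyte
print(svc.slash("staker-1", 0.5))  # 16.0 ETH slashed for a missed recall
```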
How much due diligence does this require from the validators, right?
So basically, if I'm a mom-and-pop validator,
do I know which things I should be validating on?
And if it's too difficult to discern which ones are good things to co-validate for,
I might be foregoing yield, which might make it economically unviable to validate myself.
And I mean, this is something that the network needs, right?
And so basically, we've seen recentralization, you know, for other reasons.
So, I mean, basically, people just stake with their exchanges, or, you know, there's Lido, and, you know, don't get me started on proposer-builder separation and MEV-Boost, whatever.
So basically, I mean, we've seen these things; does this add to this entire situation?
It adds and it subtracts.
And I'm going to explain both sides.
How much due diligence does a home staker need?
And what we're trying to do is to create an audit economy around these EigenLayer services.
So, just like smart contract audits are how users trust smart contracts
to do the things that they say in the white paper or whatever other things,
and there is a barrier for a user to use a smart contract,
in the same way now, there is a barrier for a staker to opt in to new services.
So there is an attendant audit economy that is needed.
And we want to absolutely minimize the amount of diligence that somebody has to do.
And there may be multiple categories of services, those that are kind of vetted by us or some other reputed agencies and so on.
And, you know, stakers may feel more inclined to opt into them.
And there may be others. So you can imagine a world where a staker,
a home staker, says, yeah, you know, give me Friederike's curation of services.
I only opt into everything that Friederike says, right?
And that should be possible.
And if somebody says, no, no, actually, I want to be the one who makes the decision,
that's also, you know, something that is available in this free economy.
Okay.
So there is a bit of a barrier on auditing, vetting what these services are.
So that is something, but it can
be kind of delegated, right?
Like, just that ability to say that I'm trusting X for doing the delegation and I'm just following
along.
And in terms of missing out on yield, you know, I think, like everywhere else, there will be a power
law: there will be maybe three services that account for pretty much all of the yield, like
90% of the yield, in this kind of a platform.
And it's the same way like we have dapps, and, you know, there are thousands of dapps and maybe
three account for pretty much all of the fees today on Ethereum.
And there'll be a similar thing, where home stakers just opt into these three services and essentially get all of the yield.
And so one of our interests is in making sure that these services are as lightweight as possible, so that a home staker is able to opt in.
And this is kind of a guiding principle for us: we think something is scaling only if each node needs to do very little, but the system can do a lot.
Like a scalable system basically means that each node does little, but the system together does a lot.
And that's when decentralization and scalability are not in fundamental tension.
And we think that, in general, as an ecosystem, we have understood enough principles that actually we know how to build systems which scale horizontally.
For example, our data availability service is built in such a way that each staker
needs like 0.3 megabytes per second in network bandwidth.
But together the system bandwidth is 15 megabytes per second.
So it is not based on everybody needing to have a lot of computational infrastructure.
It is based on everybody doing a little but data being distributed through this network
and thus achieving scaling.
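As a back-of-the-envelope illustration of that horizontal-scaling claim: only the roughly 15 MB/s system and 0.3 MB/s per-node figures come from the conversation; the operator count and coding expansion below are assumptions for illustration.

```python
# Rough arithmetic: distribute erasure-coded data across many operators so
# each one carries only a small slice of the total bandwidth.

system_bandwidth_mbps = 15.0   # target data-availability throughput
coding_expansion = 2.0         # assumed: coded data is ~2x the raw size
num_operators = 100            # hypothetical number of restaked operators

per_node_mbps = system_bandwidth_mbps * coding_expansion / num_operators
print(per_node_mbps)           # 0.3 -> each node does a little, the system does a lot
```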
So one part of the answer is making it easy to do audits
and follow along other people's
recommendations. Another part of the answer is there are only a few services that should matter, and we try to make those services
be, you know, easy for home stakers to participate in. And I think there's a third dimension to this answer, which actually I'm most excited about.
So if you look at the whole MEV and other things going on in Ethereum,
one thing we'll see is there is a lot of discussion about how to keep home stakers decentralized.
And if you just examine the system objectively, there is a gradient or pressure to centralization,
but there is no gradient or pressure to decentralization.
The system doesn't have it.
We only enforce it socially or religiously.
Right.
Like, the system by itself has no pressure.
There is no advantage in being decentralized.
There is some advantage in being centralized.
So there is a gradient or pressure to become more centralized.
And all we can do, all we are trying to do when we're doing, you know,
MEV-Boost or PBS or any other design consideration is how to minimize the gradient
to centralization.
That's all that is being done.
There is no gradient to decentralization.
It's just not there.
And because decentralization is not objectively measurable,
it's not possible to make the protocol recognize and incentivize it, even though it's one of the most critical aspects of building the Ethereum protocol.
So we are very excited about the role EigenLayer can play in this.
What is this?
So in EigenLayer, we don't want the platform to exert subjectivity, but we want middlewares or services consuming, you know, decentralized trust to exert subjectivity.
What do I mean by that? For example, imagine Felix is building a service. This service is based on, you know, a threshold encryption. Okay, so part of threshold encryption is dividing some secret into many, many chunks and each person holds a chunk. And if they all don't collude with each other, or at least majority of people don't collude with each other, the secret remains a secret. Okay? This is an example of something which is not based on economic security.
This is based purely on decentralization, because people should not be able to collude with each other easily.
So there are certain services which can absorb trust from economics,
and there are certain services which only absorb trust from decentralization.
And threshold encryption is a great example of something that only absorbs trust from decentralization.
So Felix may come in and say, when building on EigenLayer, that he doesn't care about who has how much ETH,
but he has some subjective measurement of decentralization.
Maybe he comes in and says
he only wants Rocket Pool stakers to participate in his system.
Or he only wants certain home stakers to participate in his system.
Or he has an oracle feed that he himself creates,
and it says only people on, like, you know, my whitelist can participate in this ecosystem,
because he has somehow subjectively vetted that they are actually more decentralized.
So if this happens, then what happens is that
Felix is paying for decentralization.
So the decentralized quorum can potentially earn even more than a centralized quorum,
creating a gradient, a pressure to decentralization.
So another way of thinking about it is,
if we all value decentralized trust, why are we not paying for it?
Because the rich expressive markets to value decentralized trust don't exist today.
And if you allow for rich expressive markets to value decentralized trust,
people will pay more for things to be decentralized, in things that you care about.
And we don't know how the economics is going to play out,
but at least there is a possibility to create a gradient for decentralization.
This is something that I'm super excited about as a possibility for what EigenLayer can enable.
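A tiny sketch of the two kinds of entry conditions a service could express on such a platform; the predicates and the whitelist below are hypothetical examples, not a real API.

```python
# One entry condition buys economic security, the other buys
# decentralization directly via a subjectively curated operator set.

def economic_entry(restaked_eth: float, min_stake: float = 32.0) -> bool:
    # "I don't care who you are, only how much slashable stake you bring."
    return restaked_eth >= min_stake

def subjective_entry(operator: str, curated_set: set[str]) -> bool:
    # "Only operators I have subjectively vetted as decentralized may join,
    #  e.g. home stakers or Rocket Pool operators on my whitelist."
    return operator in curated_set

home_stakers = {"alice.eth", "bob.eth"}   # hypothetical curated whitelist
print(economic_entry(40.0))                         # True
print(subjective_entry("alice.eth", home_stakers))  # True
```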
What kind of services do you see building on top of eigenlegal?
So basically, what are the biggest use cases you see coming on top of Igenre?
So, I mean, of course, we are building the first service ourselves, which is a data availability service.
And the reason we chose to double down on building a data availability service is, of course, the Ethereum roadmap is strongly oriented towards a modular ecosystem where roll-ups basically write data into Ethereum and write commitments.
And one of the things that we want to see is a world where thousands of roll-ups can flourish.
And to do this, you need much more data bandwidth than available on Ethereum today.
And even in the foreseeable future, including things like Dunk Sharding, we want to provide
100x,000x more data bandwidth than what is available.
And the set of techniques already actually have been pioneered in the Ethereum research community,
and we can build, you know, much more flexible engineering modules around these basic cryptography,
like using, you know, things like KCG polynomial commitments and how they were used in dark sharding.
We can take them and engineer like many different kinds of systems around it.
So data availability is one example of what we are building and which could be a very useful ecosystem service.
And we are trying to build it in a way that stakers of all shapes and forms can participate in it.
That's one example.
Another example of what could be built on top of EigenLayer
is a whole host of MEV management services.
Why am I talking about MEV management services?
If you're staked, if you're staking in Ethereum,
but you're also restaked on eigenlayer,
then you can start making credible commitments about your behavior.
You can say, for example,
I'm selling you a portion of my block space.
You're doing an auction, but you're not doing an auction where you're selling the entirety
of block space, which is what is happening in the MEV boost market today.
Whereas what you could do is you could say, yeah, I'm selling most of my block space,
but I still retain the ability to add stuff at the end of my block space.
This is something that you could do.
In fact, we had a design for this we call MEV Boost Plus Plus, which is basically saying
you are auctioning off the rights to fill, you know, some portion of the block, but there's
still space at the end of the block where I, as a block proposer, can add in whatever transactions
I want at the end of the block. So you don't have to make a trade-off between expressing
my preferences as a block proposer and the economic upside of participating in an MEV
market. I can do both. So this is one example of what you can build on EigenLayer as an MEV service,
but there's a whole host of other things. You can start doing things like multiple block builders,
decentralized block building. What I do is instead of selling all my block space to one person,
I say, oh, I'm selling the first 30% of my block space to Friederike, the next 30% to Felix,
and so on, right? I can start doing more of these things. And what does Eigenlayer particularly
enable in it? The idea that if I don't stand by my word, if I told Friederike that I'm going to
include 30% of her transactions in the first portion of the block, but I don't, then I could be slashed on Ethereum, right?
So I could lose my ETH.
And that gives Friederike some trust in me actually making this transaction possible.
So the ability to make credible commitments actually opens up the space for how we do transaction ordering priority.
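Here is a hypothetical, simplified sketch of such a credible commitment on block space (not the actual MEV Boost Plus Plus design): a restaked proposer signs a promise about the first K slots of their block, and anyone can later check the published block against the promise to make a slashing claim. All names are illustrative.

```python
# Hypothetical sketch of a "credible commitment" on block space.
from dataclasses import dataclass


@dataclass
class BlockSpaceCommitment:
    proposer: str
    buyer: str
    tx_hashes: tuple[str, ...]   # transactions the buyer paid to have included
    first_k_slots: int           # they must appear within the first K positions


def commitment_violated(commitment: BlockSpaceCommitment, block_txs: list[str]) -> bool:
    """True if any promised transaction is missing from the first K slots."""
    prefix = set(block_txs[: commitment.first_k_slots])
    return any(tx not in prefix for tx in commitment.tx_hashes)


def adjudicate(commitment: BlockSpaceCommitment, block_txs: list[str]) -> str:
    # In a real system this check would run in a contract holding the
    # proposer's restaked ETH; here it just reports the outcome.
    if commitment_violated(commitment, block_txs):
        return f"slash {commitment.proposer}"
    return "commitment honored"
```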
Even things like, I want to build a threshold encrypted mempool.
So I commit to maybe selling the first 30% of my block space.
And then I say that for the remaining 30%, or the next 30%, I'm going to use
threshold encryption. And I agree, I send a signature saying I'm going to use the decrypted
version of these encrypted transactions. And if I don't include them in the block, then I'll get
slashed. So it opens up the space for anybody to come in and innovate on MEV management services.
So that's that's one huge category. There are also other things there, things like I want to do
event-driven activation. For example, Felix is like, hey, you know, whenever I'm
under-collateralized on Compound, then refill my collateral from my wallet at this address.
And that's just a kind of standing instruction, an event-driven instruction that he wants to give.
You can do this today using a category of middleware called keepers, which, you know, Gelato and Chainlink have something for, and there are others building in this space.
But the problem with those services is there is a kind of non-attributability problem, whether
that node triggered the transaction but that transaction was not included in the block or that
node did not trigger the transaction and therefore it was not included in the block. This is not
attributable. Whereas on EigenLayer, if a block proposer opts into these event-driven conditions, then it's uniquely attributable
because the block proposer of course controls block space. So this is another example. There's
all kinds of other examples like whole-block flash loans and, you know,
other, like, crazy economic objects you can start building.
Because block proposers are staked, they can kind of opt into covenants that they cannot
break.
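As an illustrative sketch of why this is attributable (hypothetical names, heavily simplified): the check below only needs the proposer's opted-in instruction, whether the trigger fired, and the block that same proposer produced, so a failure cannot be blamed on anyone else.

```python
# Illustrative sketch of the event-driven / "keeper" condition described
# above. Because a block proposer controls what goes into their own block,
# failing to act on a triggered condition is uniquely attributable to them,
# unlike with a generic keeper network.
from dataclasses import dataclass


@dataclass
class StandingInstruction:
    owner: str                 # e.g. Felix's account on the lending protocol
    trigger: str               # e.g. "collateral_ratio < 1.5"
    action_tx: str             # hash of the pre-signed top-up transaction


def check_proposer(instruction: StandingInstruction,
                   trigger_fired: bool,
                   proposed_block_txs: list[str]) -> str:
    """Decide whether the opted-in proposer met their obligation."""
    if not trigger_fired:
        return "nothing owed"
    if instruction.action_tx in proposed_block_txs:
        return "obligation met"
    # The condition fired, the proposer built the block, and the action is
    # absent: the failure is attributable, so a slashing claim can be made.
    return "attributable failure -> slashable"
```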
So this is one class of solutions, MEV management.
Any questions, comments?
Yes.
Let me move away from the financialized use cases a bit.
So in principle, there are lots of things that you would love to have a trust network for
that are non-financialized and consequently are currently
crowded out of the truly credibly neutral blockchain, which is Ethereum.
So can you kind of make it viable for them to run on eigenlayer?
Absolutely.
I think one consequence of like high performance data availability is if you have like
a huge amount of data availability bandwidth, now you know you can start running applications
which are simply financially priced out, right?
And one of the ways we think about this is if you look at the current networks, the operational cost of running the network is far lower than the capital cost of staking.
Like, I'm putting up $20 billion of stake.
So I need at least some APR, say 7% or 10%.
So I need roughly $2 billion annually in return.
So that's the capital cost of staking.
And then there's an operational cost of like scaling and providing whatever services that you want.
And actually the operational cost is not at all dominant today.
Staking cost is dominant.
So you can overlay more operations and still, you know, not suffer significant cost.
That's one part of it.
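A back-of-the-envelope version of that cost comparison, using assumed, purely illustrative numbers rather than measurements, might look like this:

```python
# Back-of-the-envelope version of the argument above. All numbers are
# assumptions for illustration, not measurements.
total_stake_usd = 20e9           # ~$20B of stake securing the network
required_apr = 0.10              # stakers expect on the order of 7-10% APR
capital_cost_per_year = total_stake_usd * required_apr            # ~$2B / year

node_count = 10_000              # assumed number of operators
opex_per_node_per_year = 5_000   # assumed hardware + bandwidth cost per node
operational_cost_per_year = node_count * opex_per_node_per_year   # ~$50M / year

print(f"capital cost   ~ ${capital_cost_per_year / 1e9:.2f}B per year")
print(f"operating cost ~ ${operational_cost_per_year / 1e9:.2f}B per year")
# Even doubling the operational work moves the total cost far less than
# the capital cost already committed to staking.
```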
The other part of it is actually just by better engineering,
you can use the same amount of bandwidth much more efficiently.
And that's the part I was talking about with our data availability service:
every node does a little, but together they do a lot.
And by scaling across nodes, you can actually provide a huge
amount of bandwidth for applications to consume.
And this is one of the reasons we actually built the data availability service first: it just
opens up the volume of use cases, from use cases where there is a high amount of value flowing
per bit of data to use cases where you don't need a lot of value flowing per bit, and then
opens up the long tail of, you know, use cases where there is going to be a lot of data
that needs to be transmitted, while still leveraging a credibly neutral platform like
Ethereum for that.
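A toy sketch of the "every node does a little" idea follows; it is illustrative only, and a real design like EigenDA adds erasure coding and KZG commitments, both omitted here.

```python
# Illustrative sketch: a blob is split across N operators so each one stores
# and serves only a small slice, yet the full data remains available as long
# as enough slices survive. Real designs add redundancy via erasure coding.
def split_into_chunks(blob: bytes, n_nodes: int) -> list[bytes]:
    """Assign roughly 1/N of the blob to each operator."""
    chunk_size = -(-len(blob) // n_nodes)          # ceiling division
    return [blob[i:i + chunk_size] for i in range(0, len(blob), chunk_size)]


blob = b"x" * 1_000_000          # a 1 MB rollup data blob
chunks = split_into_chunks(blob, n_nodes=100)
print(len(chunks), len(chunks[0]))   # 100 operators, ~10 kB stored by each
# Per-node bandwidth and storage stay small, so total system bandwidth
# scales with the number of operators rather than with the weakest node.
```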
Let me zoom right out.
So, I mean, if you look at eigenlayer as a concept, it's kind of a different paradigm in scaling.
So I mean, currently in scaling, we have layer twos.
We have, like, the IBC-style connected layer ones.
And now we kind of have this piggybacking mechanism that is EigenLayer.
Can you talk about how these compare, and whether piggybacking on an existing economic system has negative externalities for that system?
So basically, does it take away anything from Ethereum that you're using it as an economic trust layer?
I think it goes back mostly to the leverage-type questions that we talked about earlier,
right? That's one part. So I won't go into the same thing again. But what other issues are
there? One other issue is the same stake is committed now for Ethereum, but also to do these
other validation tasks. And the one thing to understand is Ethereum is the primary and everything
else is the secondary in this market, because your stake is in ETH. And the actual mechanics,
which I didn't go into earlier, is you stake in Ethereum and then you set the withdrawal credentials
to the EigenLayer smart contracts. So what happens is Ethereum has, like, first dibs at slashing.
Ethereum is basically, so Ethereum is the primary lien holder, so to say. Everybody else is a
secondary on this platform. So that means actually that the Ethereum protocol has priority on
slashing, so I don't think it affects the core properties.
There's one thing, though, which, you know, is just temporary and we hope that it'll get
sorted out eventually, is the idea that when somebody is slashed on eigenlayer, when does
Ethereum get to know about it, right?
Like, if there is a huge delay and, like, the person has actually been completely slashed already,
they don't have anything remaining, and, you know, Ethereum thinks that they have a lot,
but actually they don't.
And this problem can be minimized by, you know, a feature in Ethereum called smart contract triggered withdrawals.
If the EigenLayer smart contract can, the moment slashing happens, immediately trigger a withdrawal from Ethereum,
then basically you don't have this principal-agent type problem.
So that's another dimension that we've discussed with the EF people.
And I think in general, something like smart contract triggered withdrawals helps all kinds of staking protocols.
Other than that, I don't see any significant other aspects to the...
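A simplified, hypothetical model of that primary/secondary priority could look like the following, where Ethereum's own penalty is always settled before any EigenLayer claim touches the remainder:

```python
# Simplified, hypothetical model of the priority described above: the
# validator's balance is reduced first by any Ethereum protocol slashing,
# and EigenLayer claims can only reach what is left, because withdrawals
# flow through the restaking contract.
from dataclasses import dataclass


@dataclass
class RestakedValidator:
    staked_eth: float                      # e.g. 32 ETH on the beacon chain
    eth_protocol_penalty: float = 0.0      # slashing owed to Ethereum itself
    eigenlayer_penalty: float = 0.0        # slashing owed to opted-in services

    def settle_withdrawal(self) -> float:
        """Return what the staker actually gets back on exit."""
        # Ethereum is the "primary lien holder": its penalty comes off first.
        after_eth = max(self.staked_eth - self.eth_protocol_penalty, 0.0)
        # EigenLayer (via the withdrawal contract) can only take from the rest.
        return max(after_eth - self.eigenlayer_penalty, 0.0)


v = RestakedValidator(staked_eth=32, eth_protocol_penalty=1.0, eigenlayer_penalty=16.0)
print(v.settle_withdrawal())   # 15.0 ETH returned to the staker
```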
You would always withdraw the entire validator, like all 32 ETH, for every slashing? I could
imagine, like, some slashings being kind of minor, just to kind of discourage...
Yeah, I think one of the principles we're using for EigenLayer slashing is to be as rare as possible,
and, when it happens, to be as severe as possible.
So as severe as possible, because we don't slash for things like uptime;
we only slash when there is, like, a significant safety failure,
a provably malicious action.
So when there is a provably malicious action,
we don't need to slash a little.
So slashing is designed to be very rare and severe when it actually happens.
But this, you mean, is this for your specific service you're building?
Or, because I guess that's kind of customizable for someone...?
That's right, but that's the recommendation for all the services.
And we want to, uh, because, you know, we don't take slashing lightly, and I think
it should not be taken lightly by any service either, uh, especially because of this primary-secondary
type problem that we talked about, and it should only happen when there is a clearly provable malicious action.
If I'm slashed, do I have a recourse? So basically, say, for instance,
I'm evil.
Ha, ha, ha.
And I build a service on top of eigenlayer that uses a malicious contract to slash unsuspecting stakers.
Do they have a recourse?
Can they do something against this?
Absolutely.
So this is something we take very, very seriously.
And no amount of auditing and other things is sufficient to guarantee that there's no, like, edge case or malicious code base in
the slashing. So I think that is a significant problem, especially in EigenLayer,
which couples trust across multiple systems. You know, it could very well happen
that all the ETH stakers opt into Friederike's, like, evil contract, and, like, you know,
they all get slashed, and it could be like an absolute nightmare. So the way we
deal with this is by requiring, or like enabling, what we call a slashing veto.
There is a veto for slashing run by like a reputed committee.
The only thing this committee can do is basically veto slashings.
And you can think of basically slashing needs approval both by the contract and by this committee in order to actually pass.
So that's the trust model.
It's basically you're trusting one of these two to work correctly to protect against slashing errors.
And if both are malicious, then there is a problem.
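In other words, a slash is effectively a two-of-two decision. A minimal sketch of that logic, assuming just a boolean request from the service's contract and a boolean veto from the committee:

```python
# Minimal sketch of the two-of-two slashing model described above: a slash
# only goes through if the service's contract requested it AND the veto
# committee did not block it. Names are illustrative.
def slash_executes(contract_requested: bool, committee_vetoed: bool) -> bool:
    """A slash requires the contract's request and the committee's non-veto."""
    return contract_requested and not committee_vetoed


# A buggy or malicious contract fires a slash, but the committee vetoes it:
assert slash_executes(contract_requested=True, committee_vetoed=True) is False
# A legitimate slash that the committee lets stand:
assert slash_executes(contract_requested=True, committee_vetoed=False) is True
# The committee alone can never slash anyone:
assert slash_executes(contract_requested=False, committee_vetoed=False) is False
```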
Okay, but that's currently social consensus, right?
So basically it's a pool of known individuals who say, like,
clearly, Friederike is out of her mind.
This was not a slashable offense.
Give back the stake.
That's correct.
So this system relies on both a neutral, objective algorithm,
which is, you know, a smart contract, plus a social layer.
So the thing that blockchains have natively is a social layer to fork the chain if
something crazy happened.
Sure.
And we are an overlay layer, and we don't have the ability to fork Ethereum if something happens,
or we don't want to assume the ability to fork Ethereum if something bad happens.
So we are incorporating the social layer into the protocol, and that's a necessity.
And so this layer is not run by like an economic committee.
We do not think that it's correct to have a token committee or whatever, you know, run this thing.
It should be run by trusted individuals, reputed entities in the ecosystem.
Yeah, that makes sense.
Coming back to the second half of my question,
how does this compare and contrast with other scaling solutions,
basically loosely comparing EigenLayer with layer 2s and IBC-connected blockchains?
Yeah.
How does it compare and contrast?
So, EigenLayer is designed for a modular world.
So it is designed for the roll-up world.
So it doesn't particularly add anything to the one part of the roll-up world which is, you know,
you want to do zero-knowledge proofs or other, like, economic games in which you can actually
prove that your execution state is correct.
I think that is something that we like a lot, and, you know, it's completely complementary
to what EigenLayer is offering.
So it does offer something to, in terms of our particular solution on data availability,
but also for the ability for others to build even better data availability solutions on top of eigenlayer,
I think that is something that we are quite excited about,
the fact that, you know, the area of open innovation can be quite large there.
In terms of other services, other kinds of, like, paradigms,
So just to add a little bit more there, for things like optimistic roll-ups, you have a layer of economic security, which is the sequencer is basically making a claim that what they said is correct.
And then you also have a layer of verification, which is that they'll get slashed if that doesn't happen correctly.
You know, on things like eigenlayer, you can reuse a lot of stake and provide more economic security for things like optimistic roll-ups.
So that's something interesting.
it adds to the optimistic roll-up ecosystem.
On the ZK-rollup ecosystem,
I think one thing something like EigenLayer adds
is, you know, proof verification on Ethereum is still expensive, right?
And it is expensive because of some basic fundamental limits.
You know, if you use the Ethereum blocks only to do like ZK-proof verification,
maybe you can do like 15 to 30 ZK-proof verifications per block.
So that's the current like block size and gas consumption of these things.
So if you had a world where there are like thousands of different roll-ups, then they cannot
all write ZK proofs into Ethereum every block.
So that's not possible.
But what they can do is if there was a kind of like restaked quorum of all the Ethereum
stakers and they all verify.
But the fact is verifying ZK proofs is very trivial off-chain, right?
Like, you can run it in parallel, you can check hundreds of ZK proofs in parallel. Each
of them may take only, like, tens of milliseconds, and so you can actually verify thousands of
ZK proofs on a reasonable node, off-chain. So the
proposal could be something like: you can create a service on top of EigenLayer where all
the ETH stakers participate, and they verify, like, thousands of ZK proofs in parallel, and then
they just certify on Ethereum that they have verified all of this. So this could be an example
of a kind of synergy with things like
zero-knowledge roll-ups.
And the one thing we see in ZK rollups today
is they wait for a long time to batch
because of the verification costs.
And you don't need to do it,
and you can have a bridge
which moves data between the roll-up
and Ethereum every block
if you had a layer like this.
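A conceptual sketch of such an off-chain proof-verification service follows. The verifier here is a placeholder; real operators would run an actual SNARK verifier and sign the resulting digest with their restaked keys at risk.

```python
# Conceptual sketch of the service described above: restaked operators verify
# many rollup proofs off-chain in parallel and post one aggregate attestation
# to Ethereum instead of one on-chain verification per proof.
from concurrent.futures import ProcessPoolExecutor
from hashlib import sha256


def verify_proof(proof: bytes) -> bool:
    """Stand-in for a real SNARK verifier; each call is cheap off-chain."""
    return len(proof) > 0                 # placeholder check only


def attest_batch(proofs: list[bytes]) -> bytes | None:
    """Verify proofs in parallel; return a digest to post on-chain if all pass."""
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(verify_proof, proofs))
    if not all(results):
        return None                       # refuse to attest if any proof fails
    # One small commitment covers the whole batch; a quorum of restaked
    # operators would sign this digest and could be slashed for signing a
    # batch that contains an invalid proof.
    return sha256(b"".join(proofs)).digest()
```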
Going to your broader question on things like IBC
and the external ecosystem,
EigenLayer bears close similarity
with what is called interchain security,
which is basically
one chain providing security to other chains.
I think there are a couple of important differences.
Interchain security, as it is being talked about today,
requires that the provider chain has to have a governance upgrade
to opt in to serve this other chain.
And I've been working in this space for long enough that
anything that has a governance upgrade,
I'm like, okay, that's too slow.
So I like the nature of like what we are doing with eigenlayer,
which is basically validator-level opt-in,
permissionless, each validator makes a decision and opts in.
I think it reduces frictions massively.
And the second thing is the way we think about
what should be built on EigenLayer,
which is more of a subjective opinion,
but I think it aligns deeply with the Ethereum landscape
is to build modules,
each module being secured by the same stake,
rather than to build chains,
which is what interchain security is optimizing for.
Okay, to add to your last question about IBC in particular, right?
IBC, inter-blockchain communication, is the standard for talking between different
blockchains.
I think we need a powerful IBC port for Ethereum.
And I'd be very excited to see people build something like that, for example, on eigenlayer.
Because what you can do is once you have stakers opt-in, you can verify signatures from
all these IBC-connected chains and just make an economic certificate saying that
yeah, we all know that this is the set of signatures in these other chains.
So that's an example of what can be built on EigenLayer.
Thanks for expanding so far into it.
I think we've also been at it for a while.
I think we can slowly get to kind of wrapping up.
Maybe for the final question, we can talk a little bit about where the project is at right now.
I mean, we talked a lot about what is theoretically possible.
Maybe we can hear a bit, where are you at right now?
What's on the roadmap and like the immediate future?
The way we're building eigenlayer is initially we are building the first service ourselves.
And on launch, there will only be the one service, the data availability service we are building on top of it.
And we want to slowly open it up from being a one-service platform, to a few, like, partnered services,
to then be a self-serve platform on which anybody can come and build anything that they want.
So we'll see the first service launch hopefully mid-next year,
and then we'll have a whole bunch of other services on board in the months following.
So that's the roadmap. Right now we are in internal test net.
We have a few integrations we are testing inside the internal test net.
So that's where we are.
and we'll hope to have a more public-facing test net in the months between now and launch.
Fantastic. We look forward to that. It's been an absolute pleasure to have you on,
Sreeram. I have learned a lot. This is such an interesting project. I'm excited to see where this takes you.
Thank you so much, Friederike. I've really, really enjoyed talking to you and Felix on this podcast. Hope
to be in touch in the future.
Cool.
Thank you, guys.
Thank you, Felix.
Thank you, Felix. Thank you, Friederike.
Thank you, Sreeram.
Thank you for joining us on this week's episode.
We release new episodes every week.
You can find and subscribe to the show on iTunes, Spotify, YouTube, SoundCloud,
or wherever you listen to podcasts.
And if you have a Google Home or Alexa device,
you can tell it to listen to the latest episode of the Epicenter podcast.
Go to Epicenter.tv slash subscribe for a full list of places where you can watch
and listen.
And while you're there, be sure to sign up
for the newsletter, so you get new episodes in your inbox as they're released.
If you want to interact with us, guests, or other podcast listeners, you can follow us on
Twitter. And please leave us a review on iTunes. It helps people find the show, and we're
always happy to read them. So thanks so much, and we look forward to being back next week.
