Bankless - Eigenlayer In 2024 With CEO Sreeram Kannan
Episode Date: December 27, 2023

Ethereum validators are earning yield for nearly a million wallets, but what happens if you can restake that ETH for even more yield? EigenLayer is one of the most ambitious projects in crypto and crossed a billion dollars in deposits while recording this episode. Joining us today is special technical co-host Mike Neuder to discuss all things EigenLayer with Sreeram and Teddy from the team.
-----
🏹 Airdrop Hunter is HERE, join your first HUNT today https://bankless.cc/JoinYourFirstHUNT
-----
BANKLESS SPONSOR TOOLS:
🐙 KRAKEN | MOST-TRUSTED CRYPTO EXCHANGE https://k.xyz/bankless-pod-q2
🦊 METAMASK PORTFOLIO | MANAGE YOUR WEB3 EVERYTHING https://bankless.com/MetaMask
⚖️ ARBITRUM | SCALING ETHEREUM https://bankless.com/Arbitrum
🔗 CELO | CEL2 COMING SOON https://bankless.com/Celo
🗣️ TOKU | CRYPTO EMPLOYMENT SOLUTION https://bankless.cc/toku
🌐 Layer Zero V2 Launch https://bankless.cc/LayerZeroLabs
-----
Timestamps:
00:00:00 Intro
00:00:26 Episode Overview
00:09:20 Current State of EigenLayer
00:12:01 Cap Limits and Accepted Tokens
00:19:42 Economic Security
00:25:51 Eigen Incentive Structures
00:34:12 What Is EigenDA?
00:40:29 Benefits From EigenDA
00:45:28 Why EigenDA?
00:55:31 Throughput and Slashing
01:05:58 Finality Layer
01:10:51 AVS Aggregation
01:16:53 The Veto Committee
01:21:04 Neutrality
01:24:40 Networks in 2024
01:28:27 When Mainnet
-----
Resources:
Mike: https://twitter.com/mikeneuder
Sreeram: https://twitter.com/sreeramkannan
Teddy: https://twitter.com/TedBreyer
-----
Not financial or tax advice. See our investment disclosures here: https://bankless.com/disclosures
Transcript
Let's keep all the blockchain wars, L1, L2, L3, like aside, right?
Just let this sink in.
It's insane.
It's amazing.
It's unusual.
It's like we're becoming this much, much more coordinated.
In fact, as a species, I think our evolutionary advantage is that we're able to cooperate
at a scale that is simply not possible for other species in a flexible way.
Welcome to Bankless, where we explore the frontier of EigenLayer.
EigenLayer is just about two quarters away from mainnet, and the excitement and demand for EigenLayer
has been relentlessly crescendoing.
And while recording this very episode with Sreeram and Teddy from the EigenLayer team,
EigenLayer passed a billion dollars in deposited value into the EigenLayer system,
making the future of EigenLayer in 2024 a very interesting topic to explore here on the show today.
And to help me explore a more technical topic, I brought in a technical co-host.
Mike Neuder from the Ethereum Foundation is joining me today. He's a researcher at the EF. He is a milady on Twitter, and he's most known on Bankless
as my rock climbing buddy in Brooklyn. Mike, how's it going, my dude? It's going great, David. Yeah,
thanks for having me on. I've been a Bankless listener for a long time. So to be here hosting
with you is a real treat. So thanks again. Well, whenever we need a technical co-host to explore
technical topics, I always learn more than a few things. And that's definitely what happened here
on the episode today. We just finished recording the episode with Sreeram
and Teddy. What were your big takeaways? Did you get all of your questions answered? What did you think?
Yeah, I'd say my biggest takeaway was kind of a new mental model for thinking about how EigenLayer fits
into the system. And that is as a way of democratizing access to restaked rewards. So the point
Sreeram made was that in today's world, like, without EigenLayer, a dominant liquid staking token
issuer could have internalized all of that restaking yield and given it only to people who issued
restaking or liquid staking tokens with them. And this would be like a stronger centralizing force
because, you know, only that single pool would have access to those rewards. So the way EigenLayer
kind of fits into this picture is by creating an open, permissionless marketplace for both the buyers of
economic security and the sellers of economic security. And to kind of allow everyone to access it in a more
transparent way, hopefully will help democratize and distribute those rewards more evenly.
Perhaps said another way: I think that description really fits into what the
EF people care about, the Ethereum Foundation people care about, which is antitrust forces around
protocols. I think that's kind of what you're alluding to with EigenLayer, is that without
EigenLayer, there might be a monopoly, a single liquid restaking token becoming the dominant
restaking token. But I think maybe your mental model after this is that EigenLayer is kind
of the resource traffic controller for restaked assets and networks and yield and security.
Is that a fair way to articulate this?
Yeah, exactly.
Yeah, a way of kind of opening up the market and making sure that it doesn't centralize
around one single Schelling point.
Yeah.
And also another really cool point, just to kind of add on to that, is how he described
EigenLayer as a way of propagating the meme of ETH as a unit of account, right?
So the initial set of tokens that can be restaked are all denominated
in ETH. And so EigenLayer is kind of a vehicle by which ETH, as the unit of account for
economic security in the whole ecosystem, continues to be spread. That was another really cool mental
model that he brought up. Yeah, that facet specifically, I resonate with to a very
large degree. And I think that's going to be a big theme in 2024. So, Mike, I mean, we're going to
have more restaking content throughout the year, I think. It's just, it's nerd-sniped me.
I think it's nerd-sniped a lot of
people in Ethereum. What questions do you have left? What is still on the frontier of this
restaking meta that you want to explore? Yeah, I think the last thing that still sticks with me,
and this is kind of one of the first things we talk about with Sreeram, is this idea of,
like, what is economic security in the context of delegation, right? So when there's the principal
agent problem where the principal is the person who owns the stake and is restaking it,
and the agent is the node operator, how can we think about economic security when the slashing
is associated with the node operator, not the person who actually owns the capital that's at
risk? So, yeah, I kind of want to keep deep diving on that. And as Sreeram mentioned, that applies
beyond just EigenLayer. That applies in ETH delegated staking and across the board.
I think the principal agent problem is one of the main problems that plagues not just crypto,
although definitely crypto, but really humanity at large. And now we are also discovering it
inside of the EigenLayer system.
Guys, we're going to get right into the episode with Sreeram and Teddy from the Eigen Labs team.
But first, quick disclaimer: me and Ryan are both advisors to EigenLayer.
All Bankless disclosures are available at bankless.com slash disclosures.
And with that, let's get into the episode.
Kraken knows crypto.
Kraken's been in the crypto game for over a decade.
And as one of the largest and most trusted exchanges in the industry,
Kraken is on the journey with all of us to see what crypto can be.
Human history is a story of progress.
It's part of us, hardwired.
We're designed to seek change everywhere, to improve, to strive. And if anything can be improved, why not finance?
Crypto is a financial system designed with the modern world in mind. Instant, permissionless,
and 24-7. It's not perfect, and nothing ever will be perfect. But crypto is a world-changing
technology at a time when the world needs it the most. That's the Kraken mission, to accelerate
the global adoption of cryptocurrency, so that you and the rest of the world can achieve financial
freedom and inclusion. Head on over to kraken.com slash bankless to see what crypto can be.
Not investment advice. Crypto trading involves risk of loss.
Cryptocurrency services are provided to U.S. and U.S. territory customers by Payward Ventures Inc.
(PVI), doing business as Kraken.
Introducing USDV, a better type of stablecoin.
Currently, billions of dollars in stablecoin yield each year are paid to Tether, Circle,
and other central issuers of major stablecoins.
But what if yield could be shared with the protocols that use it?
Those protocols, in turn, can decide how to reward their users.
USDV shares its yield with a community of apps and developers that mint it.
Every USDV is backed one-to-one by U.S. Treasury bills, which pay yield.
This yield flows out to the community of USDV issuers,
so your protocol or app can get paid for helping end users convert other stables into USDV.
This works thanks to a breakthrough technology called ColorTrace from Layer Zero.
Without it, it was impossible to attribute users of a token with a specific issuer.
But now we can.
USDV is live on Ethereum, Optimism, Arbitrum, and other chains,
and it's already available on over 20 exchanges, such as Curve, Bitget, Velodrome,
and Stargate. Start participating in the yield from treasury-backed stablecoins at bankless.com
slash usdv.
Celo is the mobile-first, EVM-compatible, carbon-negative blockchain built for the real world,
and now something big is happening.
Introducing the Celo Layer 2.
It's a game-changing proposal that's going to bring Celo's rapidly growing ecosystem home
to Ethereum.
Vitalik has shared his excitement for the Celo Layer 2 on the Celo forum, and so has Ben Jones
from Optimism.
But why?
The Celo Layer 2 will bring huge advantages, like a decentralized sequencer, off-chain data availability, and one-block finality. What does all that mean? Rock-solid security,
a trustless bridge to Ethereum, and more real-world use cases for Ethereum without compromise.
And real-world adoption is happening. Active addresses on Celo have grown over 500% in the last six months.
With the Celo Layer 2, gas fees will stay low, and you can even pay for gas using ERC20 tokens.
But Celo is a community-governed protocol. This means that Celo needs you to weigh in and make your
voice heard. Join the conversation in the Celo forum. Follow @Celo
on Twitter and visit celo.org to shape the future of Ethereum.
Bankless Nation, I'm excited to introduce you to Teddy Knox, a research engineer over at
EigenLayer working on the EigenDA team. That's the data availability team.
Previously, Teddy was working inside of the Cosmos ecosystem and later as a protocol specialist
over at stakefish, and he has joined EigenLayer bringing all of his previous expertise into the
world of restaking. And with EigenDA as the first AVS developed in-house by EigenLayer, his
skills are being put to the test. Teddy, welcome to Bankless.
Thanks for having me, David. And returning to Bankless, we have Sreeram Kannan, the father of modern
restaking. Sreeram was a professor at the University of Washington, where he ran a lab focusing
on information theory and its applications in communication networks, machine learning, and
blockchain systems. But eventually, the nerd-snipe of cryptoeconomics got him like it got the
rest of us. And he started EigenLayer in 2021 in order to open up a new dimension of trust networks
built on Ethereum.
Sreeram, welcome back to Bankless.
So we're excited to be here, David.
Guys, I'm really excited for this conversation.
The excitement around EigenLayer has definitely been heating up.
And there's been a lot of things happening inside of the EigenLayer ecosystem.
And so today on the show, I kind of just want to get a download as to where things are
and where things are going with the world of EigenLayer as it approaches real, live
production, the mainnet, all of the cool things that are going
to impact Ethereum, and all the trust it's going to bring.
So I kind of want to start just getting a high-level snapshot of where we are with
EigenLayer.
Sreeram, I'll start with you.
Just the current state of EigenLayer development.
Where are we on the roadmap?
What is in the near-term roadmap?
And what are people over on the EigenLayer side of things excited about?
Yeah.
A few things. Number one, on the mainnet: we launched the protocol, just the
staking side, on mainnet, you know, around June, July.
And, you know, we started conservatively. It was a guarded launch with a small TVL cap, and we've been
successively raising that over time as we test the stability of the protocol. And so there was a cap raise
a day before yesterday of this recording, and I think we are now at one billion dollars of TVL for
restaking. So that is on the mainnet. The broader ecosystem of EigenLayer comprises stakers, node operators,
people building new services, and our own service called EigenDA.
And all of these are live on our public testnet,
where, you know, stakers have staked and delegated to node operators.
Either the node operator can be themselves or they can delegate to a third-party node operator.
We have a bunch of really strong node operators from the blockchain ecosystem:
Blockdaemon and Coinbase Cloud, Google Cloud, P2P, Figment,
all the major operators, on our testnet.
And also, our service, EigenDA, is live on the testnet.
So this is a data availability service, which is intended to expand the data bandwidth
available for Ethereum rollups and layer twos.
And then finally, anybody can build and deploy actively validated services, which are basically,
you can think of them as eigen apps, like applications, but these applications are not
necessarily consumer-facing. These applications will be used by consumer-facing applications.
These could be oracles, data availability, bridging, finalization services, all these kinds
of things. So that's where we are on the ecosystem: the testnet is public and live.
We are going to take this exact same configuration to the mainnet between
Q1 and Q2, depending on audits and hardening. So very excited to have the full ecosystem kind of get
together to start up more open innovation.
Yeah, congrats on the recent raise of the amount that you're allowed to restake.
So just to add some color here for the listener: there's about 447,000 ether worth of
restaked tokens.
So yeah, that's almost exactly $1 billion.
And about 200,000 of that, so nearly half of that, is stETH.
So I was just kind of curious how you choose the different limits for the different
liquid staking tokens that you allow people to restake.
And also a follow-up question: why did you choose only ETH-denominated tokens?
Like, have you thought at all about people restaking, like, USDC or other, you know, tokens?
Because generally speaking, like, it's just the value of the token, more so than the fact that it's ETH-denominated, that adds value to the system.
Yeah, absolutely. Thanks, Mike.
Also excited to have this conversation with Mike here.
Why are we choosing this particular set of tokens?
Why ETH-denominated?
How do we choose the caps?
All kinds of complex questions.
But the first thing is we chose a guarded launch so that we can test the protocol at various levels of TVL and safety.
So that's the first thing.
And we chose the liquid staking protocols to have a cap, whereas native staking does not have a cap.
So native staking is uncapped.
This is because native staking is already very complex
to actually go and execute, because when you stake in the beacon chain,
you have to set the withdrawal credentials to the EigenPod.
And furthermore, withdrawal lags exist on the EigenLayer platform.
So whenever you want to withdraw any unit of ETH or any other token from the EigenLayer staking
platform, it takes seven days before you can withdraw it.
This withdrawal lag is there so that if, you know,
when you're staked and providing services to operators,
if there's anything malicious that you've done,
you can be slashed within this period.
So, you know, standard in all kinds of staking protocols.
But it also acts as a measure of safety for us,
because, you know, actions do not happen instantaneously.
Like if you're running a bridge, you know,
who knows, somebody can drain its pool's TVL, like, instantly.
Whereas staking is a necessarily long-term activity.
So having this kind of, like, a one-week withdrawal lag gives us a measure of safety that, you know, other protocols
simply may not be able to achieve, just because the timescale of staking is fundamentally very
different from the timescale of other kinds of financial activities. But adding on to this,
when you have native restaking, you have the additional lags on Ethereum itself, right,
because you have to go and withdraw from the beacon chain, so the lag becomes more noticeable.
So all of this means, as far as the safety limits are concerned, we can be more aggressive
on the native restaking than we can be on liquid restaking.
So that's why the native staking is uncapped.
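To make that lag concrete, here is a minimal sketch of a seven-day withdrawal queue in Python. The names and structure are illustrative assumptions for this write-up, not EigenLayer's actual contract interface, which is implemented in Solidity smart contracts.

```python
import time

# Hypothetical sketch of the seven-day withdrawal lag described above.
# Names and structure are illustrative, not EigenLayer's actual contracts.
WITHDRAWAL_DELAY_SECONDS = 7 * 24 * 60 * 60  # seven days

class WithdrawalQueue:
    def __init__(self):
        self.pending = {}  # withdrawal_id -> (staker, amount, queued_at)
        self.next_id = 0

    def queue_withdrawal(self, staker: str, amount: float) -> int:
        """Staker signals intent to withdraw; funds stay slashable meanwhile."""
        wid = self.next_id
        self.pending[wid] = (staker, amount, time.time())
        self.next_id += 1
        return wid

    def slash(self, wid: int, fraction: float) -> None:
        """A fault proven during the window burns part of the pending stake."""
        staker, amount, queued_at = self.pending[wid]
        self.pending[wid] = (staker, amount * (1 - fraction), queued_at)

    def complete_withdrawal(self, wid: int) -> float:
        """Release funds only after the full delay has elapsed."""
        staker, amount, queued_at = self.pending[wid]
        if time.time() - queued_at < WITHDRAWAL_DELAY_SECONDS:
            raise ValueError("withdrawal delay has not elapsed")
        del self.pending[wid]
        return amount
```

The point of the window, as described in the conversation, is that slashing evidence has time to land on chain before any stake can exit.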
And, you know, we had to decide on some cap for all of these different services.
And we just chose, you know, these numbers based on market representation: we do know
that some LSDs are more dominant than the others,
so we don't want to set them all very low,
but we want to also have representation of multiple different liquid
staking tokens in the platform. So that's why we did that. Regarding the question of why restrict
to LSTs: the question's premise is absolutely right. EigenLayer, you know,
even though we popularly call it a restaking platform and that's a narrative, the fundamental thing
is it is a permissionless, programmable staking platform. It's staking. You stake your ETH. You could
stake your USDC, you could stake a bond, you could stake whatever you want. It is programmable. So anybody
can come and program what the staking conditions are, and it's permissionlessly programmable.
It's not programmed by us or anybody we know.
Anybody can come and create these staking and slashing conditions.
So yes, the premise is absolutely right that EigenLayer can incorporate all kinds of tokens.
But the reason we focus on, you know, the ETH and ETH related tokens to begin with is that we think,
number one, clearly there is a big market opportunity there.
That, you know, with ETH, there is, you know, a lot locked in LSDs as well as native staking.
And when you're promising to validate Ethereum, you might as well promise to validate some of these other networks.
But more broadly, I think we are also trying to support a lot of the services for the Ethereum ecosystem.
And when your risks are denominated in ETH,
it is much better for your underwriting economic safety mechanism
to also be denominated in ETH.
Imagine I'm doing, like, a 100,000 ETH transaction
between one rollup and another rollup,
and you want to say, like,
hey, I have enough economic safety out of EigenLayer
to do this transaction.
Now, if I know that I have LSTs worth maybe 120,000
ETH backing this claim, that's actually a much more rigid mapping than to say,
oh, I have 100,000 ETH, but I have some X dollars of USD backing it, because now I have to
account for the volatility and slippage between these two different tokens over the period of,
you know, the collateral and unwrapping. Add to this the capital efficiency of LSTs,
because an LST is already earning a certain amount of reward. We found that this
is the best configuration to start this platform off with.
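A rough numerical sketch of the point being made here, with made-up prices: collateral denominated in the same asset as the transaction keeps its coverage ratio under ETH price moves, while dollar-denominated collateral does not.

```python
# Illustrative arithmetic only; prices and amounts are made up.
transfer_eth = 100_000      # value being secured, in ETH
collateral_eth = 120_000    # ETH-denominated (LST) collateral

# ETH-denominated collateral: coverage ratio is price-independent.
for eth_price_usd in (1_000, 2_000, 4_000):
    coverage = collateral_eth / transfer_eth
    print(f"ETH collateral @ ${eth_price_usd}: coverage = {coverage:.2f}x")

# USD-denominated collateral: coverage moves with the ETH price.
collateral_usd = 120_000 * 2_000  # sized at an assumed $2,000/ETH
for eth_price_usd in (1_000, 2_000, 4_000):
    coverage = collateral_usd / (transfer_eth * eth_price_usd)
    print(f"USD collateral @ ${eth_price_usd}: coverage = {coverage:.2f}x")
```

Under these assumed numbers, the ETH-denominated collateral stays at 1.2x coverage at every price, while the dollar collateral falls to 0.6x if ETH doubles.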
Sreeram, is this just an articulation that the ether unit of account has network effects?
And so it's just easier to use that unit of account, because the risk is denominated in
ETH, and the collateral is denominated in ETH in these networks.
People tend to think in ETH.
And so while it doesn't necessarily need to be ETH, it just kind of makes sense to be
ETH.
Is that just a fair summary?
That is absolutely right.
And this is what we want to incentivize the most. And so, you know, the idea being that, over time, we are going to
completely make this permissionless. Like, anybody can list any token, and each AVS can decide how to
relatively value these tokens. You know, somebody may not like to use USD. They may only want to use
certain LSTs. Some people may want to use any of them as long as they have enough economic value.
So this is up to the services. So we want to get out of like this layer of saying,
hey, you can only do this or that.
But, you know, we just have to steward this platform in the beginning.
To add to one of David's points: I think when people think of the network effects
of ETH, this is a new dimension of network effect of ETH, which is that when you are
transacting and denominating in ETH in the system, that means the right backing collateral
for economic safety and validation is also ETH.
So this is a network effect between the monetary premium of ETH, which is that it is used as a unit of denomination, and the utility of ETH, which is that it is actually used as the backing system for economic safety. I think this is a new, I would say, emergent effect that EigenLayer brings to this market. So that actually strengthens the dominant position of ETH.
Cool. Yeah. And just to kind of double-click on this, you know, economic safety, economic security point.
I think we might have talked about this offline,
but just to kind of bring it into this conversation,
I guess one thing that always feels a little weird
about the meme to me is the fact that
the economic security denomination is in ETH,
and like the owners of that ETH
aren't necessarily the ones running the services
that could be slashed, right?
So this is the classic principal agent problem.
It shows up in Ethereum Staking, too, right?
So I guess how do you think about economic security
when the people who are at risk of being slashed
aren't actually the ones doing the task of the AVS operation.
They're the ones who the capital was delegated to,
but they're not actually the owners of the capital itself.
This is a great question,
and I think maybe one of the most important
for our entire field to actually consider and understand.
So I wouldn't claim to have simple answers to this question.
So to rephrase this question, the idea is economic safety is coming because somebody is putting down their stake and then running the node operations, let's say, themselves, and saying that, hey, if I don't run these operations correctly, then I'm willing to lose my ETH.
So the first point I want to bring here is that this, if the staker and operator are the same person, this is a very unusual type of risk.
I call this endogenous risk.
Endogenous risk means, you know, unlike going and putting your ETH into a lending platform
with, you know, a 10x margin position or 100x leverage, where you're underwriting certain kinds
of price volatility risk.
That's what you're doing when you're doing that.
When you're staking in the EigenLayer platform, and EigenLayer is constrained to validation tasks,
you are underwriting endogenous risk.
Endogenous means something that you do yourself,
that you can control yourself. If you are not being malicious, and if the protocol is correct, you will not get slashed. It's very different. This is why the usual mental model of people thinking, oh, this is leverage, is not quite accurate, because, you know, it is endogenous, whereas all other forms of risk that people are used to, when you think of rehypothecating stake or rehypothecating your house or any of these, are subject to exogenous price risks.
Okay, that's number one. But the risk is purely endogenous only if the staker and operator are the same. Like, that's what Mike is alluding to here. And it's absolutely true. The staker and operator have to be the same or, in our view, have to be inside the same trust zone. So the staker has to trust the operator that the operator will do right by them. The fact that the staker and operator are not necessarily the same means now they have to
establish some other mechanism of trust between themselves to actually make sure that, I will
delegate to somebody while putting my ETH at risk. So these mechanisms can be manifold. And one
mechanism is social or legal. Oh, there are major operators and they're legally regulated and
they're not going to go and do, like, something which is provably malicious.
And this is, I think, very important, and people in crypto don't fully appreciate it: among the set of, like, you know, ways in which a company or a system can cheat, they usually choose to cheat in ways that are not observable.
Because, you know, observable means like you're liable.
And what these systems do is make it completely transparent because there is a slashing condition.
There is an observation that you actually double-signed this block, or whatever the set of things are. So it makes it perfectly naked that
you're cheating. Like, this doesn't happen very often. I think this is something when people think
about, oh, you know, all these Wall Street guys, they do this and that and all that. Nobody goes
and does something where it's perfectly universally observable that they're actually cheating. Like,
this is very important. So how do you solve the principal-agent problem? The real-world
mechanisms are: hey, I'm in a certain jurisdiction, I trust certain other, like,
entities outside my blockchain protocol, and I'm therefore going to delegate to them.
This might be one mechanism. Another mechanism is they use technological substrates to actually
minimize the principal-agent problem. For example, we're working with a project
called Cubist to build anti-slashers. Anti-slashers are this idea that, hey, there is a piece of
code that simulates the slashing conditions and then makes sure that when I'm issuing a
signature, the slashing conditions will not be violated. And this piece of code alone runs inside
a trusted execution environment, like an Intel SGX or an ARM TrustZone. So what this does is
it gives a sense of correctness between the principal and the agent, because even if the agent
wants to manipulate it, they're still running it inside the TEE. So therefore, they cannot really cheat
the principal. And in our platform, we have a protocol called Puffer, which is based on
trusted execution environments, and they are actually doing liquid staking for Ethereum itself and also
restaking based on these TEEs. So those are, you know, two different ways: number one is legal and social, and number two
is technical. There's also a third way, which is economic, which is the Rocket Pool way,
which is saying, hey, yeah, you know,
the principal is going to lose something, but the agent is going to lose something too.
So, like, you just try to correlate the fates of these two people.
But in our, like, fundamental analysis of the economics, this really only works if the
slashing is bounded for some reason or the other.
And with EigenLayer being a fundamentally economic safety platform, it's not clear
what these bounds will be.
So those are the three different ways, social, technical, and economic, to minimize these kinds of principal-agent risks.
And I think this is a generic question, not just for EigenLayer, but for the entire field to actually answer.
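Here is a minimal sketch of the anti-slasher idea just described: a signer that simulates a slashing predicate before releasing any signature. In the systems Sreeram mentions, this logic runs inside a TEE so the operator can't bypass it; the class name and the double-sign rule below are illustrative assumptions, not Cubist's or Puffer's actual code.

```python
# Hypothetical anti-slasher sketch. In the systems described above, this
# check runs inside a trusted execution environment (TEE) so that the
# operator cannot bypass it. Names and the slashing rule are illustrative.
class AntiSlasher:
    def __init__(self, signing_key: str):
        self.signing_key = signing_key
        self.signed_slots = {}  # slot -> block_root already signed

    def would_violate(self, slot: int, block_root: str) -> bool:
        """Simulate one slashing condition: double-signing the same slot."""
        prior = self.signed_slots.get(slot)
        return prior is not None and prior != block_root

    def sign(self, slot: int, block_root: str) -> str:
        if self.would_violate(slot, block_root):
            raise PermissionError("refusing signature: would be slashable")
        self.signed_slots[slot] = block_root
        # Stand-in for a real BLS signature.
        return f"sig({self.signing_key},{slot},{block_root})"
```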
Yeah, for sure.
And just kind of one more high-level question before we dig into some more of the details of EigenDA and stuff.
Yeah, one thing I think that comes into question when thinking about restaking is that it does fundamentally change the incentives of being a staker in Ethereum.
Right. So if you think of the protocol as kind of like having two incentives now, it has the
consensus layer rewards and then the execution layer rewards, like consensus is for participating in the
block, you know, voting on blocks. What's the head of the chain? Execution rewards are kind of these
congestion fees, like gas fees, and also the MEV rewards given to proposers. EigenLayer kind of
tacks on a third set of rewards, right? Like, these are restaking rewards. So the main issue I see potentially
with this is that these rewards are outside the purview of what the protocol can see and
what the protocol is designed for, right? So if this kind of warps the incentives of the protocol,
it might, for example, increase the demand for staked ETH significantly. Or also it might make it
so that solo staking, kind of the opportunity cost of solo staking is very high because restaking
yields are bigger than the other two components of the reward. And so in order to be competitive
as a staker, you also need to be a restaker.
So, you know, these are big, big kind of themes that I've been thinking about,
but would be curious to hear your high-level response on these before we dive deeper.
Yeah, absolutely.
I think also complex question and landscape to think about,
and filled with second-order effects, which are not totally anticipatable.
But I'll start with one thing.
This is the hard thing about building permissionless platforms:
who knows what somebody else can do.
When Ethereum was being built, you know, MEV was one example,
liquid staking is another example, restaking is another example,
where these are emergent effects that, you know,
could not be anticipated fully.
So having said that, I want to make a bunch of observations.
So the first observation is that anything you could do with restaking,
you can already do with liquid staking,
right? One major LST, the dominant LST, could just simply say, hey, you know, the stake
is not only being used for, you know, Ethereum staking, but I'm also making this promise
as the dominant LST protocol that ABCD will happen, right? And this leads to a completely different
set of effects, which is that like that LST, because it has figured out that it can do ABCD,
now completely consolidates the market because it is able to tack on additional things.
This is exactly the kind of problem that MEV-Boost was trying to solve, which is that if
you're a major player, you can do out-of-protocol deals. And, you know, if you're a smaller player,
you cannot do out-of-protocol deals, and you're simply completely subverted by an out-of-protocol
deal. So just like MEV-Boost and the PBS roadmap basically try to democratize the opportunity
for making these out-of-protocol deals,
EigenLayer is an opportunity to democratize
these out-of-protocol deals
and make them as formal, transparent, clear,
and verifiable as possible,
so that anybody can enter into these kinds of agreements,
not only the dominant player.
So that's the first thing.
Anything that you could do with restaking
could have already been done with LSTs.
The second thing: I think, you know, in order to affect Ethereum's protocol economics...
I mean, when I hear some of the concerns about, you know,
EigenLayer and restaking, it makes me wonder in one sense, because, you know,
these people are much more bullish about EigenLayer than I am. Because they're basically
making a statement that, once I formalize it, will make this clear:
they're saying that the total amount of reward and yield that will come
out of restaking should be higher than, or of exactly the same magnitude as, all the DeFi yield that would
come out of any kind of LST and other things. So it's only at that scale that this starts to
become, you know, significant. Okay. But having said that, maybe it can happen, and, you know,
we are, of course, you know, believers in the technology. That's why we're building it.
But how does it affect Ethereum's protocol economics? It does. It does
definitely warp the incentives, but it warps them less than if EigenLayer didn't exist
and one LST basically, significantly, integrated this kind of an idea inside of its own
protocol.
I think people don't see it.
Like, a lot of people on Twitter, for example, are saying, why doesn't EigenLayer commit to
self-limiting or whatever ideas?
And I think it is the same reason why MEV-Boost is a neutral platform, the same reason why PBS
has to be neutral:
there has to be a mechanism for new protocols to be built that is completely neutral, so that the playing field is level.
Because if we self-limit, the dominant LSTs, what are they going to do?
They're going to say, hey, I have to internalize this because these guys are going to self-limit.
So there are all these second-order games that people don't transparently understand,
and I'm not claiming to have all the answers for the second-order games.
But at the minimum, the observation is that the presence of a more neutral platform
democratizes restaking yield rather than centralizing restaking yield into only the dominant LST.
Now, at least, like, if I'm a home staker, I can opt into EigenLayer and then adopt at
least a few of the protocols which are lightweight and easy to run, and participate in those
additional rewards, whereas in the absence of EigenLayer, that would just simply not be possible.
So that's number two.
Number three, we know and hope that the number of such protocols is high, but we know that
there are some protocols which fundamentally rely on decentralization rather than relying purely
on economics.
EigenLayer is a highly expressive platform because it has this feature we call double opt-in.
Double opt-in means a staker and operator have to opt-in to the protocol and the protocol has to
accept the opt-in.
So double opt-in basically means protocols can express subjective opinions on who can opt in to their protocol, you know, into an AVS, as well as give additional rewards to certain people over other people.
So because EigenLayer is this highly expressive platform, and there are services which fundamentally rely on decentralization rather than fundamentally relying on economic safety, those services could actually incentivize decentralization itself.
Like, for example, one of the services building on top of us is this thing called Witness Chain,
which offers a proof-of-location protocol.
Basically, it offers a geographic location oracle, which itself is geographically decentralized.
It uses stakers and then, like, tries to measure network latencies across various nodes to certify that,
hey, you are in this zone or that zone.
Now, it's possible for an AVS to say, I want to add a geographic decentralization bonus to my reward,
and home stakers, being more geographically distributed, could potentially, you know, take part in that.
You know, other people can offer other kinds of subjective oracles which try to analyze, you know,
stake flows and stake correlation to determine whether it's the same guy staking across these
different, you know, entities, or it's actually distinct, you know, home stakers.
So all these things give me confidence that there is some amount of incentive for
decentralized home operators that can come through EigenLayer, whereas in its absence, things would actually just be significantly more centralized.
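A toy sketch of the double opt-in pattern and the decentralization bonus Sreeram just described. The acceptance policy and reward numbers are invented for illustration; on EigenLayer itself, AVS and operator registration happens through smart contracts.

```python
from dataclasses import dataclass

@dataclass
class Operator:
    address: str
    stake_eth: float
    region: str

# Hypothetical AVS with a subjective acceptance policy and a geographic
# decentralization bonus; all thresholds and numbers are illustrative.
class ExampleAVS:
    def __init__(self, min_stake_eth: float = 1.0):
        self.min_stake_eth = min_stake_eth
        self.operators: list[Operator] = []

    def request_opt_in(self, op: Operator) -> bool:
        """Side one: the operator opts in. Side two: the AVS accepts."""
        if op.stake_eth < self.min_stake_eth:
            return False  # the AVS is free to express subjective criteria
        self.operators.append(op)
        return True

    def reward(self, op: Operator, base: float) -> float:
        """Pay a bonus to regions the AVS is under-represented in."""
        regional = sum(1 for o in self.operators if o.region == op.region)
        bonus = 1.2 if regional <= 2 else 1.0
        return base * bonus
```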
That was a fantastic, just, high-level overview of, I think, some of the big questions about
EigenLayer and restaking specifically, and I want to bring Teddy into this conversation
to open up the EigenDA rabbit hole, because I think this can give a more narrow understanding
of what it means to be an AVS. And because EigenDA is being incubated in-house by EigenLayer,
there's definitely some, like, additional knowledge I want to pull out of you, Teddy, here.
So I want to ask the question: what is EigenDA?
But I want to ask it in three different ways, because I think we can kind of get three different
answers out of it.
There is EigenDA, the data availability network.
There is EigenDA, the first internally incubated restaking network by EigenLayer.
And then there's EigenDA as this very proximate data availability layer to Ethereum.
So what is EigenDA, as it needs to be for EigenLayer to incubate its own network?
Like, why does EigenDA need to be a thing internal to EigenLayer?
How does EigenDA compare to other DA layers?
Like, there are many DA layers out there.
How is EigenDA different?
What are the unique properties?
And then lastly, what does EigenDA specifically do for Ethereum, the Ethereum ecosystem,
that other data availability networks don't do?
So three questions, all about what is EigenDA.
You can start however you want to start, Teddy.
Yeah, sure.
Well, so EigenDA was both an opportunity and a necessity for Eigen Labs, because we had the plan
for EigenLayer and we needed a way to demonstrate it to the world.
We wanted to build a product that was truly useful for people, to attract stake and
other AVS projects to EigenLayer.
But on the other hand, it was also an opportunity, because we looked at the landscape of DA providers and saw an opportunity to build a DA layer from first principles
that was better.
The goal of EigenDA is a trustless, decentralized, hyperscale DA layer built on top of EigenLayer.
And, you know, alluding to one of your questions about alignment with Ethereum,
that sort of implies alignment with Ethereum, given that we're building on top of EigenLayer.
So I think the main thing people want to know is: how is it trustless, decentralized, and hyperscale?
And how does that set it apart?
I guess I'll start with trustless.
EigenDA operates on the basis of operator nodes, which
opt into the EigenDA network via EigenLayer.
And so operators are providing storage bandwidth to the EigenDA network
on the basis of the amount of stake that they have attributed to them.
So if I'm an operator and I manage to get 5% of the EigenDA network stake,
that means I'm going to be receiving roughly 5% of the data.
And this is how we achieve
this trustless quality of EigenDA: we ensure that every operator is only handling the amount of data
that it is on the hook for providing. And this is also what sets it apart from
a naive data availability committee, which, although very simple, does not provide these trustless,
decentralized guarantees. The hyperscale part is what I think is the most interesting about EigenDA,
which is that EigenDA's capacity scales with the total bandwidth of the operator set.
This means that as the number of operators joining the EigenDA network grows,
the amount of data that EigenDA can support writing and reading also grows.
And how do we do this?
I mean, most other DA layers either involve relatively simple data availability committees
or maybe involve some amount of consensus.
And we try to take a hybrid approach, where we
remove peer-to-peer consensus from the dispersal process of data.
So EigenDA can generally be thought of as this operator set, which is interacted with
via a disperser service.
So data is sent to these operators, and this disperser service, which can be understood
as something like a decentralized sequencer in a rollup analogy, is responsible for collecting the various signatures that form a data availability
proof and posting these signatures on chain to Ethereum to certify availability.
And there are several other pieces of technology which I can't go into yet, which ensure
that data is available, not only that it's stored, but that it's not being withheld,
and, you know, systems for payment and for slashing.
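A simplified sketch of the dispersal flow Teddy outlines: split a blob into chunks, hand one share to each operator, collect their attestations, and post the aggregate on chain. Real EigenDA uses Reed-Solomon erasure coding and aggregated BLS signatures; the naive splitting and string "signatures" below are stand-ins.

```python
# Simplified dispersal flow; chunking and string "signatures" are
# placeholders for EigenDA's actual erasure coding and BLS aggregation.
def split_into_chunks(blob: bytes, n_operators: int) -> list[bytes]:
    size = -(-len(blob) // n_operators)  # ceiling division
    return [blob[i * size:(i + 1) * size] for i in range(n_operators)]

def operator_attest(operator_id: int, chunk: bytes) -> str:
    # Each operator stores its chunk and signs a receipt for it.
    return f"sig(op{operator_id},len={len(chunk)})"

def disperse(blob: bytes, n_operators: int, quorum: float = 0.67) -> list[str]:
    chunks = split_into_chunks(blob, n_operators)
    signatures = [operator_attest(i, c) for i, c in enumerate(chunks)]
    # In the real system, the aggregated signatures are posted on chain
    # to Ethereum as the data availability attestation.
    assert len(signatures) >= quorum * n_operators, "quorum not reached"
    return signatures

attestation = disperse(b"rollup batch data" * 100, n_operators=10)
```

Note how there is no block production or leader election anywhere in this flow, which is the "remove consensus from dispersal" point made above.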
I want to go into the unique properties of EigenDA just a little bit more.
Just my mental model, my map for understanding EigenDA, is kind of like a danksharding sidecar,
where it has a lot of the properties that EIP-4844 and full danksharding also have,
except that it's also a separate network, and it is also secured by ETH.
So it seems to be like a very proximate replication of danksharding,
just as, like, a sidecar network running in parallel
to Ethereum. And why is it in parallel to Ethereum? Well, because it's using ether as
stake. And so it seems to be like the closest approximation to danksharding data availability,
while also retaining the security of ether, but yet it is a separate network from Ethereum
data availability. First, Teddy, is that a fair articulation? Do you want to amend that? Is that
accurate? Is that inaccurate? And to the degree that it is accurate, how is that extra useful to
Ethereum versus other, like, far more distant data availability networks that aren't so close to
Ethereum?
Sure.
Yes.
Well, that's generally accurate.
I like to think of danksharding as being sort of the public option that will eventually
arrive,
and EigenDA as being a very closely aligned private option.
So EigenDA has plans to support greater throughput than danksharding,
but in the short term, they generally align in terms of bandwidth planning.
And so why would someone use EigenDA over some sort of, like, more third-party networks
that are alternative layer ones?
Like, what benefits does EigenDA bring to the table?
Well, so EigenDA settles to Ethereum.
This means that rollups will have lower latencies when settling to Ethereum themselves.
This is one of the larger advantages.
The other is that EigenDA is going to be a generally Ethereum-aligned product going forward.
We don't have any plans to sort of try and move away from the Ethereum ecosystem.
And so when rollups who are already deciding to commit to Ethereum use EigenDA,
they can be assured that we're planning on the basis of 4844 and danksharding in the future.
You know, you asked about how we got started with EigenDA, and I think Teddy gave a good answer there,
which is that it is not only a proof of concept that, hey, you can build something interesting,
but also a proof of value, that you can build something useful on top of EigenLayer.
And value is needed when you want to get this kind of a platform
bootstrapped. You know, you want to start EigenLayer: who's going to use it, who's going to come and
build protocols on top of it? We have at least one useful kind of product on top of it, and that's
EigenDA. But how did we actually arrive at this? There's an interesting story. This was back in
2021 when we were working on, you know, just coming up with some of the core ideas around
EigenLayer and restaking. And, you know, I had decided to fund this startup, just, you know,
bootstrap, use my own money to do it. And we were several, you know, maybe more than six months
down the journey at that time. And I was talking to many VCs. And one of the VCs I talked to
was Kyle Samani from Multicoin. And I gave this pitch, hey, you know, here's Ethereum. You can
stake and then you can use it for other networks.
and you can have these kinds of slashing and things like that.
And he said, oh, these are just looking like fraud proofs, and optimistic rollups, they suck.
They're not going to work.
And I was curious why he said it.
And then I asked him, why do you think optimistic rollups suck and they're never going to work?
And he said, because they're very expensive.
And I hadn't, like, I was not paying close enough attention to know that
optimistic rollups are more expensive.
And I said thank you and, like, finished the call.
And then we went back and called the team.
And I said, hey, I heard that optimistic roll-ups are more expensive.
Can we dig into why this is the case?
Just go in and look at it.
And we found it's all just data costs, right?
Because, you know, you don't even have to write a proof to Ethereum.
So why is it more expensive?
And then I looked at it.
And I had actually been working on data availability for a much longer period,
as an academic. In fact, one of the first papers on, you know, fraud proofs and data availability,
which Mustafa and Vitalik and others wrote, I was actually on the program committee,
and I championed this paper to be accepted at Financial Cryptography. And so we've been thinking
about data availability for a long time. And I knew that of all the things we know how to scale,
data availability is the one we best know how to scale. And looking at the cost, it's like,
oh my God, there's a huge opportunity, because we didn't know how to get this platform started.
That was the other question that we couldn't answer in any of these VC pitches:
oh, you build this platform, who's going to come and build anything on top of it?
I was like, I don't know.
It's useful.
From that, it became: we are going to build EigenDA, the first data availability service, because
we know exactly how to scale data availability and we have this platform.
In fact, we even had a paper called ACeD, a data
availability oracle, like, you know, two years before this episode.
And that's basically saying that, hey, this is an off-chain network
that attests data availability to Ethereum.
And we just didn't know, like, how to bootstrap this.
And EigenLayer was designed to solve the bootstrapping problem.
So we became our own customers to actually then build EigenDA.
And I did tell Kyle this, like, a few months back.
And he's like, most people, when I say something like that, they just get annoyed,
but, you know, you're the only one who took it positively and came back and thanked me after some time.
I guess.
Yeah.
So going to the other question on why EigenDA: one of the things, as I think Teddy was alluding to earlier, is that EigenDA is built as the only Ethereum-centric data availability layer.
Okay.
What does it mean to be ETH-centric, Ethereum-centric?
So it's Ethereum-centric in many ways.
You know, other data availability layers are actually blockchains.
You know, they say modular, but it's actually an entire blockchain.
There is a consensus layer.
There is a new asset.
There is a new trust layer.
There is also a new data availability system, all of them packaged.
And actually packaging it brings you certain superpowers.
And the superpower is if you natively run a roll-up on top of that blockchain,
you actually inherit much better security.
Because if data is not available in that system,
let's say something like Celestia,
if data is not available,
then the blockchain itself will fork around such a failure.
So if you're a native roll-up on Celestia,
you actually get a lot of security guarantees.
But if you're an Ethereum roll-up,
your ordering primitive in Celestia has no bearing on it.
From the viewpoint of Ethereum,
everything is just a committee. Like, you know, if you're an Ethereum rollup, you have a rollup contract sitting
on the Ethereum blockchain, and it's just viewing some certificate from some committee. Like,
that's all it can do. A contract cannot do data availability sampling. A contract
cannot do whatever set of features are actually available on that other platform.
So instead of, like, clubbing all these things together, clubbing two separate goals, which are: I want to
build a blockchain of my own, and I want to provide data availability services to the
Ethereum ecosystem, we started from first principles.
How would one build just a data availability layer which
adjoins the Ethereum network?
So the first thing is, rollups rely on Ethereum for ordering and consensus.
Therefore, you should not need to have a separate consensus.
A separate consensus adds nothing to your own, you know, Ethereum rollup ecosystem.
Okay.
So that was the first thing:
remove consensus.
And once you remove consensus, you see that the design space for actually maximizing throughput
and reducing latency and all these other things explodes rather than reduces.
Because now you don't have to do another thing, you know, another module, consensus being
another module, you don't have to do it.
You only have to do data attestations.
You just have to certify that data is available.
Now you can start to think from first principles about
what set of things you can do. I can give some examples. I think Teddy alluded to this earlier.
The idea is, for example, in EigenDA, the way it works is there is this committee. This committee
is staked. What do you stake? There's a natural thing to stake, which is ETH, and we have EigenLayer.
So that's one side of this. The second side is, because we had to think wearing the Ethereum hat,
we have to make sure that nodes that participate in EigenDA need to have very limited resources,
or can manage with very limited resources.
Now, people say, like, oh, this is a big constraint.
Like the Solana people, for example, say, oh, this is a big constraint.
But, you know, there is a power in adopting constraints and then seeing actually, like, what you can do,
because these constraints are meaningfully adopted, right?
They're adopted for: I want to maximize decentralization, or whatever.
And then ask, like, let's say each node has a low amount of bandwidth, but you have an insane number
of nodes.
Like, you know, Ethereum has whatever, 900,000 validators,
right? They may not all be distinct nodes, but the whole system is set up in such a way that
if you have 32 ETH, you should not need a lot of bandwidth to participate in the network.
So one of the things we said is, hey, I want to adopt decentralization, but if you centralize,
I'm going to make you pay more, in the sense that now you have to store more data,
now you have to expend more bandwidth, if you're more centralized.
Till now, even if you're validating Ethereum, if you're a single operator running 10,000
validators versus if you're a single operator running a single validator, both of them have the same
expense basis. So there's, like, an inevitable, you know, drift to centralization, because if you
centralize, you're more efficient. EigenDA breaks this. If you centralize and you have
30% of the stake and somebody has 0.1% of the stake, you have to do 300 times more work to hold 30%
of the stake than you would if you're holding 0.1% of the stake.
So when we think of Ethereum-centric: the first thing, no ordering,
let's just do data availability.
The second thing, what is the asset to use?
Use ETH.
The third thing, what is the principle, the philosophical substrate on which you're
operating? Get as much decentralization as possible, get node operators to participate with
very limited resources, while minimizing the benefit of centralization.
So these are some of the decisions that went into building
something like EigenDA. There are certain emergent benefits from it.
For example, when you want to do ordering and consensus in your own chain,
one of the things you have to do is you have to lock the state,
have a leader, the leader proposes the next block,
and that's how you build the chain.
Whereas in EigenDA, what happens is, because we are not doing ordering,
data attestations or data availability claims can be completely parallelized.
So David sends a data availability claim to EigenDA nodes.
They all receive a chunk of the data availability claim and send David a certificate.
And similarly, Mike is sending a data blob: he splits it into small chunks,
sends them to the EigenDA nodes,
and they all send him the certificate.
They do the same thing for Teddy, all of them in parallel.
So you're not deadlocking on the state at all.
Second, the censorship resistance of EigenDA is not gated by some leader.
There is no need for a leader.
It is what we call a self-leader protocol.
Like, if you're a sequencer, you can yourself decide, hey, I just
encode and send the data chunks myself.
I don't need to wait for, like, you know, that leader,
where the leader becomes the MEV center of the entire data availability system.
So essentially, what we found is that being a data availability layer adjacent to Ethereum
as a first-order design principle unlocks a variety of different things
that, you know, we had to actually, like, innovate on from first principles.
MetaMask Portfolio is your one-stop shop to navigate the world of DeFi.
And now bridging seamlessly across networks doesn't have to be so daunting anymore.
With competitive rates and convenient routes,
Metamask portfolio's bridge feature lets you easily move your tokens from chain to chain,
using popular layer one and layer two networks.
And all you have to do is select a network you want to bridge from and where you want your tokens to go.
From there, Metamask vets and curates the different bridging platforms
to find the most decentralized, accessible, and reliable bridges for you.
To tap into the hottest opportunities in crypto,
you need to be able to plug into a variety of networks,
and nobody makes that easier than Metamask portfolio.
Instead of searching endlessly through the world of bridge options,
click the bridge button on your Metamask extension
or head over to metamask.io slash portfolio to get started.
Arbitrum is the leading Ethereum scaling solution
that is home to hundreds of decentralized applications.
Arbitrum's technology allows you to interact with Ethereum at scale,
with low fees and faster transactions.
Arbitrum has the leading DeFi ecosystem,
strong infrastructure options,
flourishing NFTs,
and is quickly becoming the Web3 gaming hub.
Explore the ecosystem at portal.arbitrum.com.
Are you looking to permissionlessly launch
your own Arbitrum orbit chain?
Arbitrum allows anyone to utilize
Arbitrum's secure scaling technology
to build your own orbit chain,
giving you access to interoperable,
customizable permissions with dedicated throughput.
Whether you are a developer, an enterprise,
or a user, Arbitrum Orbit
lets you take your project to new heights.
All of these technologies leverage the security and decentralization of Ethereum.
Experience Web3 development the way it was always meant to be.
Secure, fast, cheap, and friction-free.
Visit arbitrum.io and get your journey started in one of the largest Ethereum communities.
Are you launching a token?
Is it already live?
How are you managing the legal and tax for providing token awards for your team?
Toku simplifies everything about managing token grant compensation,
and you can get started with them for free.
You'll have access to top-notch legal and tax support to handle the distribution and management of tokens for your team.
Toku caters to every step in the process, from user-friendly legal templates for granting tokens to tracking vesting periods and calculating withholding taxes.
Toku understands every grant structure, token purchase agreements, restricted token awards, restricted token units, token options, and all the other ones.
Toku is already simplifying this today for leading companies like Protocol Labs, DYDX Foundation, Mina Foundation, and many more.
You can learn more about how Toku can help you streamline your token management and get started for free.
Visit Toku at Toku.com slash bankless or click the link in the description below.
Sweet. Yeah. No, this is super good. And I kind of want to drill down into some of the mechanics.
So Teddy, I'll ask you two questions about kind of the day-to-day of running EigenDA.
So I guess, tying it back to 4844, the initial design is set up so that there's going to be a target of three blobs per block, three blob transactions.
Each blob will be about 128 kilobytes, and the design is conservative so that we know all Ethereum home stakers can keep running this on, you know, the same internet connection as before, just kind of alluding to what Sreeram was mentioning. So I'm just curious on the kind of numbers for EigenDA, as far as how many kilobytes per second, and also, like, me as a solo staker on my home, you know, internet connection, will it be reasonable for me to be able to run an EigenDA
restaking service and kind of participate in that network?
The second question I have is around these slashing conditions, right?
So when we're thinking about, like, the kind of settlement assurances of EigenDA,
like, someone who posts data to the service wants some guarantee that that data will be available
with, you know, some economic security. Can you just talk about what the slashing conditions
actually look like and how those violations would be resolved if someone didn't fulfill their
promise of making that data available?
Sure.
So I'll take the throughput question and then pass the slashing conditions to you,
Sreeram.
We've launched our testnet guaranteeing around 1,000 kilobytes per second,
and we've launched our network according to the design Sreeram just talked about.
The amount of bandwidth you need to run an operator is not at all related to the total
throughput of the network.
It's related to the amount of data that you have assigned to you.
And so this should make it possible for smaller operators within the EigenDA network
to still receive chunks of data and earn on their staked assets.
So on our testnet, the benchmarks we've run locally have suggested that it can support roughly up to three megabytes per second,
and we're pretty confident that we can get to 10 megabytes per second by mainnet.
You know, all of this is powered by roughly two things.
One is improving the speed at which we can encode blobs.
Each blob goes through a combination of Reed-Solomon encoding and KZG commitments,
which is somewhat expensive, but also not an optimization problem that can't be solved.
The other side is just increasing the number of operators.
And so we see it as very possible to get to 100 megabytes per second one year from the launch of EigenDA on mainnet.
Teddy, you threw out some numbers just now, like 1,000 kilobytes per second, which,
I mean, when I download stuff on my computer, I need way more speed than that.
But also, at the same time, we're talking about crypto-economics here.
And, you know, cryptography compresses stuff.
And then you started talking about, like, 10 megabytes a second, which is starting to be a number I can, like, reason about.
Just overall, like, is that a lot?
Like, how much is that?
What do we get from that?
How can we compare these things in maybe more qualitative, less quantitative ways?
Sure.
So one Ethereum block is roughly 200 kilobytes, and this is every 12 seconds.
You can do the math to figure out how much that is per second.
EIP-4844 is moving us towards something like 32 kilobytes per second.
And just to do an apples-to-apples comparison, Celestia is about 167 kilobytes per second.
So this is just comparing with the status quo.
But I think what we're looking at, just from a market perspective with EigenDA, is a situation where the cheaper block space gets, which is essentially what DA is providing for rollups, the greater the induced demand we're going to see.
Everybody knows in their heart that blockchains would be mainstream if they were cheap enough and reliable enough for people to use.
And so, you know, obviously 10 megabytes per second worth of transactions doesn't sound like a lot.
But that would represent roughly, you know, a 50x increase over the current status quo when people are using Ethereum or Ethereum rollups.
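To make those comparisons concrete, here is a minimal back-of-the-envelope sketch in Python, using only the rough figures quoted in this conversation; all numbers are the speakers' approximations, not measured benchmarks.

```python
# Rough DA throughput comparison using the numbers quoted in the episode.
# All figures are the speakers' approximations, not benchmarks.

KB = 1024  # kilobytes per megabyte step below

# Ethereum today: ~200 KB per block, one block every 12 seconds.
ethereum_kb_per_s = 200 / 12                 # ~16.7 KB/s
eip4844_kb_per_s = 32                        # target quoted for EIP-4844
celestia_kb_per_s = 167                      # figure quoted for Celestia
eigenda_testnet_kb_per_s = 1_000             # EigenDA testnet guarantee
eigenda_mainnet_kb_per_s = 10 * KB           # 10 MB/s mainnet goal

print(f"Ethereum today:       ~{ethereum_kb_per_s:.1f} KB/s")
print(f"EIP-4844 target:      ~{eip4844_kb_per_s} KB/s")
print(f"Celestia:             ~{celestia_kb_per_s} KB/s")
print(f"EigenDA testnet:      ~{eigenda_testnet_kb_per_s} KB/s")
# 10,240 / (32 + 167) is about 51x, consistent with the "roughly 50x
# over the status quo" claim above.
print(f"EigenDA mainnet goal: ~{eigenda_mainnet_kb_per_s} KB/s, roughly "
      f"{eigenda_mainnet_kb_per_s / (eip4844_kb_per_s + celestia_kb_per_s):.0f}x "
      f"the combined 4844 + Celestia figures")
```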
One quick follow-up before Sreeram talks about slashing conditions,
because I'm sure that's going to be an interesting part of the conversation.
So you mentioned that if you have more restaked ETH as an EigenDA node, then you're responsible for more data, right?
I'm just trying to understand the mechanic here.
Would that potentially incentivize large node operators to kind of, like, Sybil themselves
and split into many smaller operators, to not have as much data responsibility
but still earn the same rewards as people solo staking?
Or is there kind of just a linear scaling on the amount there?
It's linear on the amount.
It's basically proportional to the amount of stake.
Okay, cool.
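As a minimal sketch of what that linear scaling could look like, consider assigning chunks of each dispersed blob in proportion to stake; the function below is a hypothetical illustration, not EigenDA's actual assignment algorithm.

```python
# Hypothetical illustration: an operator's share of each erasure-coded
# blob is proportional to its share of total restaked ETH. This is not
# EigenDA's real assignment logic, just a sketch of "linear in stake".

def chunk_counts(stakes: dict[str, float], total_chunks: int) -> dict[str, int]:
    """Assign chunks of a dispersed blob proportionally to stake."""
    total_stake = sum(stakes.values())
    return {op: round(total_chunks * s / total_stake) for op, s in stakes.items()}

stakes = {"big_op": 3200.0, "mid_op": 320.0, "solo_staker": 32.0}
print(chunk_counts(stakes, total_chunks=1024))

# Note the Sybil question answered above: if big_op splits into 100
# solo-sized operators, each holds 1/100th of the data AND 1/100th of
# the stake, so total bandwidth and total rewards are unchanged.
```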
Yeah, and so I guess going on to the slashing conditions, Sreeram, would love to hear.
Yeah, I'll just add a little bit on the throughput thing.
You know, when I first did the numbers and tried to calculate, like, oh, Ethereum's data bandwidth is 80-something kilobytes per second right now,
I was like, oh, why is it so slow?
Why is it so small?
And I think, you know, there is a lot to improve here.
But I want to kind of phrase this in the broad arc of, like, human evolution.
I think of, like, these bytes, you know, inside the team we say our goal in building EigenDA is to maximize coordination bandwidth.
You know, if you think about it, these are complex coordination systems.
Like, we have all these parties in Ethereum certifying and maintaining this ledger, based on which lots of coordination is happening, like, you know, movement of money and other things.
So, you know, if you just neglect the last, like, whatever, five to ten or maybe even twenty years of history, we were as a species able to coordinate on very few things.
Like, we would elect who is the president.
And, you know, a president can specify very simple immigration policy.
Immigration good, immigration bad.
Like, that's one bit, right?
We had coordination bandwidth which was, like, five bits per five years, like something really, really, really, really small as a species.
And suddenly, I think we are, you know, and this takes some time to, like, sink in, we are scaling that to kilobytes per second, to megabytes per second, to gigabytes per second.
So what this means is suddenly the rate at which we can coordinate as a species, maintain common information, enact powerful coordination conditions, has just, like, insanely scaled.
And I think, if you think about it like this,
like, you know, the internet unleashed this information superhighway.
Like, we can kind of talk to each other.
But that's still not the same as the ability to coordinate with each other.
Because, you know, I may be telling you one thing and telling somebody else something different, because this is not global, verifiable state.
And with systems that promise, like, common data availability, or, like, more block space,
essentially we are talking about the bandwidth of coordination as a species, right?
Let's keep all the blockchain wars, L1, L2, L3, like, aside, right?
Just let this sink in.
It's insane.
It's amazing.
It's unusual.
It's like we're becoming much, much more coordinated.
In fact, as a species, I think our evolutionary advantage is that we're able to cooperate at a scale that is simply not possible for other species, in a flexible way.
Like, you all know Harari, in his thesis, says humans are special because we cooperate flexibly in large numbers.
And it is cooperation, when we're talking about, you know, something shared like DA bandwidth.
It is flexible, because I can program all kinds of new VMs and conditions and contracts and interesting arrangements on top of it.
And in large numbers, because we can have everybody agree on this common state.
This is insane.
And, you know, as a community, that's what we're setting out to accomplish.
It's just good to put it in that perspective, rather than the day-to-day thing of, hey, you know, I'm doing X better than Y or Z.
We have two mechanisms right now for ensuring the fidelity of data availability.
Number one is proof of custody, which ensures that people store the data,
but doesn't have a mechanism to ensure that people serve the data.
Proof of custody is basically: you have a secret, and you have to respond in certain ways, signing some blobs based on, like, the state of the data that you're storing.
And if you don't do it correctly, you'll be slashed.
So proof of custody is a mechanism to ensure that you're storing.
But what if all the nodes are storing the data, but they all collude and coordinate to not serve the data to anybody?
That's a problem.
So while proof of custody relies on economic security, because you'll be slashed if you don't do your proof of custody,
how do we ensure that people serve?
It is by ensuring that the operator set remains decentralized and collusion-resistant, so that there is competition to serve.
No one node has all the data; the data is dispersed across many nodes, and as long as you can get a quorum number of nodes, you will be able to retrieve all the data.
Now you have a market where there are many independent players who have the data and are willing to serve.
Unless everybody colludes together, or some large number of stakers collude together, you will be able to retrieve the data.
So that's the mechanics.
So EigenDA borrows both decentralization from EigenLayer, as a separate property, as well as economic security from EigenLayer.
Furthermore, we are actually building new security mechanisms on top of this, which I think we are not yet ready to share,
but we'll be ready to share in the coming months.
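As a toy model of the two mechanisms just described, here is a sketch in Python; the real protocol involves Reed-Solomon coding, KZG commitments, and a proper custody game, all of which are elided here, and every name in this sketch is an assumption for illustration only.

```python
import hashlib

# Mechanism 1 (toy): proof of custody. An operator's periodic response
# is derived from both its secret and the data it is supposed to hold,
# so an operator that discarded the data cannot answer correctly. In
# real designs the secret is revealed after the custody period so the
# response can be checked, and wrong answers are slashed.

def custody_response(secret: bytes, data_chunk: bytes) -> bytes:
    return hashlib.sha256(secret + data_chunk).digest()

def custody_ok(secret: bytes, data_chunk: bytes, response: bytes) -> bool:
    return custody_response(secret, data_chunk) == response

# Mechanism 2 (toy): quorum-based retrieval. Because the blob is
# erasure-coded and dispersed, any quorum of operators suffices to
# reconstruct it; withholding only works if nearly everyone colludes.

def retrievable(honest_operators: int, total_operators: int,
                quorum_fraction: float = 0.5) -> bool:
    return honest_operators >= quorum_fraction * total_operators

assert retrievable(honest_operators=60, total_operators=100)
assert not retrievable(honest_operators=10, total_operators=100)  # mass collusion
```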
One thing that I think is pretty cool about just the primitive of restaked capital is that it opens up opportunities for interoperability across networks that wouldn't otherwise have been interoperable.
And one of the things that I've been keeping an eye on is the super-fast finality layer out of NEAR, which is a partnership and collaboration with EigenLayer.
Sreeram, Teddy, I've been on a quest to learn how all of the many, many Ethereum layer twos, which are fragmented, recompose back into one unified network.
We have so much scale on Ethereum.
We have horizontal scaling.
We have vertical scaling.
But we don't yet have a composed, coherent network, at least from the perspective of the end user.
And there seem to be many different answers as to how all of these networks become recomposed.
But one answer that I always come back to is low-latency settlement finality.
If one rollup can have assurances that settlements from a different rollup are final, all of a sudden we can unlock a lot of composability.
And this is something that I think you guys, EigenLayer and NEAR, are pioneering with this super-fast finality layer.
So Sreeram, maybe you can just walk us through this partnership, this collaboration with NEAR and the super-fast finality layer.
What is it, what is it doing, and what is its impact upon the Ethereum rollup landscape?
The idea of a super-fast finality layer is to ensure that you get instant finalization guarantees.
So what happens is, the rollup writes to this layer.
Think of this layer like a chain.
The chain is getting economic security from ETH restaking; let's keep that on the side, just think of it like a chain.
The rollup writes settlement commitments to this chain, and this chain then writes the settlement commitments to Ethereum.
But the order in which these commitments are recorded is rigid, based on this chain.
So what happens is, I write a commitment to this chain, and the chain gives me a certificate that, yes, this is the order in which this is going.
Now I can take it to another rollup, which is also tethered to this, like, you know, fast zone, and say, yeah, this fast zone has verified that this is actually happening, so I can move value back and forth.
And what this does is solve some of the liquidity fragmentation problems, because instead of liquidity residing primarily in the rollups, the liquidity can reside in this, like, fast zone.
And, you know, each rollup has hooks to draw liquidity out of this layer and then give it back.
This is, like, just-in-time liquidity across all the different rollups, because now you have a common zone which can move really fast, which has economic security because it's borrowing it from ETH restaking.
And now it becomes a zone where, like, a lot of these things compose.
So this is the idea that we are exploring with NEAR, but other projects are also building somewhat similar things on EigenLayer.
One is Omni, which is building a shared liquidity layer.
AltLayer is building a super-fast finality layer specifically tailored around, like, you know, the rollup and EigenLayer ecosystem.
So these are some of the attempts at solving the rollup interoperability problem.
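Here is a rough sketch of that commitment-and-certificate flow; the types and names are assumptions for illustration, and the actual design being explored with NEAR may differ in its details.

```python
from dataclasses import dataclass

# Toy model of the fast finality flow: a rollup posts a settlement
# commitment to the restaked "fast zone", receives a certificate fixing
# its position in the order and backed by slashable stake, and a second
# rollup accepts that certificate long before Ethereum settlement.

@dataclass
class Certificate:
    rollup_id: str
    commitment: bytes      # hash of the rollup's settlement data
    sequence: int          # rigid position in the fast zone's order
    stake_backing: float   # slashable ETH standing behind this certificate

class FastZone:
    def __init__(self, total_restaked_eth: float):
        self.total_restaked_eth = total_restaked_eth
        self.log: list[Certificate] = []

    def submit(self, rollup_id: str, commitment: bytes) -> Certificate:
        cert = Certificate(rollup_id, commitment,
                           sequence=len(self.log),
                           stake_backing=self.total_restaked_eth)
        self.log.append(cert)  # later batched and written to Ethereum
        return cert

def accept_as_final(cert: Certificate, value_at_risk_eth: float) -> bool:
    # A destination rollup treats the commitment as final only if the
    # slashable stake behind it exceeds the value being moved.
    return value_at_risk_eth <= cert.stake_backing
```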
There's another interesting thing, David, that goes on with rollup interoperability, even without this fast finalization.
Imagine I want to move value between one rollup and another rollup at a timescale which is much faster than seven days.
Let's say both are optimistic rollups and I want to move value between them.
The way to do it is, if I want to move, like, 100,000 ETH, and I have, from an EigenLayer service, a promise of more than 100,000 ETH of slashability, then I can take that commitment as final and use that as a trigger to move value between these two rollups.
So first, the fact that the rollups are fragmented, and second, the fact that rollups are fundamentally denominated in ETH, basically gives another utility to ETH as a staking asset for moving, you know, value across these different rollups.
And so we have bridges which are specifically building around this concept of what we call attributable security.
Each bridging claim buys a certain amount of security from EigenLayer and then moves value around.
And as long as the total value being moved around is less than the total, you know, attributable security that the bridge holds, you're actually completely safe.
So this adds another interesting utility to ETH as a staking asset in backing these bridging claims.
A fast finality layer accelerates the rate at which you can do it.
But, you know, that's basically the kind of, like, overall landscape that we're looking at here.
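The safety condition stated here reduces to a simple invariant: value in flight across the bridge must never exceed the slashable security attributed to it. A toy check, with hypothetical names:

```python
# Toy invariant for attributable security: the bridge is fully covered
# as long as in-flight value never exceeds the slashable stake it has
# bought from EigenLayer. A sketch, not the real interface.

class Bridge:
    def __init__(self, attributable_security_eth: float):
        self.security = attributable_security_eth
        self.in_flight = 0.0

    def initiate_transfer(self, value_eth: float) -> bool:
        if self.in_flight + value_eth > self.security:
            return False  # would exceed coverage; reject or wait
        self.in_flight += value_eth
        return True

    def settle_transfer(self, value_eth: float) -> None:
        self.in_flight -= value_eth  # settlement on Ethereum frees capacity

bridge = Bridge(attributable_security_eth=100_000)
assert bridge.initiate_transfer(60_000)      # covered
assert not bridge.initiate_transfer(50_000)  # 110k would exceed 100k coverage
```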
Yeah, super cool. And I think, now that we've covered a number of different use cases for EigenLayer, I want to bring the discussion up to this idea of aggregation across many different AVSs.
So I think a common theme that's been discussed is this idea that, you know, someone who has capital they want to restake might have a hard time deciding which AVS to delegate to, or sorry, which AVS to opt into, and which node operator within that AVS to delegate to.
And so this kind of brings forward this idea of some abstraction layer between the restakers and the actual AVSs.
So I think this is kind of where the liquid restaking token discussion usually fits in.
So I would be curious to hear how you think about layers building on top of EigenLayer, how they're managing the risk of many different AVSs and many different node operators to ensure, you know, the fungibility of their liquid restaking token, and, yeah, just generally your opinion on these tokens as a concept and the potential, you know, implications for the EigenLayer ecosystem.
Yeah. I mean, with this layer of abstraction that Mike is talking about, the idea is that, you know, as a staker, I don't want to sit and make these decisions as to what is the set of node operators, what is the set of AVSs.
Maybe I should allocate some portion to some AVSs, some portion to others.
Should I accept rewards in only ETH, or can I accept rewards in new tokens?
Like, there are just a lot of different dimensions that simply don't exist as a staker in Ethereum.
You just download, stake, and run. It's clear, well specified.
Being a double-opt-in platform, EigenLayer also brings all these new things.
Like, at what price to accept an AVS? You know, does it offset my operating cost?
There are all kinds of questions that go around it.
So these liquid restaking tokens are one subset that basically tries to address these kinds of questions.
The idea being, they create a decentralized organization which basically tries to adjudicate and make these decisions, take stake on people's behalf, and then go and delegate it to various operators.
So the question there is, firstly, are LRTs good for the EigenLayer ecosystem?
I had a somewhat different answer six months back, but after actually considering various things, I think on net they're very good.
The reason is, imagine that somebody wants to build, you know, a lending protocol or some other thing based on your Ethereum stake that's restaked on EigenLayer.
Now, there are two ways to do it.
One is to do it kind of inside the EigenLayer platform and say that, hey, if you get slashed or liquidated, I'll actually go and withdraw your stake from EigenLayer and Ethereum.
And another option is, I have a liquid token, and then I just have people exchange hands.
Like, if I get liquidated, I give my liquid restaking token to David.
The previous one actually has worse cascade risks, because when the liquidation happens, I have to unwrap my, like, EigenLayer position, which means EigenLayer security fluctuates.
And then EigenLayer goes and unwraps your Ethereum position, so Ethereum security fluctuates.
So what I've started seeing liquid restaking tokens as is a layer of buffering.
Basically, let the financial thing be buffered at a higher layer, rather than any financial event, like, creating a shockwave that goes through the entire ecosystem, right?
Imagine there is some, like, big, decentralized stablecoin or something built on top of this, and some ETH-to-USD price change happens.
Without, like, this layer of buffering, you undergo this massive, like, shockwave, which goes through EigenLayer, which goes through Ethereum,
rather than it just, like, getting buffered out at the top, and people just exchange: oh, you know, you got my liquid restaking token instead of me having it.
That is much safer, I think, for the entire ecosystem,
knowing that these systems are permissionless and knowing that these things are anyway going to happen.
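A toy comparison of the two liquidation paths being contrasted here, just to show where the buffering happens; both functions are hypothetical illustrations, not any protocol's actual mechanics.

```python
# Hypothetical sketch of the two liquidation paths. Path 1 unwinds the
# whole staking stack, so the shock propagates into EigenLayer and
# Ethereum security. Path 2 transfers the liquid restaking token (LRT)
# instead, leaving the underlying positions untouched.

def liquidate_by_unwinding(position: dict) -> None:
    # Path 1: withdraw from EigenLayer, then from Ethereum staking.
    position["eigenlayer_stake"] -= position["debt"]  # EigenLayer security dips
    position["ethereum_stake"] -= position["debt"]    # Ethereum security dips

def liquidate_by_token_transfer(position: dict, liquidator: dict) -> None:
    # Path 2: the LRT simply changes hands at the top of the stack.
    liquidator["lrt_balance"] += position["lrt_balance"]
    position["lrt_balance"] = 0.0
```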
Okay, given that, now the same kind of alignment problems that Ethereum had to wrestle with either get amplified, or similar problems show up, with EigenLayer.
Because one of the things is, if there is one single dominant restaking token and that DAO makes all these decisions,
EigenLayer loses the free-market property.
Like, you know, at least in Ethereum, price discovery was automated and algorithmic.
And EigenLayer relies on two sides of the market: AVSs bidding a certain price, and stakers accepting or rejecting that price.
If you had collusion, you know, one party representing the interest of the entire other side,
then you don't have the free-market movement.
So, you know, this is something that we have to figure out, like Ethereum had to figure out, over time.
But one high-level lever is, unlike Ethereum, which has to be, like, absolutely neutral and completely protocol-driven,
EigenLayer can have some governance levers to actually, you know, move the system to be healthy across the multiple sides of the market.
So that's just a lever that we have.
But in general, with these liquid restaking tokens, we are seeing, like, many highly talented teams come in and build them,
which I think is net positive, not only for EigenLayer, but for Ethereum itself,
because it induces more competition, a new opportunity to actually participate in the liquid staking market
by also being part of a liquid restaking protocol.
Yeah. And just to continue on in the risk direction, because I think it's super interesting.
So you were talking about how EigenLayer can kind of be more opinionated about some of these risks.
I think maybe part of that equation is the slashing veto committee.
So I guess I'm curious how you see the importance of that as a tool to, like, underwrite the risk of things built on top of EigenLayer.
And also kind of the trade-off between being a permissionless platform, where anyone can build on it, anyone can launch any new AVS,
but then also, like, some of them are going to be kind of kingmade, in that the committee supports them versus not others.
So can you just talk about the tension there? Yeah, absolutely. Absolutely.
So the goal of EigenLayer is to be completely permissionless.
But, you know, with how we start up the platform, I think, in Justin's words, these platforms have path dependence.
So you want to make sure that the platform starts off safe.
So we are going to start off a bit more permissioned than totally permissionless on day one.
And the way it's going to work is, the slashing veto committee needs to know what slashing to veto,
which means it needs to know what your AVS is.
So there is an onboarding condition: either the slashing veto committee themselves do the onboarding of various AVSs,
or they, like, trust some other committee to do it.
That minimizes the risk profile.
You know, it has to be audited, it has to follow certain guidelines.
All these kinds of things are enforced in that layer,
so that we can onboard safe and useful services before, you know, eventually becoming a completely permissionless platform.
So each service, over time, starts out with the slashing veto, and then, as the platform matures and as the services mature,
there's going to be an option to be free of the slashing veto committee.
Like, you can just go and say, hey, I don't want the slashing veto committee, because I'm rigid,
I'm ossified, I don't need to actually, like, trust you.
Okay. So that is an option.
But more generally, I think, you know, I made the MEV and PBS analogy earlier, and we can follow some of the ideas from that space.
One of the things that happens in MEV-Boost is this concept of a relay.
A relay is a doubly trusted party, trusted by both the block proposer and the block builder.
Both of them trust it for different properties, but it is a doubly trusted party.
Similarly, a slashing veto committee is a doubly trusted party, from the AVS side and from the staker side.
The stakers are trusting the veto committee to veto illegitimate slashing.
The AVS is trusting the veto committee to not veto legitimate slashing.
So it's a doubly trusted party.
So you can take that abstraction and think of this veto committee as an entity.
And now you can say, just like what happened in MEV-Boost, you can create a marketplace of veto committees.
There doesn't have to be one veto committee that, like, some small group of us decides.
There can be a marketplace where people can come in and say,
hey, here's a new veto committee we have self-coordinated to form.
Essentially, this is like an adjudication committee between, like, you know, the AVS and the stakers.
And of course, if the AVS is completely rigid and solid and ossified, you don't need adjudication.
So you go to a null or empty adjudication committee.
But the more untrusted you are, the more you're underwriting some amount of trust from the adjudication committee.
So we envision this as one of the ways in which EigenLayer as an ecosystem evolves to remove this permissioning.
This is one of the ethos that we want to follow: minimize subjective decisions at the EigenLayer level.
This is one of the reasons we don't build a liquid restaking token.
This is one of the reasons we don't say which tokens can be staked and which tokens cannot be staked.
Initially, even though we are starting with a permissioning procedure there, over time it's going to be completely permissionless and people can decide.
And this is the aesthetic: I think a protocol should minimize subjective decisions.
Subjective decisions should be made by agents who have both rights and responsibilities, and they can figure out how to exercise them.
So that's the ethos in which we're building.
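One way to picture that marketplace of veto committees is the purely illustrative sketch below; the names and structure are assumptions, not EigenLayer's actual contracts.

```python
from typing import Callable, Optional

# Illustrative model: each AVS registers with a veto committee of its
# choosing (None for a fully ossified AVS that wants no adjudication).
# A slashing request executes only if the chosen committee declines to
# veto it, mirroring the doubly trusted relay analogy above.

VetoFn = Callable[[str, float], bool]  # (evidence, amount) -> True to veto

class SlashingRouter:
    def __init__(self) -> None:
        self.committees: dict[str, Optional[VetoFn]] = {}

    def register_avs(self, avs: str, committee: Optional[VetoFn]) -> None:
        self.committees[avs] = committee  # None = null/empty committee

    def execute_slash(self, avs: str, evidence: str, amount: float) -> bool:
        veto = self.committees.get(avs)
        if veto is not None and veto(evidence, amount):
            return False  # committee judged the slashing illegitimate
        print(f"slashed {amount} ETH from operators of {avs}")
        return True

router = SlashingRouter()
router.register_avs("ossified_avs", None)                    # no veto layer
router.register_avs("new_avs", lambda ev, amt: amt > 1_000)  # toy veto policy
router.execute_slash("new_avs", "bad signature", 500)        # executes
router.execute_slash("new_avs", "bad signature", 5_000)      # vetoed
```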
The credible neutrality of EigenLayer, of course, is going to be super important.
Especially as, oh, by the way, congrats guys, because during this podcast, EigenLayer just crossed $1 billion in TVL.
I'm sure that makes you feel fantastic, and also perhaps a little bit nervous; at least I think it should.
And we have all of these liquid restaking token teams that are, like, going after that pie, right?
It's a big pie, it's super valuable.
And, you know, the credible neutrality of the EigenLayer protocol, of course, makes plenty of sense.
It ought to be that way.
But what about the neutrality of Eigen Labs, the organization, as it tries to help some of these liquid restaking tokens bootstrap?
Because, of course, we do want these things.
But I can name, like, five names in my head who would enjoy being, like, the EigenLayer-approved liquid restaking token.
And of course, you also can't work with every single liquid restaking token down the long tail, A, because you don't have enough resources, and B, because one of them will be a rug.
And so, like, how do you guys think about just, like, neutrality when it comes to supporting the liquid restaking token ecosystem, from the Eigen Labs perspective?
Yeah, this is, you know, not only for liquid restaking tokens.
This is the case for AVSs.
Let's say there are three bridges which want to build, or, like, you know, finality layers, and all of these things.
So we face this kind of problem.
So one of the ways, at least, is we want to minimize our own role in many of these processes,
so we want to create external committees which will make a lot of the decisions over time.
For example, onboarding, right? Onboarding AVSs.
We don't want to say, oh, I like this AVS more, therefore we are going to onboard it, but not that AVS.
Rather, the onboarding process should be merit-neutral but risk-sensitive, right?
You cannot onboard based on, oh, this is going to be a bigger AVS, that is going to be a smaller AVS.
Instead, it is onboarded based on, this is going to be more risk, this is going to be less risk.
So it's a risk-aware, merit-neutral kind of onboarding process.
But it is a hard, hard trade-off, because we have to also make sure that at least some people build.
If nobody builds, well, you can be credibly neutral, but then nobody's building.
So it is a hard trade-off.
I think layer ones, many of them, had to do this.
Ethereum itself had to do it.
Do you support Uniswap or not support Uniswap?
And, you know, there's a position of the protocol, there's a position of people.
And, you know, Eigen Labs itself is a big team. It is complex.
So I'm not going to pretend I have, like, some great answer here.
But we're trying to both make sure that projects build on top of EigenLayer,
and also make sure that other projects don't feel like there's a barrier to come and build on top of EigenLayer.
So that's how we split the difference here.
Teddy, as a research engineer at EigenLayer, one of the things that excites me about EigenLayer is that it can spawn an entirely new dimension of crypto-economic networks.
The way I kind of articulate EigenLayer is that we were living, once upon a time, in flatland, when all we had were blockchains.
And now, with EigenLayer, we're entering, like, a third dimension, a new dimension of crypto-economic trust networks.
And I would imagine that that is a fantastic nerd snipe for you as a research engineer.
The success of EigenLayer depends on many networks coming on board.
And so I assume, like, you engage with some of these networks.
Some people have ideas about networks that could be built.
There are stealth networks being built, I'm sure, because that's just how it works.
Can you kind of just, like, give us a vibe, a charcuterie board, a taste test of all of these, like, different kinds of networks that maybe you're working with or just in discussions with?
Are we going to see millions of networks in the fullness of time, or maybe just, like, 10 to 100 networks in the fullness of time?
Like, what can you say about, like, all the yields that these networks are going to spit off?
Are they going to be great? Are they going to be little?
Just kind of, like, give us a taste of the future for EigenLayer in 2024.
Yeah, so Sreeram has this slide in his talks where he goes over, there are, like, five different categories of AVSs, and probably more.
We've got co-processors, oracle networks, obviously a variety of DA layers.
There's going to be a Cambrian explosion, in the same way that there was with L1s and other sorts of general token-based crypto-economic schemes.
Yeah, one way of thinking about it is, like, how many SaaS services exist in, like, the cloud era?
Like, people think about modules.
They think about, like, oh, there's data availability, there's settlement, or whatever.
But if you put on a similar hat, you might say, on a cloud there may be, like, one database platform and one virtual machine.
Like, there will be two layers.
But actually, if you look at the cloud, you'll see, like, thousands of successful SaaS companies, software-as-a-service companies.
And it's because, you know, as a civilization, I think our tendency is to hyper-specialize.
Because when we specialize, there is, like, a lot of value in specialization and, like, composition, right?
I specialize and I just do, oh, a database for games, for, like, you know, games which are coming from, like, AAA studios or whatever, or just, like, a database for games for the Unreal Engine, right?
Like, something super specialized.
But that itself is a big enough and interesting enough market that somebody will build that kind of special layer.
So that's our long-range thesis: there are going to be lots and lots of modules,
just like there are lots and lots of SaaS services.
We can think of it as a new kind of financial market.
Because, I don't know, if you look at, for example, bonds, bonds are, you know, a generalization of a loan, where money is actually changing hands over time.
Restaking is different from that, because money is not directly changing hands; it's enforcing a crypto-economic set of incentives and schemes.
But I guess the gravity of the invention of a generalization of this kind of financial instrument is on the same order of magnitude.
We think that, you know, similar to how the number of bytes and kilobytes and megabytes and gigabytes of coordination we're able to generate through blockchains and DA layers continues to grow,
the number of crypto-economically secured services will also grow.
All right, guys, well, I've already learned quite a lot in this episode.
I'm going to have to really re-watch this, re-listen to this,
to make sure I can understand some of Mike's questions and Sreeram's answers.
Sreeram, I think you said mainnet sometime Q1, Q2 2024.
Is that all of the details you can give us?
And what else are you looking forward to in EigenLayer in 2024?
Yeah. We have an upcoming mainnet launch in Q1 and Q2. We will expand the scope of what kinds of slashing and attributions can be done. For example, we have this mechanism we call attributable security, where you're not just getting this idea that, hey, you know, if my service goes wrong, X dollars will get slashed, but I will be able to redistribute
a portion of that, you know, X dollars.
So, combining the idea of pooled security and attributable security, and creating mechanisms
where, you know, a particular AVS has a portion of that pool as a specific attribution.
That's something that we are expecting to launch also in 2024.
We have, of course, EigenDA, which I think is going to be a very important and useful primitive for rollups.
We have many partners launching rollup services,
like decentralized sequencing from Espresso.
We have AltLayer launching these finalization and other services for rollups.
We have major bridge partners like Polymer, Lagrange, Wormhole,
launching a bunch of different bridging services.
We are also excited about AI-related services coming up on EigenLayer.
Like, imagine you're sitting inside an Ethereum contract and you can make a call
and get an AI service and its answer certified with a certain amount of economic security,
so that if you take an economic action, you're protected up to that amount of value.
So you can think of DeFi itself becoming more intelligent, because now you have the ability
to get, like, you know, much more computational resources at your disposal.
This is the category of co-processors that Teddy was referring to.
So we're excited about all of these.
There are also other things.
For example, there is MEV, where, you know, value can be controlled more by a protocol,
rather than MEV value necessarily going all to the validators.
So an application can say, hey, you know, this group has to come to consensus in order to, like,
certify the MEV, and they will redistribute a portion of the MEV back to the protocol.
All these kinds of really interesting use cases are coming up with EigenLayer this year.
So we're looking forward to encouraging more builders to come build on top of our platform,
both as rollups on EigenDA, but also building brand-new services on EigenLayer.
If people want to just learn more, get information,
open up the docs, get started, Sreeram, where should they go?
Eigenlayer.org. Yeah. There are also, like, you know,
forums where we have, like, you know, research discussions on the EigenLayer system.
There is a blog where we regularly put up material on new things.
Also, the EigenLayer Twitter handle, which points to all these different things over time.
Well, Sreeram, I think the evolution of restaking networks, AVS networks, is going to be one
of the more fascinating things in 2024.
And one of the reasons why I like it is because it goes right down to the heart of
ETH, and I think the monetary arc and development of ETH as a monetary asset
is one of the most fascinating things in crypto,
and EigenLayer seems to be the next evolution in that story.
So, as a podcaster who likes interesting podcasts,
I thank you for bringing this evolution to the world.
Teddy, Sreeram, thank you so much for coming on Bankless today.
Thank you so much, David and Mike. Our pleasure to be here.
Bankless Nation, you know the deal.
Crypto is risky.
Staking is risky.
Restaking is even riskier, but, you know, it's probably more fun too.
You can lose what you put in.
We're headed west.
This is the frontier.
It's not for everyone, but we are glad you are with us on the Bankless journey.
Thanks a lot.
