Bankless - Justin Drake & Ben Fisch: The United Rollups of Ethereum
Episode Date: February 28, 2024

In today’s episode, we do a shared sequencing deep dive with repeat guest, Mr. Moonmath himself, the Blockchain Brainiac, and the Ethereum Evangelist, Justin "The Juggernaut" Drake. Justin is joined by The Sultan of Sequencing, the Espresso Emperor, the Cross-Rollup Connoisseur himself, Ben, the Blockchain Barista, Fisch. Where the fragments of Ethereum threaten to turn the digital ecosystem into a maze of confusion, these two men stand tall, illuminating the way, to come and save us from the labyrinth of Ethereum’s rollup-centric roadmap.

------

📣SUI | Register for Sui Basecamp https://bankless.cc/SUI-podcast

------

🎧 Listen On Your Favorite Podcast Player: https://bankless.cc/Podcast

------

BANKLESS SPONSOR TOOLS:
🐙KRAKEN | MOST-TRUSTED CRYPTO EXCHANGE https://k.xyz/bankless-pod-q2
🦄UNISWAP | SWAP SMARTER https://bankless.cc/uniswap
🔗CELO | CEL2 COMING SOON https://bankless.cc/Celo
🗣️TOKU | CRYPTO EMPLOYMENT SOLUTION https://bankless.cc/toku
🛞MANTLE | MODULAR LAYER 2 NETWORK https://bankless.cc/Mantle
⚖️ARBITRUM | SCALING ETHEREUM https://bankless.cc/Arbitrum
💸CRYPTO TAX CALCULATOR | USE CODE BANK30 https://bankless.cc/CTC

------

TIMESTAMPS
0:00 Intro
6:09 Goals For the Episode
8:32 Sequencing / Sharing Explained
15:13 Ethereum’s Arc
18:20 Shared Sequencing Obstacles
23:52 Network Effects & the Cold Start Problem
32:54 MEV Redistribution
45:25 Reserve Price
46:45 Loss of Sovereignty
54:34 Espresso Announcement!
1:03:52 Fast L2 Finality
1:09:52 Real-Time, Recursive Proving, & Hardware
1:13:32 Flash Loans & Snark Proving
1:15:38 Low Latency Finality
1:17:08 Censorship Solutions
1:21:50 Discussing the Downsides
1:35:38 Known Unknowns of the Future
1:39:46 Closing & Disclaimers

------

RESOURCES
Justin Drake https://twitter.com/drakefjustin
Ben Fisch https://twitter.com/benafisch

------

Not financial or tax advice.
This channel is strictly educational and is not investment advice or a solicitation to buy or sell any assets or to make any financial decisions. This video is not tax advice. Talk to your accountant. Do your own research. See our investment disclosures here: https://www.bankless.com/disclosures
Transcript
Welcome to Bankless, where we explore the frontier of internet money and internet finance.
And today on Bankless, we explore the frontier of composability of Ethereum's roll-up-centric roadmap.
Today on the show, we have Justin Drake as my technical co-moderator as we both interview Ben Fisch, the CEO of Espresso Systems.
Espresso is building a shared sequencer that operates at a higher plane than the Ethereum Layer 1 that allows for shared execution amongst the Ethereum roll-up landscape.
The frontier that Espresso is building in is becoming more and more relevant as the fragmentation of Ethereum's rollups becomes more and more obvious. There are a bunch of solutions being built in parallel, and Espresso unlocks a lot of them, allowing a lot of innovation to come into the world of composability across Ethereum's roll-ups. At the beginning
of this episode, I really make sure that we drill down into the foundations of this conversation,
the 100-level, 200-level basics, before I kind of let Justin Drake take the reins and get into
some of the deeper ends, the more technical nuances of shared sequencing. So I think this episode is
accessible to all skill levels.
If you are at the beginning of your arc of understanding Ethereum, you're going to catch
a vibe.
And then if you are deep into the weeds, I think you're going to really enjoy the second
half of this episode.
But first we're going to talk about Sui and Sui Basecamp, which is an in-person conference in Paris during Paris Blockchain Week, April 10th through 11th.
So if you are interested in a parallelized, delegated-proof-of-stake layer one, Sui Basecamp might be for you.
There is a link in the show notes to get 20% off of your pass to Sui Basecamp. This is the Move ecosystem, for all the devs out there who enjoy Move.
Sui is like the spiritual successor to the whole Libra project, now turned into a layer one. So especially if you're a dev who's interested in parallelized new VMs, there's a link in the show notes to get started.
I think the exploration of composability across Ethereum's roll-ups is just beginning on Bankless, and this is our second step into the world of composability here, our second episode with Justin. We will get into further conversations with other players in this arena in the future. But before we get into
this conversation with Justin and Ben, first a moment to talk about some of these fantastic sponsors
that make this show possible. If you want a crypto trading experience backed by world-class security
and award-winning support teams, then head over to Kraken, one of the longest-standing and most secure crypto platforms in the world. Kraken is on a journey to build a more accessible, inclusive, and fair financial system, making it simple and secure for everyone, everywhere, to trade crypto. Kraken's intuitive trading tools are designed to grow with you, empowering you to make your first or your hundredth trade in just a few clicks. And there's an award-winning client support team available 24/7 to help you along the way, along with a whole range of educational guides, articles, and videos. With products and features like Kraken Pro and the Kraken NFT marketplace and a seamless app to bring it all together, it's really the perfect place to get your complete crypto experience. So check out the simple, secure, and powerful way for everyone to trade crypto, whether you're a complete beginner or a seasoned pro. Go to kraken.com slash bankless to see what crypto can be. Not investment advice; crypto trading involves risk of loss.
Uniswap is revolutionizing the DeFi space, not just by enabling swaps, but by empowering you to swap smarter with a comprehensive suite of products for faster, safer, and more informed swapping. Say goodbye to pop-up wallet extensions. The Uniswap extension is coming soon, and this extension is not a pop-up. It is a
sidebar in your browser that persists no matter where you are on the web. This means you can swap, sign or send
and receive crypto anytime, anywhere, without obstructing your browser window.
But that's not all.
The Uniswap web app now features limit orders, so you can buy and sell any token at your
price on your terms without having to watch the market.
And the best part, limit orders are built on UniswapX, which means no gas fees.
Also new to the web app is the data and insights pages with real-time candlestick charts,
price data, transaction logs, and detailed pool data, all integrated into the Uniswap web app.
All of these new releases come together to create one platform to help you swap smarter every time,
no matter where you are on web, mobile, or on the extension.
Click the link in the show notes to sign up for the extension waitlist and download the mobile app.
Start swapping smarter with Uniswap.
Celo is the mobile-first, EVM-compatible, carbon-negative blockchain built for the real world, driving real-world use cases like mobile payments and mobile DeFi.
And with Opera MiniPay as one of the fastest-growing Web3 wallets, Celo is seeing a meteoric rise with over 300 million transactions and 1.5 million monthly active addresses.
And now Celo is looking to come home to Ethereum as a layer two. Optimism, Polygon, Matter Labs, and Arbitrum have all thrown their hats in the ring for the Celo layer 2 to build upon their stacks.
Why the competition?
The Celo layer 2 will bring huge advantages like a decentralized sequencer, off-chain data availability secured by Ethereum validators, and one-block finality.
What does that all mean for you?
With the Celo layer 2, gas fees will stay low, and you can even pay for gas natively using ERC20 tokens, sending crypto to phone numbers across wallets using Social Connect.
But Celo is a community-governed protocol.
This means that Celo needs you to weigh in and make your voice heard.
Join the conversation in the Celo forums.
Follow Celo on Twitter and visit celo.org to shape the future of Ethereum.
Bankless Nation, ladies and gentlemen, in my left corner, we have Mr. Moon Math himself,
the blockchain brainiac, the Ethereum evangelist, Justin, the juggernaut, Drake.
Justin, welcome to the show.
David, thanks for having me again.
And coming in for his first time on Bankless, we got Ben Fisch, the Sultan of Sequencing,
the Espresso Emperor, the cross-rollup connoisseur, the Blockchain Barista.
Ben, welcome to Bankless.
Pleasure to be here, David.
Thank you for having me.
So where the fragments of Ethereum
threaten to turn this digital landscape
into a maze of confusion,
two men stand tall,
illuminating the way to come save us
from the labyrinth of Ethereum's roll-up-centric roadmap.
Today on the show, we're going to be talking about
shared sequencing and what this means for Ethereum's roll-ups,
what it means for composability across Ethereum's roll-ups,
and trying to get down into the deep end
of some of the nuances that are going to ultimately turn into engineering questions and then into production
products for the Ethereum ecosystem.
And so we got Justin Drake not as a guest, but as my technical co-moderator for this episode.
And I think I'm going to just make sure that we kind of touch on some of the easy things,
the easy subjects before we get into the deep end here.
But Justin, maybe I could just ask you before we really get started here, what would you
say are your goals for this episode?
What knowledge do you want to get out of Ben?
what should listeners be prepped for or be thinking about as we progress further into this conversation?
Right. I mean, I'd like to push forward the discussion around shared sequencing and base sequencing specifically.
I think we're in this interesting phase shift moment where people are starting to pay attention to this, the problems of fragmentation, and also the solutions are starting to pop up.
There's a bunch of misconceptions, or maybe I should say preconceptions, that would be good to clarify. And I also feel that there's this wave of momentum of teams like Ben's team
that are potentially making a big pivot. And I think Ben might have a big announcement to make
here during the podcast. Interesting. Yeah, Ben, maybe share some of your similar thoughts or goals
or just interests in the knowledge and content and conversations that you want to broadcast. What goals
do you have for this episode? Yeah, similar to Justin, I would like to address head-on, you know, shared sequencing and what it is and why I'm very bullish on it,
why I think that it's very important for the evolution of the roll-up space and for the scalability
of Ethereum while preserving its unity and defragmenting, you know, what has happened to Ethereum with the current scaling through many different fragmented roll-ups that are not united. I also want to talk about, as Justin alluded to, how we've gotten very excited about
based sequencing, the idea of involving the L1 itself, Ethereum L1 in as much a way as possible
in shared sequencing. And we've actually found a way to merge the narrative of Espresso with
that of base sequencing. And that's been a very exciting and new development for us that I've been
talking a little bit about but haven't talked about publicly yet until this podcast.
And so I could get into a bit of how we've made those adjustments at Espresso and how roll-ups
running on espresso can actually be based roll-ups.
Okay.
I love that.
And I want to put a pin in that because I want to work towards that point.
But I think we need to establish a foundation, some knowledge before we get into what the
significance of that is.
So let's see how fast we can kind of speed run through the 100 level and 200-level aspects of
this conversation before we get into the deep end. So this is like just start at the 101 level.
Sequencing, what is it? And then what does it mean to share it? So what's sequencing at all?
And then what does it mean to add sharing onto that? Ben, we want to take this one?
Yes. So sequencing today for a roll-up is the act of determining the transactions that will actually
go into the roll-up.
And currently, most roll-ups today have a server that collects these transactions
and, you know, orders them, gives confirmations to users that these are the transactions
that are going to be included.
And then, ever so often, it settles them on Ethereum by posting them to a smart contract.
And the smart contracts today can only be updated by this sequencer.
So they give total authority to this central server to determine which transactions are going to update the roll-up, with the exception of transactions that can be force-included, and there are other details we don't need to get into right now.
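To make the flow Ben just described concrete, here is a minimal Python sketch of a centralized sequencer: it collects transactions, fixes their order, hands out soft confirmations, and ever so often posts a batch to the L1. The class and method names are illustrative assumptions, not Espresso's or any rollup's actual API.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Tx:
    sender: str
    payload: str

@dataclass
class CentralizedSequencer:
    """Toy model of the flow described above: one server collects
    transactions, fixes their order, hands out soft confirmations,
    and periodically posts a batch to the L1 rollup contract."""
    mempool: List[Tx] = field(default_factory=list)
    batches_posted: List[List[Tx]] = field(default_factory=list)

    def submit(self, tx: Tx) -> str:
        # The sequencer appends the tx, fixing its position in the order.
        self.mempool.append(tx)
        # Soft confirmation: "trust me, this will be included at this index".
        return f"confirmed at index {len(self.mempool) - 1}"

    def settle_on_l1(self) -> None:
        # Ever so often, post the accumulated batch to the rollup's
        # smart contract on Ethereum (modeled here as a list).
        if self.mempool:
            self.batches_posted.append(self.mempool)
            self.mempool = []
```

Because only this one server can update the contract, users get instant (trusted) confirmations while settlement happens later in batches.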
Maybe the first thing to talk about then with sequencing is, well, what is decentralized sequencing?
It's simply taking that and making it not one server that's in charge of sequencing, but many different servers.
having a decentralized, some kind of decentralized protocol whereby it's not just one party that can determine the transactions that get included.
Maybe it's a rotating set of parties. Maybe it's a Byzantine fault tolerant protocol.
And shared sequencing is the idea that multiple roll-ups can share a common mechanism for determining this ordering.
And I'll give one slightly zoomed-out take, though, on shared sequencing that I don't think is a common way of describing it, but it's definitely the way that I think about it.
I think of shared sequencing as not necessarily sharing a common party or protocol for determining
the ordering, but it's more about sharing a mechanism, sharing a marketplace, whereby roll-ups
can effectively sell their block space to third parties who are bidding on the joint
block space and may value it more and can create a surplus value. But roll-ups really can determine
by the slot whether to sell that block space or not. And, you know, if there is value to be had in
parties jointly sequencing for multiple roll-ups at once,
then the outcome will be that there are these third parties
that will be simultaneously purchasing the block space
from multiple roll-ups and producing these blocks simultaneously
and creating a surplus value that gets redistributed to roll-ups.
I think that last point gets a little bit deeper,
and I'm going to put a pin in that,
because I know we're trying to go over basics 101
of what shared sequencing is, but I'll stop there.
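One way to picture the marketplace framing Ben just gave is as a tiny combinatorial auction: each rollup can sell its next slot, third parties bid either on a single rollup's slot or on a bundle of slots across rollups, and a bundle wins only if it beats selling the slots separately, leaving a surplus to redistribute. This is a toy sketch under those assumptions; the function and inputs are hypothetical, not Espresso's actual mechanism.

```python
# Toy slot auction: a joint sequencer's bundle bid must beat the sum of
# the best solo bids it displaces, so no rollup is worse off for sharing.

def allocate_slots(solo_bids, bundle_bids):
    """solo_bids: {rollup: best solo bid};
    bundle_bids: {frozenset of rollups: bid for that bundle of slots}.
    Returns (winning bundle or None, surplus over selling separately)."""
    baseline = sum(solo_bids.values())          # value of selling solo
    best_bundle, best_value = None, baseline
    for rollups, bid in bundle_bids.items():
        # Slots outside the bundle are still sold to their solo bidders.
        outside = sum(v for r, v in solo_bids.items() if r not in rollups)
        if bid + outside > best_value:
            best_bundle, best_value = rollups, bid + outside
    surplus = best_value - baseline             # extra value to redistribute
    return best_bundle, surplus
```

If the joint bid doesn't exceed the solo baseline, each rollup just sells its own slot, matching Ben's point that rollups decide slot by slot whether to sell.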
maybe one way to articulate the purpose or advantage of shared sequencing, and this is just my understanding, so check me on this, Ben, is that when you have one roll-up, you have the economics of that one roll-up, and then when you have a sequencer that is spanning multiple roll-ups, aka a shared sequencer, one roll-up can have more opportunities for double coincidence of wants with the transactions in one roll-up as it relates to another set of transactions in another roll-up.
And so a shared sequencer is kind of like a matchmaker between the double coincidence of
wants of the economies of two rollups.
And when it behooves both rollups, a shared sequencer kind of makes those connections
happen.
Is that like a simplified way to understand this?
Absolutely.
Yeah.
It enables faster bridging between roll-ups.
There are certain things that you can't do today.
Like maybe it's good to, what you said is entirely correct.
But I think just giving a few quick examples might help make it concrete.
Let's say that I want to buy an NFT on one roll-up, but I want to pay for it with cash on a different roll-up.
I might want to make these two transactions atomic without even bridging assets over.
Could I buy an NFT on Zora with, you know, I don't know, cash that I have on zkSync, right?
What if I want to, you know, take advantage of arbitrage between AMMs on two different roll-ups?
What if I want to fund a transaction on one roll-up using funds that I have on the other?
Let's say I want to take a loan on one roll-up and use it to fund a transaction that I have on another,
and then even repay the loan.
So do some kind of flash loan, for example.
All of these things are not, you know, possible with the status quo,
and shared sequencing is not the same as being in the same execution environment like Ethereum,
but it can get us under certain kinds of economic guarantees very, very close to that.
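The examples above (buy an NFT on one rollup while paying on another) boil down to all-or-nothing inclusion across chains. Here is a toy sketch of that atomicity; `Rollup` and `execute_bundle` are hypothetical names for illustration, not any real sequencer's interface.

```python
# Toy all-or-nothing bundle: a shared sequencer includes a bundle's legs
# on their respective rollups only if every leg's validity check passes,
# so the user never ends up with half a trade and no bridging is needed.

class Rollup:
    def __init__(self, name):
        self.name = name
        self.included = []          # transactions included on this rollup

def execute_bundle(legs):
    """legs: list of (rollup, tx, check) where check() must hold for the
    leg to be valid (e.g. 'the NFT is still listed at this price')."""
    if not all(check() for _, _, check in legs):
        return False                # one leg fails -> nothing is included
    for rollup, tx, _ in legs:
        rollup.included.append(tx)  # all legs land together
    return True
```

This isn't the same as a single execution environment, matching Ben's caveat, but under the right economic guarantees it approximates atomic cross-rollup execution.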
Yeah, I like the marketplace analogy because it's all about creating value that otherwise couldn't
have been created. So there are certain instances where we want this really low friction
form of interaction, which we call synchronous composability. And that provides
more value than the counterpart, which would be asynchronous composability.
And it turns out that on L1, every roll-up benefits from synchronous composability.
So this is the luxury that we've gotten used to.
And once you don't have shared sequencing, when you have this siloed sequencing,
then suddenly you fall back to this lower form of composability.
And I think what Ben is trying to do is kind of bubble up this value creation, and when there are opportunities to have synchronous composability,
try and capture this additional value creation
and potentially even give it back to the roll-ups.
So it's a net positive for everyone.
Not only are we creating value,
but this value goes back to the roll-ups
and there's no loss from the perspective of an individual roll-up.
Maybe the Ethereum layer one,
the common settlement layer for all roll-ups,
maybe one way to perceive it is like a minimum viable composability for all roll-ups.
At the very least, roll-ups are composable, albeit slowly, albeit extremely asynchronously,
through the layer one.
And that is our threshold.
That's our floor.
That's our foundation of level of composability.
And with additional levels of innovation, shared sequencing being one of them,
maybe intents are also relevant here, we can actually kind of raise the floor of what
composability looks like.
And maybe that's kind of like where we are in Ethereum's arc right now: we've got this minimum level of composability,
and now the webbing between some of the roll-ups are starting to get fleshed out,
which is like what this conversation is here.
Ben, what would you like to add to this?
I'll say it like this.
Ethereum is a shared sequencer.
We already have a shared sequencer, right?
Ethereum is a shared sequencer.
Not only for all applications that run on Ethereum, it is a shared sequencer for roll-ups.
It is not as good a shared sequencer as we want right now.
It's not optimized to be a shared sequencer.
Well, it's a shared sequencer that happens on a delay, right?
And I think that it's talking about Ethereum as a shared...
The difference between shared settlement layers and shared sequencers, I think, is, you know, grayer than people might think.
So I think all this discussion about shared sequencing is about improving on the baseline that we have,
which is that roll-ups share Ethereum as a settlement layer.
And how can we get more out of that?
How can we get, how can we get atomic execution promises, right?
How can we get just more value out of shared infrastructure than what we already have today?
David, I really like your point around the minimum foundation that if you provide,
which is this asynchronous composability.
And the way, I guess a good metaphor is countries.
Let's say you have two different countries.
At a minimum, what you can do is transport from one country to another if you get a visa or if you show your passport.
And you can also do trade, but there's some friction.
You might have to pay import duties or whatever it is.
And then there's something a little tighter, a little more close-knit, which is something like the United States or Europe, which is some sort of coalition where within those countries or within those states, you have full freedom of movement and full freedom of commerce. And in some sense,
chains opting into a shared sequencer is all about maximizing this freedom of movement of assets
and value and also maximizing the freedom of commerce. And I really like the analogy that you
and Ryan put forward, which is this united chains of Ethereum. We're creating a super low friction
environment for the different chains to scale Ethereum with very, very little trade-offs.
Okay, so this all sounds great.
Fantastic.
Why don't we have this right now?
Like, why this seems so simple?
Just like share the sequencers.
Just get it done.
Why can't we have it?
What are some of the obstacles, Ben, that you're running into at Espresso, right?
What are just the net obstacles for the ecosystem as to why this is difficult?
Like, why can't we have this?
Yeah, good question.
So I think that in part there's a technical challenge.
There's also a more social challenge, a coordination challenge.
Let me touch on the technical challenge first.
So actually, the original concept of roll-ups was just to share Ethereum as a sequencer.
The original idea for roll-ups was you don't actually have a sequencer, you just allow the smart contract to collect transactions.
And so effectively the L1 proposer is basically the shared sequencer for all
roll-ups. And just as a bit of a historical note, you know, the sequencer was introduced
as something that could be used for a performance benefit. So by having the smart contract
only allow itself to be updated by this one party, you can allow that party to collect many
transactions. And because it knows that it's the only entity that can update this contract,
It could collect many transactions at high throughput.
It could compress those transactions more, so it can get more compression out of this and thus cheaper fees.
It can also, if you trust it, give users very fast finality guarantees.
So it can say, if you trust me, then believe me, this is the transaction.
You don't need to wait for it to settle on Ethereum.
This is the transaction.
Just trust me, and I'll eventually post it to Ethereum.
So that has all kinds of performance benefits.
and UX benefits.
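The compression benefit Ben mentions can be illustrated with the standard-library `zlib` module: many similar transactions compressed together as one batch cost far less data than compressing each one alone, which is where the cheaper fees come from. This is a generic illustration with made-up transaction strings, not any rollup's actual encoding.

```python
import zlib

# Toy illustration of batch compression: 100 similar transfer strings
# compress dramatically better as one batch than one at a time, so the
# per-transaction data cost posted to L1 shrinks.

txs = [f"transfer from=0xabc{i:03d} to=0xdef amount=100" for i in range(100)]

# Cost if each transaction were compressed and posted individually.
individual = sum(len(zlib.compress(tx.encode())) for tx in txs)

# Cost if the sequencer batches everything and compresses once.
batched = len(zlib.compress("\n".join(txs).encode()))

assert batched < individual  # batching + compression saves L1 data
```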
If you're going to go back to getting all roll-ups to share a sequencer,
then you need, from a technical perspective,
to preserve some of these performance benefits that we got from centralized sequencing.
And so that has been, you know, the main, I guess, technical challenge at espresso.
And even now, where we've figured out how to allow roll-ups to be based, and to basically allow whoever is proposing the next Ethereum block to act as a shared proposer for the roll-ups as well, we've figured out technically how to integrate the technical meat of what we have at Espresso, which is a very fast, high-throughput consensus protocol called HotShot, in a way that allows these roll-ups to enjoy much of the performance benefits that they have today, while still sort of sharing a proposer who is also the proposer of the L1.
So I don't want to get too much into the weeds on that,
but I just want to say that, look, that's the sketch of where the technical challenges come from.
And then the social challenge is convincing roll-ups that this is a good idea,
that they are going to get surplus value from not having so much fragmentation,
but enabling stronger interactions with other roll-ups and also with the L1 itself.
That coordination challenge is difficult because roll-ups evolved in a world where there wasn't
shared sequencing.
And many of these roll-up projects have considered doing shared sequencing within their ecosystems, like Optimism with the Superchain, for example. But the idea of there being one Ethereum shared sequencer that expands beyond any individual roll-up ecosystem is new. And even if it's somewhat where we started, it's not where we ended up, and we are trying to get back to that original state.
Yeah, so if I were to summarize, basically what you're saying is that the sequencer is this coordinator which provides various services.
It can do optimal sequencing and deduplication and compression.
So it's a performance benefit.
But it also provides UX benefits.
One of them, as you said, is this finality, if you trust me, also known as pre-confirmations.
And then another very useful service is MEV protection, which is essentially implementing an encrypted mempool using a centralized sequencer.
As a user, I send my transaction encrypted to the centralized sequencer.
No one else sees it, and so I can't get sandwiched.
And I guess what you're saying here is that we've made this mental unlock,
which is very new recently, where we can think of the L1 proposes as providing these exact same services
that centralized sequences provide,
and at the same time have all sorts of benefits.
Benefit number one is we have this credibly neutral platform.
Benefit number two is that we inherit a lot of the security of Ethereum.
And then benefit number three is that now we can think of true global and universal shared sequencing
as opposed to having ecosystem by ecosystem shared sequencing.
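As a rough illustration of the encrypted-mempool idea Justin just summarized, here is a commit-reveal toy in Python: the sequencer fixes an order over hashes before it can see transaction contents, so it cannot sandwich based on what's inside. This is a simplification; real designs typically use encryption (e.g. threshold decryption) rather than this bare commit-reveal, and the class name is made up.

```python
import hashlib

# Toy commit-reveal sequencer: ordering is fixed while contents are
# hidden, which is the property that prevents content-based sandwiching.

class CommitRevealSequencer:
    def __init__(self):
        self.order = []        # committed hashes, in arrival order
        self.revealed = {}     # hash -> revealed transaction bytes

    def commit(self, blob: bytes) -> str:
        h = hashlib.sha256(blob).hexdigest()
        self.order.append(h)   # position fixed while contents are hidden
        return h

    def reveal(self, blob: bytes) -> bool:
        h = hashlib.sha256(blob).hexdigest()
        if h in self.order:    # only previously committed txs count
            self.revealed[h] = blob
            return True
        return False

    def finalized(self):
        # Execute in the pre-committed order; contents are visible
        # only after the order can no longer be changed.
        return [self.revealed[h] for h in self.order if h in self.revealed]
```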
Justin, where would you like to take things?
from here. I have one set of questions that maybe we will get to, but it's more about just like
one of the last things that Ben said, which is like, at the very start, no one wants to use
shared sequencing. At the very end, everyone wants to use shared sequencing, the end being
when everyone else is already using shared sequencing. But I don't know if I want to hop there
first. Justin, what kind of, what rabbit hole in this whole world do you want to go down first?
Right. I mean, I do want to go through various innovations that Ben and
and his team are bringing to the table.
One of the big ones that we've already alluded to is
MEV redistribution.
But I really like what you said,
that here,
shared sequencing is all about network effects.
And so if you're one sole roll-up,
like what is the incentive for me to join a network with one node?
There's no network effects.
And so bootstrapping these initial network effects,
you know, can be difficult.
But once everyone else has a shared sequencer,
then it almost becomes a no-brainer for you to come in
because you enjoy these network effects.
And one of the things that I do want to highlight economically
is that there's this idea of MEV, right,
and do roll-ups lose it or do they get it kicked back to them?
But there's also this idea of execution fees
whereby by connecting to the network,
you actually increase your total amount of execution fee.
So I guess I want to go to Ben and ask him,
you know, do you agree that there's these network effects at play?
And like what would be your strategy to try and kickstart things and solve the cold start problem?
Yeah. Justin, I agree there are absolutely network effects.
And the value of a shared sequencer increases, you know, quadratically in the number of roll-ups that are joining it.
I wouldn't say, however, that nobody wants to use a shared sequencer at the beginning.
I think everyone talks about decentralizing their sequencer.
And if you're going to decentralize your sequencer,
you might as well use a decentralized shared sequencer
that you know other roll-ups are going to join,
as long as it doesn't have any downsides.
So the idea that it solves an immediate problem for you now,
even though if you're the first one to the party,
you know, you still needed a solution
for decentralized sequencing,
and you might as well go to the party
that everyone else is going to show up at.
The other thing I will say...
Wait, Ben, before you move on,
I'd like to actually kind of just like unpack that
just a little bit more.
Yeah.
So decentralizing a sequencer,
so like right now the base case for Ethereum roll-ups
is the way we have centralized sequencers.
We get some benefits from that.
But like, ideally, we would like to go
from the current state where, like, Arbitrum or Optimism
have one single sequencer,
and they kind of want to split it
maybe into like three different sequencers that do some sort of like round robin thing,
some sort of setup that gives them some new properties, mainly liveness protection.
So if one sequencer goes down, the other two out of the three are always still in the network.
And so the network has some sort of robustness in terms of liveness.
That's the reason why, internally, Optimism or Arbitrum would enjoy decentralizing their sequencers.
Maybe there are some other benefits as well.
Mainly it's a big, like, liveness thing.
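The liveness argument David is making can be sketched as a simple round-robin with failover, assuming a hypothetical `next_proposer` helper: as long as at least one of the sequencers is up, some proposer is always selected and blocks keep being produced.

```python
# Toy round-robin rotation: three sequencers take turns proposing;
# if the scheduled one is down, the rotation skips to the next one,
# so liveness holds as long as any single sequencer is up.

def next_proposer(slot, sequencers, is_up):
    """Rotate through sequencers by slot number; skip any that are down."""
    n = len(sequencers)
    for i in range(n):
        candidate = sequencers[(slot + i) % n]
        if is_up(candidate):
            return candidate
    return None  # total outage: every sequencer is down
```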
And what you're saying is like, well, they want this.
they do want this. It's not, this isn't, this is a known quantity. What you're saying is like, well,
there happens to be this specific sequencer that could also be one of these, you know, of the three,
an arbitrary number that I just picked, uh, of the three, they could just choose this one,
which also is a shared sequencer. So they are decentralizing their sequencer. If they
just pick one that's also shared, maybe Arbitrum and Optimism are actually using Espresso as one of
their sequencers. And so there's not as much of a cold start problem: if they are already looking to decentralize their sequencer, they'll just, you know,
why not also pick the sequencer, which is having a party where other people will show up,
is a shelling point for where people will show up, and we can start to get some of those double,
triple, quadruple coincidences of wants, depending on how many chains decide to elect to use
this particular shared sequencer. That's kind of what you were saying, right? That is kind of
what I was saying, yes. The other thing to note is,
with based sequencing, you automatically are sharing a sequencer with Ethereum, with the L1, with the EVM.
So by involving the L1 proposer in the sequencing protocol, right?
So all sequencing protocols basically have a rotation of, okay, who can propose the next block, right?
Who's able to propose the next block for all the roll-ups that are on this sequencing protocol?
And if that can be the L1 same as the proposer for the next Ethereum block, then suddenly this
proposal can start to give what we call pre-confirmations about what it will do.
So if I'm a user and I say, I only want my transaction to execute, you know, on Optimism or
Arbitrum or zkSync or Starknet, whatever roll-up, Taiko, if the price on Uniswap on the L1 is X,
then, you know, I can get some kind of promise from the proposer that this will happen.
And if it doesn't happen, the proposer will be slashed.
Or another way of thinking about this is insurance.
I can buy an insurance policy from the proposer that will cover my losses if this thing that I want to happen,
this intent that I have as a user, is not satisfied.
Who is able to actually sell this insurance, not just anyone on the street, right?
The proposer, and specifically proposers who have,
the ability to affect the outcomes of both of these roll-ups simultaneously.
So even if there's just one roll-up, the first roll-ups, let's say TIEG will be TICO.
So if TICO is the first roll-up that becomes a based roll-up, then it's the first roll-up
that will enjoy this additional composability with the L-1, and its users will enjoy that.
And I think that's a benefit that anyone can have by becoming a base roll-up.
And the wonderful thing is that if they join it and other people want to join it too for this reason,
then again, the benefit of being part of the same party that everyone else is joining
is going to only increase quadratically with the number of parties that join.
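The pre-confirmation-as-insurance idea from this passage might be sketched like this: a proposer posts a bond and hands out promises, and any broken promise is paid out of the bond to cover the user's loss. All names and numbers here are illustrative, not a real slashing protocol.

```python
# Toy pre-confirmation / insurance model: the proposer's bond backs its
# promises; settling slashes the bond for each promise that wasn't kept.

class Proposer:
    def __init__(self, bond):
        self.bond = bond           # stake that backs the promises
        self.promises = []

    def preconfirm(self, promise):
        # "Insurance policy": the proposer commits to an outcome.
        self.promises.append(promise)
        return promise

def settle(proposer, kept, payout):
    """kept: {promise: bool}; slash `payout` per broken promise and
    return the refunds owed to the affected users."""
    refunds = {}
    for p in proposer.promises:
        if not kept.get(p, False):
            slashed = min(payout, proposer.bond)
            proposer.bond -= slashed
            refunds[p] = slashed   # covers the user's loss
    return refunds
```

The key point from the discussion is who can credibly sell this: only a proposer able to affect the relevant rollups' outcomes, since only it can actually keep the promise.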
If I were to summarize Ben, I guess you're saying two things.
One is that a lot of these teams already on their roadmaps want to decentralize their sequencer.
and so they might as well have someone else do all the work.
The roll-up teams are experts in virtual machines.
They know how to design the virtual machines
and design fraud proofs around them
and snark proofs around them.
And they might also branch out in terms of tokenomics
and things like that.
But sequencing is not what they excel at.
Even like the simplest form of sequencing,
which is centralized sequencing, right?
The centralized sequencers, you know, go down all the time
and they're not DevOps experts.
The other thing that you're saying is that even if there's just one based roll-up, that's already a network with a network effect.
And the reason is that there's another node, which is the L1.
And it's not just any plain old node.
It's the damn L1, which has half a trillion dollars of TVL.
And so from day one, you enjoy huge amounts of network effects with based sequencing specifically,
because you have this network with two nodes,
one of which is the dominant one.
Yes, yeah.
I'll soften just the first remark,
because my personal view is that,
and I know a lot of the leaders of these other projects,
I think all these projects have brilliant people,
and they're all actually very, very capable
of building advanced technical solutions
across many different types of domains,
including building, you know,
decentralized sequencing, etc.
But we know that specialization is also important.
And are you going to want to maintain a consensus protocol, a decentralized sequencer, in addition to being the best at, you know, building your ZK VM or your optimistic VM? Down the line, you know, that may not be what you want to focus on, right?
I think that roll-ups are better off focusing on making their stacks the best stack, you know, as an execution environment.
But I think that the greater reason is just around the network effect that we're going to
have with the benefit of roll-ups joining a shared sequencer.
Right.
So one of the big concerns that roll-ups have when they're first exposed to this idea of shared
sequencing is that they have to give up MEV as a source of income.
and I have my own thesis, which is that MEV is going to go down by at least an order of magnitude,
if not two orders of magnitude relative to this other thing, which is execution revenue and execution fees.
But let's assume that I'm wrong for a moment and that MEV is a very significant part of roll-up income.
You've been working on something super interesting called MEV redistribution.
Can you talk about what it is and how do you achieve it technically?
Yes.
So I'll go back to what I said at the beginning, which is that I think of a shared sequencer, if built appropriately, and this is the way that we're building it at espresso.
So I think of a shared sequencer as a marketplace, right, whereby chains, or roll-ups, are selling proposal rights by the slot to others, whether they're the proposers of the L1 blocks or other third parties, who are creating even more value
by offering, you know, pre-confirmations and cross-chain liquidity and
atomicity and all kinds of things and more.
And so what do I mean?
Why is this a marketplace in the way that we've designed it?
Well, let's consider an example.
Okay, let's say that there are just two roll-ups, roll-up A, roll-up B.
Arbitrum and Base.
Fine, Arbitrum and Base, right?
So, you know, hypothetically, Arbitrum and Base,
then if they're the only two roll-ups on this shared sequencer,
what's going to happen is that third parties are going to bid for the right
to be the proposer for the next blocks of Base and Arbitrum.
And there's different kinds of bids that will collect.
We'll allow people to bid individually on proposing the next block of Arbitrum,
individually on proposing the next block of Base, and to bid on the pair.
So a bid on the pair is a bid on the right to produce simultaneously the next block of both Arbitrum and Base.
So let's say that the highest bid that we get for Arbitrum individually is, I don't know, five.
Let's say that the highest bid that we get for Base individually is three.
And let's say we get a bid to produce both of them together for 10, right?
Well, 10 is greater than 5 plus 3, so we have the most economic value created by allocating the bundle to whoever bid 10 for the right to produce the bundle.
But because we got the highest bid for Arbitrum at 5 and the highest bid for Base at 3, that's what they would have got if they were just auctioning off the right to propose their next slot on their own.
So we will give Arbitrum 5.
We will give Base 3.
and the surplus of two can be divided proportionally.
It could be burned.
You know, we will redistribute it in some way
so that they're getting even more than they would conceptually on their own.
So that's like the basic idea.
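The allocation rule Ben walks through can be sketched in a few lines of Python. This is a toy model for illustration, not Espresso's actual implementation; the function names and the even surplus split are assumptions (Ben notes the surplus could instead be split proportionally or burned):

```python
# Toy sketch of the bundle auction described above (illustrative only,
# not Espresso's actual implementation).

def allocate(individual_bids, bundle_bid):
    """individual_bids: dict of rollup -> highest individual bid.
    bundle_bid: highest bid for the right to produce all blocks together.
    Returns (winner, payouts) where payouts is what each rollup receives."""
    sum_individual = sum(individual_bids.values())
    if bundle_bid > sum_individual:
        # The bundle creates the most value: each rollup still receives
        # at least its best individual bid, and the surplus is split
        # evenly here (an assumption; it could be proportional or burned).
        surplus = bundle_bid - sum_individual
        share = surplus / len(individual_bids)
        payouts = {r: b + share for r, b in individual_bids.items()}
        return "bundle", payouts
    # Otherwise each rollup's slot is sold individually.
    return "individual", dict(individual_bids)

# Ben's example: Arbitrum's best bid is 5, Base's is 3, the bundle bid is 10.
winner, payouts = allocate({"arbitrum": 5, "base": 3}, 10)
print(winner, payouts)  # bundle {'arbitrum': 6.0, 'base': 4.0}
```

So each roll-up receives at least what it would have earned selling its slot alone, plus a share of the surplus of 2.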
You know, how do you estimate the contribution levels of each roll-up
to the surplus value,
or how do you make sure that they're making at least as much
as they would make on their own?
Another way to think about this, though,
is that each roll-up has the opportunity
to set a reservation price.
That's why it's really the roll-ups themselves
that are selling their block space.
What does it mean to set a reservation price?
Well, Arbitrum can bid on itself, right?
It can bid five.
It could bid six if it wants.
It could bid 10.
If it bids 10, then it won't end up
in a bundle with Base,
so it will basically exclude itself
from the potential surplus value
that can be created.
If its true value...
Is that like a tariff, Ben?
If we're using the nation state,
United Chains of Ethereum metaphor.
Is that like a tariff?
Like, you must pay at least this.
Not a tariff.
I mean, I don't think it's exactly a tariff. In the language of auctions, you know, the economic term would be a reservation price, or reserve price, because it's saying that the seller is only willing to sell above this price, right?
So what would happen if Arbitrum bids 10 on itself, right?
Well, because the true value of Arbitrum is actually five,
and nobody values the bundle more than 13,
which would be the sum of 10 plus three,
the highest bid on Base is three, then Arbitrum's bid will win its own slot.
It will be excluded from the bundle, and it's the same as if Arbitrum were just its own centralized sequencer.
And so it guarantees that it will make at least its reserve price. So if it wants its reserve price to be 10, it will only ever make 10 or more by participating in this shared economic mechanism.
And I think that this example with two parties is here to illustrate that it's very much by the slot: roll-ups are selling the proposal rights for their next slot to other parties, who are now going to, hopefully, buy up the rights for many roll-ups at once and create surplus value from that.
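The self-bid behavior can also be sketched. Again this is a toy model, not Espresso's actual mechanism; the idea is just that a roll-up's own bid acts as a floor on its effective individual price:

```python
# Toy sketch of the reserve-price (self-bid) behavior described above
# (illustrative only; not Espresso's actual mechanism).

def allocate_with_reserves(bids, reserves, bundle_bid):
    """bids: rollup -> highest third-party individual bid.
    reserves: rollup -> the rollup's own bid on itself (its reserve price).
    A rollup's effective individual bid is max(third-party bid, reserve)."""
    effective = {r: max(bids[r], reserves.get(r, 0)) for r in bids}
    if bundle_bid > sum(effective.values()):
        return "bundle"
    # No bundle: a rollup whose own reserve tops the third-party bids
    # wins its own slot, exactly as if it ran its own sequencer.
    return {r: ("self" if reserves.get(r, 0) >= bids[r] else "third_party")
            for r in effective}

# True values: Arbitrum 5, Base 3; the bundle is worth 10, so it wins.
print(allocate_with_reserves({"arbitrum": 5, "base": 3}, {}, 10))  # bundle
# Arbitrum sets a reserve of 10: the bundle bid of 10 no longer beats
# 10 + 3 = 13, so Arbitrum sequences itself and forgoes the surplus.
print(allocate_with_reserves({"arbitrum": 5, "base": 3}, {"arbitrum": 10}, 10))
```

This matches Ben's point: by overbidding on itself, a roll-up only excludes itself from the surplus; it never earns less than its reserve.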
It gets more complicated when we go from two to three.
You don't want a situation where one roll-up can bid excessively high on itself or set a very
high reserve price and ruin it for everyone else.
So the solution, in order to generalize it from two to three, where it gets a little bit more complicated, and I won't go into the details right here, but we'll have a blog post on this, is to ensure that if one roll-up sets a very high reserve price for itself, every other roll-up that sets a correct reserve price that honestly reflects its true value will enjoy the benefit of being in a bundle with each other.
Right? So one person can't cause others to be excluded from the bundle.
One roll-up can cause itself to be excluded from the bundle only.
So if I were to highlight one specific point, what you're saying is that there's a mechanism whereby, at the very least, the roll-ups are getting the value, the MEV, that they would have received if they were completely isolated from a sequencing standpoint.
And potentially, they're getting more, because there's excess MEV that's generated from the additional value of synchronous composability.
That's right.
And I also want to highlight again what you said earlier, which is that this has nothing to do
with the gas fees that roll-ups charge today.
So roll-ups are making hundreds of millions in sequencing revenue today.
That's not from MEV, right?
That's all from execution fees today, mostly.
And shared sequencers don't touch any of that revenue, because even though that's called sequencing revenue today, it's really separate from sequencing revenue, right? Sequencers make MEV. Sequencers don't make money on transactions that are executed by the individual roll-ups. They do today only because it's all, you know, conflated into this one, you know, single... All integrated, right? It's all integrated,
right? But shared sequencers separate that. So the transactions are still paying fees in the roll-up itself; they wouldn't pay it to the shared sequencer. The execution fees that they're paying, the gas fees that they're paying within Optimism or Arbitrum or zkSync or whatever, whichever roll-ups decide to be part of a shared sequencer, would be going to the roll-ups themselves, and the roll-ups can decide what to do with it, right? The value that's being captured by a shared sequencer is MEV, and MEV is not just, you know, front-running.
We still want to build shared sequencers that prevent harmful forms of MEV.
You can also think about pre-confirmations as generating MEV.
So the tips that users are willing to pay for atomic execution across two different roll-ups,
the, you know, what users are willing to pay to take advantage of arbitrage between AMMs on different roll-ups,
there's all kinds of things that people would be willing to pay for that goes into what we give the umbrella term MEV to.
And that's what's being captured and then redistributed.
Mantle, formerly known as BitDAO, is the first DAO-led Web3 ecosystem,
all built on top of Mantle's first core product, the Mantle network,
a brand-new, high-performance Ethereum Layer 2, built using the OP stack,
but uses Eigenlayer's data availability solution instead of the expensive Ethereum Layer 1.
Not only does this reduce Mantle network's gas fees by 80%,
but it also reduces gas fee volatility,
providing a more stable foundation for Mantle's applications.
The Mantle Treasury is one of the biggest
DAO-owned treasuries, which is seeding an ecosystem of projects from all around the Web3 space
for Mantle. Mantle already has sub-communities from around Web3 onboarded, like Game7 for Web3
gaming and Bybit for TVL and liquidity and on-ramps. So if you want to build on the Mantle network,
Mantle is offering a grants program that provides milestone-based funding to promising projects
that help expand, secure, and decentralize Mantle. If you want to get started working with the
first DAO-led layer-2 ecosystem, check out Mantle at mantle.xyz and follow them on Twitter at
0xMantle. Are you launching a token? Is it already live? How are you managing the legal and tax
obligations for providing token grants to your team? It's no secret that token management gets
complicated. Between learning all the legal language and tax obligations in every country that
your team is in, token grant management can feel like an obstacle course. But it doesn't have to.
That's where Toku steps in. Toku provides practical tools to handle token grants, allowing for effective
oversight of token distributions and payroll tax compliance for employees, contractors, advisors, and
investors. They also handle tax withholdings through their real-time tax calculations that can be done
by Toku or integrated into any payroll or EOR provider in any jurisdiction. Toku is a trusted provider
of Protocol Labs, dYdX Foundation, Mina Protocol, and many more. Get started for free and make
token compensation simple at toku.com/bankless. Arbitrum is the leading Ethereum scaling
solution that is home to hundreds of decentralized applications. Arbitrum's technology allows you to
interact with Ethereum at scale with low fees and faster transactions.
Arbitrum has the leading DeFi ecosystem, strong infrastructure options, flourishing NFTs, and is quickly becoming the Web3 gaming hub.
Explore the ecosystem at portal.arbitrum.com.
Are you looking to permissionlessly launch your own Arbitrum orbit chain?
Arbitrum orbit allows anyone to utilize Arbitrum's secure scaling technology to build your own orbit chain, giving you access to interoperable, customizable permissions with dedicated throughput.
Whether you are a developer, an enterprise, or a user, Arbitrum orbit lets you take your project to new heights.
All of these technologies leverage the security and decentralization of Ethereum.
Experience Web3 development the way it was always meant to be.
Secure, fast, cheap, and friction-free.
Visit arbitrum.io and get your journey started in one of the largest Ethereum communities.
It's everyone's favorite season in crypto, tax season.
And crypto tax is always an absolute headache, especially for all you degens out there.
But it doesn't have to be a nightmare.
That's where crypto tax calculator comes in.
The software built for degens, by degens.
As Coinbase's official global tax partner, Crypto Tax Calculator focuses on making complex transactions into easy ones, supporting over 300,000 currencies across Ethereum, Arbitrum, and Optimism, as well as 1,000 other integrations. It's as simple as connecting your wallet, pulling in all your transactions, and following the automated suggestions to quickly and accurately calculate your tax obligations. Plus, for all the airdrop farmers out there, Crypto Tax Calculator has your back, as they are consistently adding support for new and upcoming layer 1s, layer 2s, and all the airdrops that you're currently farming.
2024 is the year when the degens do their crypto taxes with speed and confidence.
Make taxes this year easy and affordable with crypto tax calculator.
Sign up at cryptotaxcalculator.io and get a 30% discount with code Bank 30.
Click the link in the show notes for more information.
I want to talk about this reserve price just to try and get more of a mental model around it.
So if Arbitrum sets a reserve price of 10 or 2 or 100, is the significance of that reserve price just Arbitrum, or any other layer, raising or lowering the threshold of how much they need to be paid in order to participate in this marketplace, this double, triple, quadruple coincidence of wants?
So if somebody is increasing the number, that's them interpreting how valuable their transactions are, and they're setting a threshold for how much value they need to retain in order even to be included in this marketplace, correct?
Yep.
Yeah.
Okay.
So, like, maybe a tariff is, like, kind of a rough, rough analogy, rough metaphor.
But I think where I was going with that is, like, this is the sovereignty of one particular
chain, making sure that they are receiving what they perceive to be their due amount of value
before they are okay with being interoperable with other chains.
That's kind of like where I was going with that metaphor.
Maybe that doesn't stand.
Maybe you don't like that.
It makes sense to me.
I like it.
Yeah.
I tend to be like over precise.
So I think I do like the analogy.
Okay, cool.
While we're on the topic of sovereignty, is there any loss of sovereignty that might
happen beyond economic sovereignty?
So you've talked about how economically it does make sense to connect to this network.
But is there something else that maybe is lost when we move to shared sequencing?
I mean, my personal perspective is that there is no loss of sovereignty.
And this is something that, you know, roll-ups, when they're first exposed to this concept, are worried about.
But really, there's so many things that roll-ups can do, right?
They can set their tokenomics.
They can choose what their governance is.
They can choose all the details around their virtual machine.
They can choose how they do bizdev.
They can choose how they allocate their treasury to public goods.
They can do all of these things.
and nothing is really compromised.
But I'm curious from your perspective,
is there something that they really lose
from opting into shared sequencing?
Yeah, I mean, my honest perspective is that this is net positive. But I'll tell you some of the concerns that I'm hearing, and I think these concerns need to be considered seriously.
I think one of the concerns that I hear is that,
you know, shared sequencing could create a network effect,
whereby at some point, leaving the shared sequencer
could be very harmful for that roll-up.
If everyone is already used to the idea that being part of a shared sequencer has all these benefits, and shared sequencing does have the ability to create this strong network effect, then, you know, maybe chains could lose sovereignty to some degree, because they are creating a reality where there's a strong network effect around something that's not just them, right? But the way that I think about this is that it's making Ethereum more valuable, and Ethereum is not an individual roll-up, right? So it could have that effect: if the downsides to being on a different roll-up, for example, are now lower, because Ethereum is a shared sequencer for all roll-ups, it could make it less likely that there's one roll-up that wins, for example. If you believed that you were going to be the only roll-up and that there would never be any other roll-up, then maybe it is better for you not to participate in shared sequencing.
I just don't think that that is a realistic outcome. I don't think there's going to be one
roll-up. I think there's at least going to be, you know, multiple. And I think that it's better
for roll-ups to focus on being the best roll-up stack and playing
to their unique advantages.
I think that there's many unique advantages
of these different types of ecosystems.
For example, ZK roll-ups have a very unique advantage
of being able to enable all kinds of bridging
and synchronous composability that can happen
on top of a shared sequencer.
You can take a look at AggLayer, the proposal from Polygon, whereby proofs can be used to basically pass messages between chains, which requires a degree of coordination that can come from shared sequencing.
So these things are very, very nicely composed with each other.
zkSync has this beautiful idea of roll-ups sharing a contract, so sharing basically a bridge,
so that you can bridge assets easily from one roll-up to the other without going through the
L1.
This is also complementary to shared sequencing.
So I think that for roll-ups, focusing on their own unique advantages, building the best stacks, and getting customers, because they're the portal to customers, right, they're the portal to chains, rather than trying to just be the only roll-up out there, is a better move.
But I just want to acknowledge the concern that I'm hearing.
And I think it's valid; it's coming from a valid place.
I think the other thing that I hear as a risk or downside of shared sequencing is that it
will create a hyper-centralized builder market.
I think that the concern is really not over shared sequencing, even though people think it is, but rather over what we actually want, which is this: if you make Ethereum more valuable, it's really acting more like one chain, right?
then there will naturally be parties that rise to influential roles through competition.
There will be some party that's able to create the most economic value,
you know, across all these different roll-ups that are now more interoperable with each other.
And if the roll-ups were more isolated from each other and those interactions were not possible,
then, you know, maybe there wouldn't be one party that rises to that role.
But I think we should find ways to address this more directly rather than shying away from basically creating more economic value overall for everyone.
We should address those concerns.
How do you avoid having parties individually gain a ton of influence?
How do you avoid censorship?
And I think that we are addressing those through the design, at least of the espresso shared sequencer.
I know that's something that Justin wanted to touch on as well. So I'll pause there.
Right. So I guess
on the first point that you brought up, you're saying
that it's very unlikely that we're going
to have just one single
roll-up that's going to win everything.
The way that I think about it is there's
going to be a very rich long tail
of virtual machines. There's going to be
virtual machines that are
specialized for gaming, those that are
specialized for trading,
those that are specialized for
whatever it is. And
in some sense, shared sequencing
is all about diversity, right?
It's all about saying we won't have one single virtual machine
that will win everything.
And it's not just about diversity of virtual machine,
but it's also diversity of the tokenomics
and the public goods funding and everything else that comes
with the package of a roll-up.
But the other thing that you're saying, I guess,
is that the L1 will always exist, right?
Yes, yes.
And so there's going to be some amount of assets
that might never migrate from L1.
Think of ENS, for example.
The root of truth of ENS might always be the L1,
just for historical reasons and because of maximum security.
The same might be said for large whales.
They might always choose to just have their treasuries
or whatever it is on L1.
And so in some sense, we had this really awkward plan before
where we would migrate everything asynchronously
from the L1 to the L2,
and in the process break network effects.
But now we have this, in some sense, better plan,
which is that we can smoothly migrate assets from L1 to L2
and everything stays synchronous
and we won't be breaking network effects.
Now, on that point of composability with the L1,
there was some sort of a technical shift
from Espresso's perspective.
And, you know, I alluded to this
at the very beginning, that there might be some sort of a pivot or an announcement. And I want to
give you the opportunity to voice that. Yeah. So I mentioned it at the beginning, but originally
we were thinking of, you know, espresso as just a, you know, replacement for centralized
sequencers, a shared decentralized sequencer that would have participation from ETH
restakers through EigenLayer. And we thought that this was different
from the narrative that you were describing as based roll-ups.
But I think that as we got to understand based roll-ups more,
and as the concept of based roll-ups evolved
to also try to introduce things like pre-confirmation layers
on top of based roll-ups,
and as we came to define, from a higher-level perspective, what the things are that we really are trying to get from based sequencing, right? Better security, you know, liveness inherited from the L1, and most importantly, composability with the L1, so that the L1, Ethereum itself, the EVM, can have synchronous interactions with roll-ups, not just roll-ups between each other.
We realize that those are properties that we can achieve while retaining the unique advantages
of the espresso design.
And so that's what we've done now with the design. In hindsight, it only required some subtle changes, but I think that the impact is massive.
And basically what we're doing now is, in addition to having this, you know,
high-throughput consensus protocol that can be run by ETH restakers, we also allow the proposer to be the L1 proposer.
Now, the way that we do this is, after running this auction that I described, whereby roll-ups are basically allowing third parties to bid on joint block space from the roll-ups, we give the L1 proposer of the next L1 block what I like to call a right of first refusal.
So the L1 proposer has the option to purchase proposal rights from any individual roll-up or even purchase the winning bundle that comes out of this auction.
So let's say the outcome of the auction is a bundle of all the roll-ups that's going to be produced or a collection of different bundles.
So some partition of the roll-ups into bundles.
The L1 proposer can come and buy the rights for all those bundles at the winning bid price.
So it still has to pay for it.
We still get this MEV redistribution.
We still capture the MEV in the protocol.
Some of it goes, of course, to the L1 proposer,
mainly the surplus value that the L1 proposer is now creating
by enabling more interactions with the L1 itself.
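The right of first refusal can be sketched as a small settlement step on top of the auction. This is a toy model for illustration; the function and data shapes are assumptions, not Espresso's actual implementation:

```python
# Toy sketch of the L1 proposer's "right of first refusal" described
# above (illustrative only; names and structure are assumptions).

def settle(auction_winners, l1_proposer_accepts):
    """auction_winners: list of (bundle_of_rollups, winning_bid) pairs,
    the partition of roll-ups into bundles produced by the auction.
    l1_proposer_accepts: the bundles the next L1 proposer chooses to buy
    at the winning bid price."""
    allocation = {}
    for bundle, bid in auction_winners:
        buyer = "l1_proposer" if bundle in l1_proposer_accepts else "auction_winner"
        # Either way, the winning bid is paid, so the MEV is still
        # captured and redistributed by the protocol.
        allocation[bundle] = (buyer, bid)
    return allocation

winners = [(("arbitrum", "base"), 10), (("taiko",), 4)]
# The L1 proposer exercises its option on the Arbitrum+Base bundle only.
print(settle(winners, [("arbitrum", "base")]))
```

The key property is that the proposer's option doesn't reduce what the roll-ups receive; it only changes who ends up holding the proposal rights.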
I describe this in a post in greater detail.
I think it will be going a bit too much into the weeds to describe how this is done,
but we still retain the fast finality that comes from the espresso sequencer,
but it becomes this conditional finality conditioned on published L1 blocks.
So roll-ups now know that certain transactions are final unless some published Ethereum L1 block reorgs.
Okay, let's talk about this finality, because that's a whole new topic.
But I just want to very briefly summarize what you just said,
which is that each L2 individually generates some value.
But once we consider the L2s together,
there's like this excess value that's created.
And that's all well and good.
But now in addition to the L2s, the pure L2s,
the L1 proposer can join the game and create even more value on top of that.
And we're basically giving the option for every L1
proposer to play this game and add even more value when they can.
Now, the other thing that you alluded to is this idea of fast finalities.
Justin, can I also just kind of do my attempt at kind of explaining the topology of the
network that I'm seeing here?
So we have the base Ethereum chain, kind of thinking of this as just like, you know, the planet,
and then we have, you know, some ball at the very center when it's got all got all of its
validators.
And then we have the vertical layer two's spawning off of the base chain.
And so, you know, multiple layer twos are horizontally scaling.
We have layer twos individually, which are vertically scaling.
And then we have this espresso middleware layer between the layer twos,
which is like this higher level plane that is producing this double triple coincidence, you know,
quadruple coincidence of wants between two, three, four different rollups.
And so, like, kind of how we were talking about, Ethereum is this minimum-viability composability because it's all the way down at the bottom of this stack. We have this new level
of composability that espresso is establishing at a higher plane between the layer twos. So we have these
vertically spawning layer twos coming out of this base chain that is Ethereum. And then we have a
higher plane that is producing composability between all of these layer twos. And that was like what
espresso has been building, is building, is continuing to build. But what you are announcing,
what you're talking about,
your not pivot,
but like additional scope for espresso
is that when layer one proposers
are also integrated into espresso,
when they are also becoming a shared sequencer,
opportunistically, whenever some layer one block proposer,
Ethereum validator is proposing a block,
and that proposer is also part of the shared sequencer that is espresso.
Then we also get to include Ethereum
as one of these marketplaces
that produces coincidences of wants
in the shared system.
And it's not every single block
because not every single proposer
is going to be a part of espresso,
but when they are,
they can join the marketplace
that espresso is establishing higher up
across the layer twos.
I think that's kind of like my version
of explaining this whole thing.
Yeah, that was said exceedingly well, yes.
That was spot on.
And just to come back to this,
even though it might look like a subtle architectural change, I think the impact is massive, right?
Because now roll-ups that are running on espresso are based roll-ups, right?
Now, it can be a choice that roll-ups make.
So if a roll-up, for example, is running on the espresso sequencer and decides that it wants to build its state off of an old Ethereum state, one that has already been finalized by Casper, for example:
The impact of that would be that bridge transactions would have to be delayed.
So they would be queued and would be delayed and would not materialize inside the roll-up for, I don't know, say 15 minutes, if they're waiting all the way for Casper finality.
And they wouldn't benefit from the fact that we're involving the L1 proposer in the espresso protocol. On the other hand, the benefit to that roll-up would be that it enjoys the fast finality that comes from the espresso pre-confirmation layer, this BFT finality gadget called HotShot that is being run by restaked, you know, nodes. They would get very fast finality from that, right?
There would be no risk of transactions being reversed if Ethereum reorgs,
because they're building off of a finalized Ethereum block.
On the other hand, what they would be missing out on is the opportunity for an L1 proposer to build simultaneously the next Ethereum block and their block and enable atomic interactions between them.
So based roll-ups would choose to use espresso in a mode where they are building off the latest Ethereum state, and then espresso is giving them finality conditioned on published Ethereum blocks.
It's a slightly different type of finality, but it's still very strong, and it achieves a very nice balance between the additional composability you get with the L1 and the finality that users want.
And you can make up for the difference with insurance.
So pre-confirmations are basically an insurance that proposers sell to users to remove their
risk.
So just to translate what Ben says
in my own words,
there's two flavors of finality
from the perspective of an L2.
You can have finality
which is unconditional
and you can have finality
which is conditional
on the L1
reorging
or not reorging,
I should say.
So if you want to have
synchronous composability
with the L1,
there actually is a cost
which is that
what if you make
a pre-confirmation
that depends on the state of the L1
and then the L1 itself kind of rugs you,
the L1 reorgs.
Now, one of the things that we're working on
is this idea of single slot finality,
whereby the L1 can't reorg very, very deeply.
And more specifically, what single slot finality gives us, at least in the latest design, is a reorg depth of three slots.
So even though we have a block which is finalized at every single slot, there is a little bit of latency, a little bit of delay, and this delay is three slots. So that's 36 seconds, and that's as much as could be reorged, you know, in some of the worst cases possible. Now, one of the things that Ben, you know, keeps on alluding to,
and I really want to dig into that, is this idea of fast L2 finality. So right now on L1, we have these 12-second
slots and it takes several slots to reach finality.
So it might take, let's say, half a minute to reach finality.
But what if you are doing pure L2 transactions?
So L2 transactions that don't touch the L1, like can we have basically shorter block times
and faster finality for those pure L2 transactions?
Yeah, we can.
So the way that that works, and this is unique to Espresso's version of based sequencing.
Let's look at what vanilla based sequencing would look like, where we're only adding L2 transactions through published L1 blocks.
Okay.
So let's say we have a block, Block 12 on Ethereum, okay?
And this includes some L2 transactions, some L1 transactions.
After Block 12 is published, we know the state of Ethereum, L1, we know the state of these roll-offs.
Now, the next block, Block 13, is under construction.
There's some proposer for the L1 that's producing it,
and it's also adding transactions to the L2.
Well, these transactions that are being put into this Block 13,
even after they're published, let's say,
are not final conditioned on Block 12.
They're final only conditioned on Block 13.
So if Block 13 were to be reorged or were missed
or just lost and some other blocks were produced in its slot building off block 12,
all the transactions included in Block 13 would go away.
They would be gone.
This is something that in today's architecture with centralized sequencers doesn't happen
because the smart contract gives sequencing rights to a centralized sequencer
who is able to say, these are going to be in this roll-up,
no matter which way Ethereum reorgs.
Espresso is able to do the same thing.
So the way that pure L2 transactions are added is through the Espresso BFT finality gadget.
Even an L1 proposer who is adding on transactions
is going to get them finalized first through this BFT finality gadget,
which we can think of as a flavor of pre-confirmations too.
It's something that's happening between all the L1 validators that have opted into this protocol and are participating in it.
And you can, even before this block 13 is published, continue to build on the L2 states
after the published Block 12 that will be final no matter what is published next on
Ethereum.
As long as Block 12 doesn't reorg, those transactions will be included on any fork of
Ethereum because they're built off the L2 state.
They're not affected by what happens after Block 12.
You know, they happen logically before whatever comes after Block 12, and they're final conditioned on it.
That's one of the unique contributions of Espresso's design of based sequencing that I think is very, very valuable.
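Ben's Block 12 / Block 13 example can be condensed into a toy model (an assumption-laden sketch, not Espresso's actual code):

```python
# Toy model: L2 transactions finalized by the BFT gadget are conditioned
# only on the last *published* L1 block (Block 12), so a reorg of the
# in-flight Block 13 drops Block 13's own payload but not the
# BFT-finalized L2 transactions.
def surviving_l2_txs(bft_finalized, in_block_13, block_13_reorged):
    # BFT-finalized txs build on the state after Block 12; they can be
    # replayed on whichever fork extends Block 12, so they survive either way.
    survivors = list(bft_finalized)
    if not block_13_reorged:
        survivors += list(in_block_13)
    return survivors

bft = ["l2_tx_a", "l2_tx_b"]   # finalized between L1 blocks via the gadget
blk13 = ["l2_tx_c"]            # conditioned only on Block 13 itself
assert surviving_l2_txs(bft, blk13, block_13_reorged=True) == ["l2_tx_a", "l2_tx_b"]
assert surviving_l2_txs(bft, blk13, block_13_reorged=False) == ["l2_tx_a", "l2_tx_b", "l2_tx_c"]
```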
Especially considering that you're going to have, I mean, the natural pattern is that you're going to have updates on the L1, and then you're going to have a flurry of activity on the L2, and then you have another update on the L1.
Now, importantly, we also enable this synchronous interaction.
So whoever's constructing Block 13 can also, once we have real-time proving, which is coming, I think, I truly believe that real-time proving is coming, 100-millisecond-latency proving is coming.
Just as a piece of context here.
Like, Ben is a hardcore cryptographer, trained as a professional cryptographer and has made a lot of contributions specifically on snarks and folding schemes and recursive proofs.
Yeah, my side hobby is writing papers on snarks, but...
So, same.
I love that in my spare time.
But once we have real-time proving,
then you'll be able to have deposits from the L1 into the L2
and withdrawals to the L1 within the span of one Ethereum block.
And that's only what based rollups can do.
So we can still do that,
but we can also have between published Ethereum blocks
many L2 transactions finalized, conditioned on this already published thing.
Yeah, so that's the idea.
So with based sequencing plus real-time proving, to give one example, that means you could take a flash loan on the layer one, do some activity on the layer two, and then pay it back on the layer one inside the same block.
Is that what it unlocks?
Once we get real-time proving, we'll be able to do that.
Okay.
That seems like a very high bar. It seems like the gold standard of composability.
Yeah.
And this is going to, I mean, this is going to happen.
And there's many projects out there that are working on ASICs for proving.
Also, with the innovations in recursive proving, you really only need low latency on the construction of this final proof that summarizes all the individual proofs that happen.
So it's getting a little bit into the weeds.
But if you're the L1 proposer and you're constructing an L1 block and also the L2 blocks at the same time, you will make some L1 state transitions, so you'll have some L1 transactions to the EVM, where you'll maybe do a deposit.
You'll pass a message along with a proof, but this proof is not small. It's very large, and it was really easy to produce.
And it doesn't need to be small because it's just in your head as you're constructing this.
You pass that message and this proof to the L2.
You do some L2 transactions.
You pass another message to the L1 that does a withdrawal. It also has this large, easy-to-construct proof that's still just in your head.
And at the end, you take all these proofs that were just in your head, and you summarize them
into one small proof.
And it's that final summary, that final compression of these large, in-your-head, easy-to-produce proofs
that needs to be done in like 100 milliseconds.
And we're going to have the hardware to do that soon.
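A sketch of the recursive-proving flow Ben walks through (all interfaces here are hypothetical; real systems use SNARK recursion or folding libraries):

```python
# Illustrative sketch (hypothetical interfaces): during block construction
# the proposer produces large, cheap "in your head" proofs for each
# L1 <-> L2 step, and only the final recursive compression into one small
# proof needs to be fast (~100 ms), which is what the proving hardware
# discussed here targets.
def prove_step(step: str) -> dict:
    # Large but easy to produce: generated incrementally as the block is built.
    return {"step": step, "size": "large", "cost": "cheap"}

def compress(proofs: list) -> dict:
    # The only latency-critical operation: recursively fold all step proofs
    # into a single succinct proof that goes on chain.
    return {"covers": [p["step"] for p in proofs], "size": "small"}

steps = ["l1_deposit", "l2_swap", "l1_withdrawal"]
final = compress([prove_step(s) for s in steps])
assert final == {"covers": ["l1_deposit", "l2_swap", "l1_withdrawal"], "size": "small"}
```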
Okay.
Understanding, you know, at a high level how these systems work, somebody here, in the actual running of the hardware, somebody here is extremely well-networked with extremely strong computational resources.
Right?
Like, that's just my intuition.
Does that check out?
Yeah.
But that's also something that, you know, it can be outsourced.
So if I'm the proposer, I don't necessarily need to be extremely sophisticated if I can outsource some of this job to someone else.
But I think, you know, and this goes back to how we were saying, how we were talking earlier about how someone becomes a proposer.
It's best done through some kind of economic mechanism that assigns the proposal rights to those who are able to create the most value.
You want to do it in a way that they don't have a monopoly.
It's easy.
It's competitive.
There can be multiple of them.
Maybe we should talk about censorship next.
But I think that the entities that are going to be producing these blocks will be from a more sophisticated subset.
But it will be done in a way that doesn't, you know, result in censorship or centralization.
In trust, right?
Yeah.
Yeah.
And this is basically all the research that we've done with proposer-builder separation. The proposer can run on a Raspberry Pi with a home internet connection and still tap into the most sophisticated sequencer and builder markets in the world. And sure, there's extreme sophistication on the other side of PBS, but that is segmented away and isolated so as to not corrupt the decentralization of the proposer set. And one of the very important things is that these builders and sequencers can't censor, through mechanisms like inclusion lists and encrypted mempools. And David,
you mentioned the flash loans and this is something I want to highlight as being the litmus test
of essentially perfect composability. If you can do a flash loan across
roll-ups, then you've solved it.
And we can do that.
So we can have, for example, a flash loan of a million ETH that originates at L1.
You deposit that into Rollup A.
You buy some tokens.
You send all of those tokens to Rollup B.
So you withdraw from A, deposit into B.
And then you sell those tokens for, let's say, a million and one ETH.
And then you pay back the million ETH at L1 and you make a one-ETH arbitrage profit, all of this as a flash loan.
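The arbitrage Justin describes nets out like this (toy numbers from his example; no real bridge or rollup APIs are used here):

```python
# Toy ledger of the cross-rollup flash loan: every leg happens atomically
# inside one L1 block, so the loan is repaid in the same block it was taken.
def flash_loan_profit(loan_eth: int, sale_eth: int) -> int:
    eth = 0
    eth += loan_eth   # 1. flash-borrow on L1
    eth -= loan_eth   # 2. deposit into Rollup A and buy tokens
    eth += sale_eth   # 3. withdraw to Rollup B, sell the tokens there
    eth -= loan_eth   # 4. repay the loan on L1, same block
    return eth

# Borrow a million ETH, sell the tokens for a million and one:
assert flash_loan_profit(1_000_000, 1_000_001) == 1  # one-ETH arbitrage profit
```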
And this is not a research problem.
This is an engineering problem.
Is that where we are in this development roadmap?
Correct.
This is purely an engineering problem.
And just a few weeks ago, we've had this announcement from Axial, which is a manufacturer of snark-proving ASICs, that they have their first ASIC. It's physical, we can touch it, and there's actually photos on the internet that you can go and find. And not only that, but there's two separate projects that are also looking to build a snark-proving ASIC for 2024, that's Fabric and Cysic.
And once we have that,
it's a massive unlock.
And one of the things to highlight here is that the snark-proving ASIC doesn't have to be trusted for safety, right? Like, the whole point is that the snark proofs that are generated are totally trustless.
They're like mathematical proofs.
The worst thing that could happen is that for some reason,
all the ASICs from all the manufacturers just suddenly blow up and combust.
And then now you have to fall back to things like GPUs and CPUs,
which have higher latency.
But if we're in a position where we have diversity of manufacturers,
just like we have diversity of clients,
then there's no reason why they will all suddenly blow up.
Now, another thing that I want to touch on is basically this question of finality and ux.
So I had this opinion that it's actually okay for the L1 to have long block times and for finality to take a long time.
So you can think of the L1 as being this ultra strong and decentralized settlement layer.
and that means that finality takes a little bit longer than other chains can provide.
But the reason why I think this is acceptable is because you can have pre-confirmations
that are on the order of 100 milliseconds.
So you have the best of both worlds, the ultra-strong security plus the 100-millisecond latency from a UX standpoint.
But what Ben is saying is that you can have even more than that.
You can have this low latency finality specifically for transactions that are pure L2, meaning that they don't touch the L1.
If they start touching the L1, then yes, you start suffering, you know, maybe half a minute of latency
for achieving finality.
But if you're pure L2, then you have, you know, one second, two second types of latency for finality.
And in all cases, you still get the fastest pre-confirmations from the proposers, which can be 100 milliseconds, but likely backed by less economic value.
One final topic that I think is worth discussing is this idea of censorship resistance.
You know, we mentioned that there is potentially this hypercentralized builder and pre-confirmer
market.
But I think, you know, in our discussions, you've alluded to a potential clever solution,
some sort of mitigation to that problem.
Do you want to walk us through that?
Yes. It turns out to be the same solution that enables this MEV redistribution.
So we were talking earlier about how we have this auction
that assigns roll-up block proposal slots
to shared proposers.
And I say shared proposers with an S plural
because there's one special outcome where there's
one individual proposer who bids the highest
on all these different roll-ups,
but we could also have multiple proposers who jointly partition the roll-up space into a collection of different bundles.
And that's very important when it comes to talking about censorship concerns.
So one of the concerns that we hear people talking about is, well, I'm worried that if there's only one proposer that is going to be proposing for all roll-ups, then they will just ignore my roll-up because it brings them too much risk for some reason, maybe it's some jurisdictional reason. And that's concerning, right?
We'll be left behind. Well, if you are finding the most efficient economic allocation
of roll-ups to multiple proposers, then that will not happen. Because the superbuilders,
let's say who are only interested in these five roll-ups and they're just simply not interested
in touching any transactions on this other roll-up will not bid on this other roll-up. They'll bid on
the bundle of these five roll-ups, and somebody else will bid on this other roll-up. So as long as
there is always a market for this other roll-up. And in fact, there's a special case: it can be that roll-up itself, right? So if you were going to run your own centralized sequencer or your own decentralized sequencer solution, then there already exists a market for running your roll-up, and that's not taken away when you join this shared sequencer mechanism, right?
And this is only possible with a solution that doesn't force one proposer on all the roll-ups,
but rather finds the most efficient economic allocation.
So there is this additional censorship benefit.
I think that a lot of people are missing that, so I'm glad that we got to talk about that
today.
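One way to picture the bundle auction described above (a deliberately brute-force toy; real combinatorial auction winner determination is much more involved):

```python
# Minimal sketch (hypothetical, greatly simplified winner determination):
# bids are placed on *bundles* of rollups, and the auction picks the
# disjoint set of bundles with the highest total value. A rollup nobody
# else wants still goes to whoever bids on it alone, possibly the rollup's
# own sequencer, so it is not left behind.
from itertools import combinations

def best_allocation(bids):
    # bids: list of (bidder, frozenset_of_rollups, value). Brute force over
    # all disjoint combinations (fine for a toy; NP-hard in general).
    best, best_value = [], 0
    for r in range(1, len(bids) + 1):
        for combo in combinations(bids, r):
            covered = [b[1] for b in combo]
            disjoint = sum(len(c) for c in covered) == len(frozenset().union(*covered))
            if disjoint:
                value = sum(b[2] for b in combo)
                if value > best_value:
                    best, best_value = list(combo), value
    return best, best_value

bids = [
    ("superbuilder", frozenset({"A", "B", "C", "D", "E"}), 100),
    ("rollup_f_itself", frozenset({"F"}), 5),  # ignored by others, bids on itself
]
alloc, value = best_allocation(bids)
assert value == 105 and len(alloc) == 2  # rollup F is still allocated
```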
Right.
So there's two different kinds of censorship resistance that we care about.
The first one is around making sure that transactions get included on chain.
And for that, we have this really neat solution called inclusion lists.
So you can always get your transactions on chain, even if it's some sort of censored roll-up.
Yes.
Now, the other concern that we have around censorship resistance is censorship resistance of the pre-confirmations. Like this 100-millisecond UX. We want to make sure that everyone has access to that. And what you're saying is that if we have a super sophisticated pre-confirmer that happens, for, let's say, regulatory reasons, to not be able to provide pre-confirmations for one specific roll-up, well, that's fine. What we can do is we can have a second pre-confirmer that is going to step up and say, I will provide pre-confirmations for this censored roll-up.
And sure, like...
Somebody will bid on it, yes.
Somebody will bid on it, and so you still have this 100-millisecond UX. And the reason why it's compatible to have two simultaneous pre-confirmers is that they're actually acting on disjoint pieces of state, so whatever pre-confirmations they individually give will never conflict with each other.
Correct, right.
It kind of sounds like maybe there's a similar pattern here to the, like, meme that I brought up earlier about
Ethereum, the layer one, as the foundation of composability.
And it's also the foundation of censorship resistance.
And with additional mechanisms that high watermark, that tide level increases up the stack
for censorship resistance properties, we just kind of need to, you know, build it and integrate it. It sounds like a similar pattern I'm sussing out here.
Yeah.
No, yeah.
I think so.
Guys, this has been an immensely educational episode, I think, just with the overall direction
and also some of just like the more down the rabbit hole details.
Lots of things, lots of positive things to get out of this.
Is there anything that we lose?
Like blockchain systems are systems of tradeoffs.
What are we trading off to get some of this stuff?
Maybe, Justin, you can talk about some high-level patterns
and then we can go to Ben for some more nuanced details.
Sure, yeah.
I think it is important to talk about potential downsides of Espresso's approach to counterbalance some of the upsides.
Two that come to mind are, number one, we're introducing an honesty assumption, right? So in order for this off-chain, fast L2 finality to happen, we're assuming that half of the Espresso validators are online and honest. I guess one of the questions is, what happens if they go offline, what happens if they're dishonest? And then I guess the second kind of potential downside is the fact that we're dealing with restaking here, and that might come with risks that are associated with restaking.
So throwing the ball to you, Ben, to address the downsides.
Yes.
And I think that when we talk about downsides, I think that downsides are sort of a relative term.
right? So we should be looking at, well, downsides compared to something else.
So I look at it more as trade-offs. What are the trade-offs in the design space, right?
And certain trade-offs may matter differently to different, you know, people.
So it's just good to be informed about the trade-offs.
Espresso, you know, sits between, I guess I would say, the pure, original idea of based sequencing and the centralized sequencers that roll-ups use today, in the sense that, when it comes to, I guess, liveness, right, it does involve this additional BFT protocol that has authority over updates to the roll-up smart contract. And so even the L1 proposer who's constructing the next L1 block can't just stick transactions into the roll-up contract on its own. It can do it through inclusion lists, but it can't just update the roll-up contract on its own. It has to get not the approval of a centralized sequencer, but the approval of the BFT gadget that is being run through Espresso. And so the concern, or the tradeoff, that Justin was pointing out is what happens if the BFT gadget is not live, right? Now you have this other thing that
can stall progress.
And so that doesn't mean that everything loses liveness entirely, because what you can do is, if the BFT protocol is not live for a certain amount of time, then you can just allow the L1 proposer in a future slot to take over and inject transactions.
And this is similar to the design of forced include transactions
or escape hatches for roll-ups,
because the same concerns arise with a centralized sequencer that, you know, may go down.
But that ends up being a trade-off, right?
Because while you don't lose liveness entirely, there's still a chance that in some slots,
even if the L1 proposer of Ethereum is ready to go, it will not be able to make progress on the L2
because the BFT protocol is down, right?
That's a trade-off.
The benefit, of course, is what we already discussed, that now you can get this high throughput and fast finality for pure L2 transactions, the same benefits that you get from a centralized sequencer today.
So that was the first concern that you brought up, Justin.
Yes, and let me just summarize this one before we move on. What you're saying is that if the BFT consensus protocol loses liveness, let's say more than one-third of the Espresso validators go offline.
Well, for some period of time, the roll-ups can't advance because they need the certificate of
finality from Espresso in order to move on. But what we can do is basically have a timeout. So if we've detected that the BFT has been down for, let's say, five minutes, well, we just turn off all the goodies that Espresso provides.
So there's no more MEV redistribution. There's no more fast L2 finality. And we kind of fall back to this plain vanilla based sequencing,
which is maximally robust and maximally live,
but doesn't have these additional goodies.
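The fallback just summarized can be sketched as a tiny state check (the five-minute timeout is Justin's illustrative number; everything else here is a hypothetical model, not Espresso's contract logic):

```python
# Toy model of the liveness fallback: rollup updates normally require an
# Espresso finality certificate, but after a timeout without BFT progress
# the contract lets the L1 proposer advance the rollup directly, trading
# the fast-finality goodies for liveness.
TIMEOUT_SECONDS = 5 * 60  # e.g. five minutes without BFT progress

def may_update_rollup(has_bft_certificate: bool, seconds_since_bft_progress: int) -> bool:
    if has_bft_certificate:
        return True  # normal path: fast L2 finality via the BFT gadget
    # fallback path: plain vanilla based sequencing after the timeout
    return seconds_since_bft_progress > TIMEOUT_SECONDS

assert may_update_rollup(True, 0)
assert not may_update_rollup(False, 60)    # BFT briefly down: wait
assert may_update_rollup(False, 10 * 60)   # timed out: L1 proposer takes over
```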
Well, you don't turn off all the goodies necessarily
because the BFT gadget is separate from the MEV redistribution mechanism, which is assigning the proposer. So you still have that MEV redistribution mechanism, because this is an auction that's being run to determine who gets to propose. And that might be bought up by the L1 proposer. It's the BFT gadget that's being used for this fast finality property that we get from centralized sequencers.
And if that goes down for some period of time, you can turn it off.
And you won't get fast finality, but at least you can make progress.
Okay, interesting.
That's a detail I didn't realize.
So you're saying that the redistribution aspect only relies on the L1.
It's some sort of L1 auction that's happening.
It does not rely on the BFT.
And so even if the BFT is broken, you still have the redistribution.
Right.
Okay, perfect.
So I guess the second topic is risks around restaking.
Risks around restaking, right? Maybe elaborate first and then I can comment. You know, explain your concern around restaking and then I can add color.
Right. So there's two, I guess,
classes of restaking risk that people are concerned about. One is around removing the level playing
field that we currently have with staking. So for example, if restaking requires very high hardware
requirements or lots of capital or things like that, it increases the barrier to entry and provides
uneven amounts of APR for different entities. A separate class of restaking risks is around, you know, massive catastrophes and mass slashing. Let's say that millions of ETH suddenly get slashed. Like, does that mean that the L0, the social layer, has to come in and do a bailout, which would be extremely messy and expensive?
Right.
Yeah, no, I've heard this concern as well.
And I think that could be a valid concern, this idea that we need to be
careful not to overload social consensus when it comes to this additional role that restakers are playing.
I think that it's important to consider the extent to which we are, you know, relying on this,
you know, what these restakers are providing, right? So like when we talk about pre-confirmations,
for example, from a proposer: a proposer has restaked some ETH collateral and is making some kind of promise to users, and they may violate the promise. There are risks here, but there aren't, like, inherent risks here. And it's maybe an improvement on the status quo of not having a pre-confirmation. But we can consider, like, the worst that can go wrong, and whether we really would need some kind of, you know, social consensus to come in and correct for things or not.
I'm curious, Justin, how you would sort of, yeah, look at this.
I mean, recently I've been more and more optimistic about restaking because I feel that
there's been a decoupling, a potential decoupling between staking and what's known as
restaking. In some sense, restaking is a little bit of a misnomer, because you could put forward any asset as collateral. It doesn't have to be ETH. And you don't necessarily have to be staking. You don't have to be a validator. You could just put up raw ETH or whatever you want.
And so really the class of restaking applications that I'm focusing on today, from a research standpoint, is those where you specifically have to be a validator in order to participate in a given AVS. And I think that's going to be the vast minority of applications.
What I mean is that the vast majority of the time, you can not be a validator and still
participate in an AVS.
But Espresso is one of these, like, exceptions where really you do need to be a validator. You specifically need to be, for example, an L1 proposer or an L1 attester in order to come in. But now that I'm saying all of this, maybe this is not the case. Like, maybe we don't need the attesters to come in. We could just have random people who are not validators come in with collateral above and beyond, or separately from, the attesters.
Well, actually, so let me just, I guess, clarify one point on the design of these attesters in Espresso. So to be an attester, to participate in this BFT finality gadget that Espresso provides, you can stake one of two assets, right? You can stake Espresso tokens. You can stake ETH. You don't have to be an L1 proposer in order to do this.
We give the L1 proposers this right of first refusal over proposal rights,
but for attesters, there isn't anything that says,
oh, you have to be, you know, validating for Ethereum. It's just that you can use your stake for Ethereum. You can restake it to participate as an attester. You can also stake Espresso tokens. What we do is we will determine the ratio of weight between, you know, your ETH and your Espresso, that will determine how much weight we assign to ETH versus Espresso. And this is called a dual staking model by EigenLayer.
And I think what it's very useful for is bootstrapping.
Where initially the capital requirements, you know, to have 32 ETH already staked for Ethereum and then restake it for Espresso, are quite high, so you might not get a lot of participation until there's a lot of activity on the system.
But if you have the option of coming in, staking a different economic asset, then it can help with getting that initial participation.
And then the economic activity on top of the system will increase.
And eventually, at some point, more ETH restakers will join, and then, you know, predominantly the security is coming from ETH and not from something else.
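The dual staking weighting Ben describes might look like this in miniature (the exchange ratio is made up; the real parameters are the protocol's to choose):

```python
# Toy version of dual staking (hypothetical ratio, not Espresso's actual
# parameters): an attester's voting weight can come from restaked ETH or
# from staked Espresso tokens, with a protocol-chosen exchange ratio
# between the two assets.
ESPRESSO_PER_ETH_WEIGHT = 1000  # hypothetical: 1000 tokens weigh as much as 1 ETH

def attester_weight(restaked_eth: float, staked_espresso: float) -> float:
    return restaked_eth + staked_espresso / ESPRESSO_PER_ETH_WEIGHT

# Early on, token stakers can bootstrap participation...
assert attester_weight(0, 32_000) == 32.0
# ...and later the same weight can come predominantly from restaked ETH.
assert attester_weight(32, 0) == 32.0
```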
Okay, understood.
So what you are saying is that actually the collateral doesn't have to come from stakers. It doesn't even have to be ETH. And so there isn't this tight-knit relationship that you're building with the attesters specifically.
Yeah, I mean, I think I view ETH restaking as a subsidy for ETH validators to participate
in contributing security to the system, right?
So if you're already staking for Ethereum, then you can reuse that capital to also contribute security to this pre-confirmation layer.
But you don't have to be one of those nodes.
You can just stake a different asset to participate.
Right.
So just to provide a little bit of context as to why I'm asking this question: there is a potential upgrade to Ethereum where we do what's called stake capping or stake targeting, where we adjust the amount of issuance to go down to zero, or even negative, tending towards negative infinity as you start getting close to a cap. And in that context, if the issuance is actually negative, meaning that you need to pay for the privilege of being a validator, then it actually doesn't make sense to be restaking. And as a validator, what makes more sense is that you withdraw your ETH and then you stick it into an AVS directly, so that you don't have to pay this negative yield. And in the case of Espresso specifically, you would do this because you're not forced to be an L1 attester in order to be an Espresso attester.
Guys, this has been extremely, uh, all-encompassing. I think this is, like, bringing in a lot of different innovations that are all happening in parallel around the Ethereum sphere: the progress of layer twos, both in their technical prowess, their technical capabilities, and their sheer number, along with EigenLayer restaking, along with just the composability innovations that have been going on.
Justin, Ben, this has been fantastic. Maybe one last thing to explore, which is further explorations. What's left? What didn't we touch in this conversation that might be left for future conversations? What is left to explore?
What are some other unknown unknowns that are out there?
I guess it's impossible to ask you about unknown unknowns.
But what are the known unknowns?
Justin, if we were to do like another episode in maybe two, three months, six months,
what topics would you like to see more clarity on as we progress forward into the future?
Right.
So one of the things that I've realized recently is that there's all sorts of gadgets that can augment and improve the shared sequencing. One of them that was mentioned by Ben is this idea of the aggregation layer by Polygon. This is this really clever trick where you can have very strong safety in the context of pre-confirmations. The pre-confirmer can't rug you to the same extent that they can today, even if they are willing to get slashed. Another really interesting innovation, which Ben also mentioned, is from ZK Sync, and it's this idea that roll-ups can share deposits. So if you want to withdraw from Rollup A and deposit into Rollup B, you don't have to go through the L1 and you don't have to pay the very high L1 gas to do so.
You directly do it from L2 to L2 and not pay this L1 gas.
And then, you know, I think another key part of the puzzle is going to be around real-time proving.
And all the hardcore engineering that goes into this, including innovations on recursive proofs and folding, as well as innovations in terms of hardware acceleration.
Ben, same question to you.
What further topics would you like to see explored?
Yeah, well, just to echo what Justin said, I think it would be really great to discuss how shared sequencing is, you know, important and complementary to the proof aggregation that's been described by Polygon, you know, the shared bridges as described by ZK Sync, all kinds of other add-ons that work very nicely together with a coordination layer, which is what shared sequencing is.
I think it would also be cool to touch more on these pre-confirmations and what you can do with them and how they function: they can function as insurance, not just something that has some kind of, you know, bond behind it that gets slashed.
The other idea that we haven't touched on is how multiple nodes can, through, you know, threshold sharing of a key, so this is called DVT, or distributed validator technology, actually control a single proposer. And so it would be good to talk about what you can get out of that when these now distributed validators, or distributed proposers, they're no longer validators, they're really just proposers, are making some kind of pre-confirmation promise.
You can get into this idea of, like, a threshold insurance policy, right, that can only be violated if a certain threshold of these nodes, you know, violate the policy, or only becomes active once a threshold of these nodes sign and basically create this insurance policy.
So there's all kinds of extensions to this,
both on the pre-confirmation layer, you know,
on how shared sequencing as a coordination layer interacts with proof aggregation, shared bridges, et cetera.
All great topics to talk about.
As a content producer, one of the most beautiful things about Ethereum
is that there seems to be a Cambrian explosion of surface area
for conversations going in every single direction at all times.
Today's episode is definitely an example of that.
Justin, Ben, thank you guys so much for illuminating such an important part of Ethereum's future roadmap
and also giving me so much more to talk about in the coming months and years.
I really appreciate you, guys.
I appreciate being here, David.
It was really a pleasure.
Thank you, David.
Thank you, Ben.
Bankless Nation, you know the deal.
Crypto is risky.
Layer 2s are risky.
The space between layer 2s doesn't even exist yet.
And when it does, it will also be risky, but it will also be a little bit more composable.
You can lose what you put in.
We are headed west.
This is the frontier.
It's not for everyone, but we are glad you are with us on the bankless journey.
Thanks a lot.
