Bankless - Multi-Proposers & The Future Of Ethereum
Episode Date: September 1, 2024. In this special early access episode we're joined by Paradigm's Georgios Konstantopoulos and Charlie Noyes, along with Max Resnick from Special Mechanisms Group to discuss a possible change in direction for one part of the Ethereum Roadmap. This is a deeply technical episode that covers the history of MEV and new innovations that plan to change it all from the comfort of David's apartment studio, enjoy. ------ 📣 BECOME A CITIZEN | GET 10% OFF WITH CODE: SPOTIFY10 https://www.bankless.com/join ------ BANKLESS SPONSOR TOOLS: 🐙KRAKEN | MOST-TRUSTED CRYPTO EXCHANGE https://k.xyz/bankless-pod-q2 🦄UNISWAP | BROWSER EXTENSION https://bankless.cc/uniswap ⚖️ ARBITRUM | SCALING ETHEREUM https://bankless.cc/Arbitrum 🛞MANTLE | MODULAR LAYER 2 NETWORK https://bankless.cc/Mantle 🌐 OBOL | STAKE ON DVs, SCALE ETHEREUM https://bankless.cc/obol 🗣️TOKU | CRYPTO EMPLOYMENT https://bankless.cc/toku ------ TIMESTAMPS 00:00:00 Start 00:05:14 Intro To Our Guests 00:09:21 Proposer Builder Separation 00:12:39 Stepping Forward vs Patching 00:19:35 Getting Rid Of MEV? 00:26:35 Inclusion Lists 00:40:14 What is Braid? 00:50:43 The Bear Case 01:06:37 Vitalik's Take 01:10:53 The Benefits of Braid 01:20:28 L2 Centric Roadmap Changes? 01:36:26 How's Rust Going? 01:44:49 Other Exciting Roadmap Updates 01:57:52 Call To Action ------ RESOURCES Georgios: https://x.com/gakonst Max: https://x.com/MaxResnick1 Charlie: https://x.com/_charlienoyes ------ Not financial or tax advice. See our investment disclosures here: https://www.bankless.com/disclosures
Transcript
Bankless Nation, we got a treat for you today on the show.
A pretty special episode recorded all in person in my New York studio
with three of the biggest-brained researchers in the Ethereum space.
Charlie Noyes and Georgios Konstantopoulos from Paradigm, along with Max Resnick from Special Mechanisms Group,
discuss a possible change in direction for one part of the Ethereum roadmap.
Max has recently introduced Braid, which is a development framework for changing how execution works on the Ethereum layer one.
Instead of there being a single person who proposes a block,
multiple people come together, using a clever mechanism to co-create a block.
This has properties that Max believes are far more fair and efficient
when it comes to transaction ordering and MEV.
The part of the Ethereum roadmap that this relates to the most is the Scourge,
where Ethereum attempts to deal with the scourge that is MEV
and its centralizing and corrupting impact upon blockchains.
Multiple concurrent proposers, or multi-proposers, for short,
Max believes is a better and more elegant way to achieve the goals that Ethereum is trying to
achieve with its Scourge roadmap. While Max's Braid framework is not complete and more research is needed,
it has nonetheless piqued the interest of many other mechanism designers and auction thinkers
that are out there in the crypto space. So today on the show, you're going to hear a history
of MEV on Ethereum and how Ethereum researchers have dealt with the recurring monster
that is MEV at different stages in Ethereum's timeline to where we are today.
with the current Ethereum research and direction. Charlie and Georgios, I would say, are cautiously
optimistic about the idea of multi-proposers for Ethereum, whereas Max has stronger convictions about
its level of effectiveness and its validity for eventual inclusion into the Ethereum roadmap.
Bankless Nation, this is a very technical episode. I'm going to have to go back and listen to this
myself to really understand everything that was said here. I do my best to help some of the less
technical listeners along. And nonetheless, I think you'll be able to catch a vibe even if you can't
understand everything. But there are some technical terms and like deep esoteric Ethereum research knowledge
that is brought forth that we often just didn't have the time to define and clarify.
We typically leave these technical research-driven podcast episodes for the bankless premium RSS feed.
This is the feed where I go and cover more specialized content, things on the frontier that I want to
go learn about. But since the idea of multi-proposers for Ethereum is so large
and relevant, this one needs to be made public so that everyone interested who wants to can hear it.
But if you want future episodes like this, and all other Bankless episodes as well, ad-free,
maybe you could consider getting the bankless premium RSS feed.
There is a link in the show notes for 10% off if you want to go get that right now.
So go get it right now.
And also, thank you for supporting independent media in crypto.
If you want a crypto trading experience backed by world-class security and award-winning support
teams, then head over to Kraken,
one of the longest-standing and most secure crypto platforms in the world.
Kraken is on a journey to build a more accessible, inclusive, and fair financial system,
making it simple and secure for everyone, everywhere to trade crypto.
Kraken's intuitive trading tools are designed to grow with you, empowering you to make your first or your hundredth trade in just a few clicks.
And there's an award-winning client support team available 24-7 to help you along the way,
along with a whole range of educational guides, articles, and videos.
With products and features like Kraken Pro and Kraken NFT Marketplace and a seamless
app to bring it all together, it's really the perfect place to get your complete crypto experience.
So check out the simple, secure, and powerful way for everyone to trade crypto, whether you're a
complete beginner or a seasoned pro. Go to kraken.com slash bankless to see what crypto can be.
Not investment advice, crypto trading involves risk of loss. Arbitrum is the leading Ethereum
scaling solution that is home to hundreds of decentralized applications. Arbitrum's technology
allows you to interact with Ethereum at scale with low fees and faster transactions.
Arbitrum has the leading DeFi ecosystem, strong infrastructure options, flourishing NFTs, and is quickly becoming the Web3 gaming hub.
Explore the ecosystem at portal.arbitrum.com.
Are you looking to permissionlessly launch your own Arbitrum Orbit chain?
Arbitrum Orbit allows anyone to utilize Arbitrum's secure scaling technology to build your own Orbit chain,
giving you access to interoperable, customizable permissions with dedicated throughput.
Whether you are a developer, an enterprise, or a user, Arbitrum Orbit lets you take your project to new heights.
All of these technologies leverage the security and decentralization of Ethereum.
Experience Web3 development the way it was always meant to be.
Secure, fast, cheap, and friction-free.
Visit arbitrum.io and get your journey started in one of the largest Ethereum communities.
Launching a token?
Don't let complex legal and tax issues slow you down.
Toku provides specialized support to optimize your launch
and ensure that you as a founder and your team and your investors get the most tax-efficient outcomes.
The Toku team understands the crypto space inside and out
and will ensure your token launch is fully compliant
while maximizing tax efficiency.
Toku can connect you with the best attorneys if you need them
to make sure that you have the best advice
and Toku can help to optimize your taxes
so you pay the least possible amount of taxes
while still maintaining legal compliance.
With Toku's guidance, you can concentrate on building your company
while Toku handles the logistics.
Token launches don't have to be complicated.
Talk to Toku today to get a free initial token valuation.
And now onto the episode.
Bankless Nation, it's SBC Week,
which is why we have three Brainiacs here on the couch today.
Just to my right,
We got Georgios Konstantopoulos, CTO and now general partner at Paradigm. Georgios,
Welcome to my apartment.
Good to see you.
Likewise, likewise.
Sandwiched in the middle, sandwiched by these fine gentlemen is Max Resnick.
What's your role at Special Mechanisms Group?
How would you describe yourself?
My official title at SMG is the head of research.
So I work on basically market design, mechanism design, protocol research, and other things.
Beautiful.
Welcome.
Welcome to your first bankless podcast.
episode. It's great to have you. Thanks for having me. And down the line at the end of the couch,
we've got Charlie Noyes, who has been on the Bankless podcast before, like four years ago or something,
a very long time ago. Good to be back. Yeah. Also talking about MEV. Yeah, it was the first episode
that we did on MEV on Bankless, right when MEV as a concept was very, very young. Charlie,
it's good to have you back. Thank you. Good to be back. And it's great to have just some of the, I would call you guys
auction nerds. I think like both SMG and Paradigm, you guys really like auctions. And I think that's
kind of going to be the undercurrent of the entire episode today. Going around in the research
space has been this idea of multi-proposers. And this isn't exclusive to the Ethereum research
community. This has been, I think, blockchain research generally. I know Anatoly from Solana
is very bullish on multi-proposers. There has been a recent proposal from Max here called
Braid, which proposes to meaningfully change how Ethereum works. And this is not set in stone. There's
no consensus on this on the direction, but it's something that we want to unpack here today.
And it's highly related to MEV, to consensus, to execution of the Ethereum Layer 1.
And this is going to be the main thing that we want to focus on and all adjacent topics.
But perhaps to even really just get started here, we want to kind of like illuminate the
landscape of MEV because this is going to touch a lot of different subjects.
Maybe Charlie, I'll throw it all the way down to you, just about like, what do we need to
know about MEV?
What do we need to bring up to listeners' brains in order to prepare them?
for a more in-depth conversation as we go forward.
I'll give the brief history.
So way back in the day in 2018, 2019,
trading activity was starting to take off on Ethereum.
It was getting more sophisticated.
And some people started to notice that there were behaviors of traders on chain
that were kind of counterintuitive.
And in particular, they noticed that when there were valuable opportunities on chain,
the way that people competed for them was getting much more complex.
the way that they were bidding, at the time, in what were priority gas auctions.
So that was like the original Flash Boys 2.0 paper, which made some observations about how
arbitrage traders were bidding for arbitrage opportunities on Ethereum and then made
some projections basically about the fact that this behavior was fundamental and that you
would also start to see miners get involved in the game.
So the guy who wrote that paper, Phil, started a company called Flashbots, which we've been
involved in for a while since the early days. And they created the first MEV infrastructure for
Ethereum. At the time, Ethereum was run by miners. And so what it was was basically this auction
where for part of the block, not the whole block, and some set of trusted miners, they ran an
auction where you could send them transactions, and then they would order them very profitably
and give that to miners to put at the top of their block. And basically, the argument at the time was
if someone didn't do this in like a transparent way,
then miners would start to create internal arbitrage trading firms
or like miners themselves would turn into HFT firms.
And that this would be bad because there'd be a lot of variation
in how profitable miners were and they would get really sophisticated,
basically, and the market would become opaque
and it would be hard to participate in as a normal trader.
And so it's better if there's sort of transparent infrastructure
for this kind of sophisticated, complex behavior.
And then that was arguably the first kind of PBS or Proposer Builder Separation.
Right.
External PBS.
And I think this is when conversations about enshrining PBS also started and shrining it into the protocol.
We haven't actually got there yet.
But now we have this idea of PBS Proposer Builder Separation, which is strictly external to the protocol.
Right.
And there's been a loose social contract that we are going to enshrine PBS once we figure out how to do that.
But that's kind of like the current state of things.
We haven't figured out how to do that, nor have we actually enshrined it.
I agree.
And, really quickly.
So the type of PBS that we had in the proof of work mining days was actually a bit different.
It only built part of the block.
And this is important because basically we didn't see censorship the same way that we do now.
So Ethereum moved to proof of stake.
And we moved from MEV-Geth to what's called MEV-Boost, which is the current out-of-protocol, like, dominant PBS solution, also made by Flashbots.
And basically the different, one of the big differences with it is that these people called block builders bid on full blocks.
So they provide a full block to validators and validators don't have any part in the construction of that block.
There's no flexibility.
There's no flexibility.
And they introduced this party called a relay, which previously was just flashbots, but now a bunch of people operate them or at least a few people operate them, which basically sit in between the builders and the validators.
And the reason for the design decisions basically was, as we moved to proof of stake, people didn't want to see a lot of variation in validator rewards.
So the same way that we worried about miners becoming like HFT firms themselves, if you had really sophisticated validators that became HFT firms, and then you had like home solo stakers which weren't, or which couldn't participate in the PBS, like couldn't accept blocks from builders, then they'd
have much lower returns.
And that would be a centralizing effect on the network.
So there's a lot of complexity that that ignores,
but at a high level, I think that would be the sort of like
where we are today and the justification.
And then one of the primary downsides from that
is censorship, basically.
So we took the centralization that we were worried about happening
at the validator level, and we basically moved it
to the builder level.
So today there's two dominant builders, basically.
And they construct almost all Ethereum blocks,
and they construct the full block.
And so whether or not you can get a transaction on chain,
not entirely, but more or less depends upon these two builders.
So I did allow the talking about.
Yeah, I think I would just add one thing,
which is that one key difference between MEV-Geth and MEV-Boost,
which is the difference between the proof of work implementation
and the proof of stake implementation,
is that the trusted intermediary of the relay actually allows you to have more participation.
So in MEV-Geth, there were like three big mining pools.
And because of the trust assumptions required, those are the only people who could participate.
Because smaller miners couldn't necessarily be trusted to stick to the rules of the MEV-Geth protocol.
But now with MEV-Boost, anybody can participate.
So even if you're a solo staker, you can participate.
And that was one of the key design decisions behind doing it that way.
So it's a pretty good snapshot in time, almost up to this point.
I do want to kind of zoom out and discuss a vibe of how the Ethereum research community has started to tackle some of these problems.
We discovered MEV.
Flashbots is created.
Flashbots creates MEV-Boost, an out-of-protocol sidecar that solves some of the problems,
and it creates this idea of Proposer Builder separation, which is like something that we've considered to figure out how to actually enshrine into the Ethereum protocol.
Now there's also, as a result of that, like Charlie said, there's now relayers as well, and block builders as a separate party have also come up.
And we're kind of like incrementally patching problems that are showing up in the MEV space.
As we move forward in this tour of the MEV landscape, we're also going to introduce inclusion lists as a censorship-resistance patch.
But how would you guys describe this, like, incremental step forward and then incremental patch towards the problem space of
MEV on the Ethereum layer one?
One thing I might say is that a lot of people are not happy with PBS.
So like a starting point for the conversation might be, you know, we explain the reasons
that we pursued this path, which I think were good reasons, understandable reasons.
But the state of play today is that we have a market structure that a lot of people,
for various different reasons, are not happy with, basically.
And I think everybody, regardless of like what improvement you want to see, basically wants
to see some type of improvement in positive direction for the protocol because it doesn't feel good to be like, like basically the current market structure that we find ourselves in just doesn't feel good to a lot of people.
And I have like my takes on what about it could be better.
I think Max, though, has been one of the primary, you know, provocateurs or sort of, you know, for the term researcher.
Yeah, yeah.
No, no, but, but seriously, people who have pointed out like real, real issues in the tradeoffs we've been making.
And so I'd be curious.
Yeah, what are those, Max?
Yeah, so I think the main thing that we've been focusing on recently is the idea of a particular type of censorship, which is short-term censorship.
And oftentimes you hear on Twitter about kind of censorship of OFAC-sanctioned transactions, Tornado Cash transactions in particular.
In 2016, Italic Road, a blog post, and the OFAC of the time was Silk Road.
So you kind of hear about that type of censorship resistance.
But in that same 2016 blog post, Vitalik also said, actually, short-term censorship resistance is really important for the functioning of certain decentralized financial applications.
So let me give a quick example.
Suppose we have like $1,000 for sale on chain.
And we want to hold an auction within that block to sell the $1,000.
Well, hopefully we should clear for $1,000.
and the person who put the money in the pot should get it.
But what actually can happen, without short-term censorship resistance, is
you can submit a block in the PBS auction,
which doesn't include anybody else's bid,
and only includes your own stink bid for like one cent.
And then you can pay one cent for the item as long as you get on chain.
So what does that do?
It pushes all the profits from the auction to the proposer.
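A hypothetical sketch of the failure Max is describing: if the proposer can censor every bid but their own "stink bid," a $1,000 prize clears for one cent. The names, amounts, and functions here are illustrative, not from any real protocol.

```python
# Illustrative sketch (not a real protocol): an on-chain auction where the
# block proposer can censor all bids except their own stink bid.

def run_auction(bids, censored):
    """First-price auction over the bids that actually land on chain."""
    landed = {who: amt for who, amt in bids.items() if who not in censored}
    if not landed:
        return None, 0.0
    winner = max(landed, key=landed.get)
    return winner, landed[winner]

prize = 1000.00
bids = {"alice": 999.00, "bob": 995.00, "proposer": 0.01}

# Honest block: every bid is included, and the auction clears near the prize value.
winner, price = run_auction(bids, censored=set())
print(winner, price)  # alice wins, paying 999.00

# Censoring proposer: everyone else's bid is excluded from the block.
winner, price = run_auction(bids, censored={"alice", "bob"})
print(winner, price)  # proposer wins, paying 0.01
print(f"profit captured by proposer: {prize - price:.2f}")
```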
And the idea of the kind of Flashbots style of roadmap, where we do MEV-Geth, we do MEV-Boost,
was: given that MEV is inevitable, we should do our best to make sure that it doesn't have other
harmful effects we should quarantine it and make sure it doesn't leak out and have other
harmful externalities for the rest of the system and my point has been I don't know if
MEV is inevitable. I think we can reduce it. And one of the ways we can reduce it is by increasing
short-term censorship resistance so that apps can have more flexibility and can hold these types of
auctions. Is short-term defined in anything? Is it like one block is short-term or is there any like
concrete and parameter there? We have a formal definition which is based on this abstract notion of a
censorship-resistant public bulletin board. And what really matters is given how much you put in as
your tip, which is kind of your protection from censorship, how much protection do you get out? How
hard is it for the adversary to censor you? And the more blocks you allow, the more you get naturally,
right? If there's five blocks where you could get in, then it should be five times as censorship
resistant, right? And we see that in our definition. But what we're
really trying to do, if we're trying to do decentralized finance, is get real-time, strong censorship
resistance.
Meaning to get in, ideally in a single block.
Meaning to get in in the next block.
Or to make it as costly as possible to prevent you from getting into that block.
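One way to read the definition Max sketches, under a deliberately simplified toy model (our own simplification, not the paper's exact formalization): if censoring a transaction in any one block costs the adversary at least the transaction's tip, then the total cost of keeping it out grows linearly with the inclusion window, and the single-block "real-time" guarantee is the hardest one to buy.

```python
# Toy model (our own assumption): censoring a transaction in one block costs
# the adversary at least the transaction's tip, so the cost of keeping it out
# of the chain grows linearly with the inclusion window.

def censorship_cost(tip, window_blocks):
    """Minimum total cost to keep the transaction out of every block in the window."""
    return tip * window_blocks

def max_censorable_window(tip, adversary_budget):
    """Longest window an adversary with a fixed budget can fully censor."""
    return int(adversary_budget // tip)

tip = 5  # illustrative units
for window in (1, 5, 10):
    print(window, censorship_cost(tip, window))  # 5, 25, 50: linear in the window

# A bigger tip shrinks what a fixed-budget adversary can censor:
print(max_censorable_window(tip=5, adversary_budget=60))   # 12 blocks
print(max_censorable_window(tip=20, adversary_budget=60))  # 3 blocks
```

This is the sense in which five blocks of opportunity give roughly five times the censorship resistance of one.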
Solana cannot comprehend this.
So I want to add one thing to that, which is, I think that definition of
censorship resistance is very useful.
I think that some of the conversations, some of the issues that, like, generated the
feelings about the current market structure also relates to, like, we have issues
with sanctioned entities by various different countries.
Like certain actors in the MEV pipeline, like Builders Relays,
aren't willing to process transactions from those entities.
And as like a credibly neutral platform,
that doesn't feel like great for Ethereum, basically.
There's also this issue of like even for transactions that don't have value to censoring.
Like if I just want to send you some USDC, there's only
two block builders today, and they might just not pick that transaction up at
like a reasonable fee level because they're like, it's not worth it for us to include in the building
algorithm. It's not worth it for us to add latency to it. And I would say these are not the type of
valuable transactions that Max is addressing. It's not like we're trying to run an auction,
but that like is a bad UX for users basically and it makes people feel bad because it's sort of
just like these actors are just choosing not to include me basically when it seems like they could.
And it's absolutely, that's absolutely something
we've noticed and have been tracking very closely for MetaMask users' inclusion.
So it's definitely an annoying part.
And it comes back to that idea of like, can we quarantine MEV?
If that's all we can do, it'd be great to do it.
But right now it seems like it's starting to leak out into stuff like the inclusion of basic
transactions that are just kind of low priority fee that aren't on contentious state and
are not being included because there's some latency game going on in the block.
And maybe, Georgios, Max and I are sort of pseudo core researchers, but as someone who actually spends a lot of time in the sausage-making process, I guess I'm curious, like, what other parts of the conversation maybe we didn't cover that you see in Ethereum.
I'm generally a fan of the notion that it's really hard to get rid of MEV. When you introduce a mechanism, the MEV is like a little animal that moves around and is trying to avoid your measure.
So I think one nuance I would add to the conversation is that any time you introduce a mechanism,
the MEV might move to a different part of the system that your mechanism didn't prevent,
and you end up in this kind of patch on top of patch on top of patch situation.
And the classic thing, which I'm sure we'll touch on: at some point it moves to the network level,
and adversaries can do things that maybe your protocol did not prevent.
And you might get stuck in another bad situation.
So we should apply
extreme nuance and deep analysis.
I think this is a reasonable point, and it's kind of the, I would call it the Tenderloin
cleanup or the Mission cleanup point, which is that Gavin Newsom goes into,
you know, the Mission and cleans it out and cleans it up, and then all of a sudden in the
surrounding neighborhoods it isn't so clean anymore because of the spillover effects.
But I would say that the goal of reducing the proposer monopoly is to introduce competition.
And so if a lot of MEV is actually a result of the monopoly, when you introduce competition, you're not just chopping up the MEV and giving it to several proposers.
You're actually reducing it because there's actually competition that erodes the monopoly rents.
A five-person oligopoly actually has lower monopoly rents than the one individual.
And that's the goal here.
But I think it's still important to consider whether it might go somewhere else than this.
inclusion monopoly. I want to put a pin in that point that you brought up where we have been patching
the Ethereum MEV space with improvements. I would say we have made qualitative improvements to the
MEV landscape as the years have gone on, but we just discover troubles elsewhere as the arc has
progressed. And the point that you're saying is that there are strict improvements that can be
made at times via research and innovation to the MEV landscape, via stronger and
stronger mechanisms. And so it's not just about moving it elsewhere and then all of a sudden
we rediscover the problems. We are creating better mechanisms, but maybe over the years,
the Ethereum research arc around MEV has made soft mechanisms that have been patched on.
And I think what we have discovered is that like censorship is perma. It's always somewhere.
There's always censorship somewhere. But with stronger mechanisms, maybe we could be a little bit
more hopeful about having stronger outcomes. Is that is a fair summary?
Yeah, I just personally believe that it's actually possible to get rid of it because that's the dynamics of competition.
Give the proposer monopoly pitch.
You use the term; I think you could go in and make the case that the proposer monopoly is what drives censorship, and that you can break it.
Yeah.
Maybe what is the proposal?
Yeah.
What is the, so on Ethereum, the way that your transaction gets on to the chain is that it has to be included by someone called the Proposer.
The proposer is selected from among the validator set,
and their job is to propose a block.
And so if you want to get in, they need to put you in their block.
And they have a 12 second slot.
So within that 12 second period, there's only one place that you can go to get on chain.
If you want to wait 24 seconds, you can go to the next guy.
If you want to wait 36 seconds, you can go to the guy after that.
But if you really want to land within 12 seconds, if you're time sensitive, you have to go to that person.
So that's why I call it the proposer monopoly.
It's a temporary 12-second monopoly on inclusion.
And 12 seconds might not sound like a lot, but if you went to somebody who works at Jane Street
and said there's 12 seconds where only one person gets to decide which trades are executed,
they would be very, very surprised.
And they would be like, how do I become that person so I can make some money?
Because that is an extremely powerful thing for financial activity.
For just transferring stuff around, maybe it doesn't matter too much.
But if we're really trying to build a chain, which is my goal in terms of crypto research, which is make the chain good for finance, we really need to address that problem because that's not conducive to decentralized finance.
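The 12/24/36-second arithmetic above can be sketched as a tiny model. One simplifying assumption (ours): the deadline is measured from a slot boundary, so each additional 12 seconds adds exactly one more candidate proposer.

```python
import math

SLOT_SECONDS = 12  # Ethereum's slot time

def eligible_proposers(deadline_seconds):
    """How many distinct proposers get a chance to include you before the deadline.

    Simplification: the deadline is assumed to start exactly at a slot boundary.
    """
    return max(1, math.ceil(deadline_seconds / SLOT_SECONDS))

for deadline in (12, 24, 36):
    print(deadline, eligible_proposers(deadline))
# Within 12 seconds exactly one party can include you: the "proposer monopoly".
# Waiting longer adds alternatives, but time-sensitive finance can't wait.
```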
The only thing I might add to that is that, yeah, they choose to sell that monopoly into the builder market currently.
So the proposer has a monopoly on the inclusion, and they're basically going to hold an auction, saying, who can pay me the most for this?
Which doesn't eliminate it; it just passes it on to the highest bidder,
which is, you know, whoever can extract the most value from the monopoly.
And that's the problem statement.
I think it's also important to define the goals.
Like, you can design very different mechanisms for achieving just, I think, censorship resistance on its own.
So the transaction gets in.
And I think there's a plus-plus version of that, which is: can you actually remove MEV in the process of doing that?
And I think depending on how much you scope it, the hammer that you have changes,
and there are different mechanisms for achieving that. Max has an idea for a mechanism,
Ethereum research has another idea for a mechanism, maybe the truth is somewhere in between,
and we need to seriously evaluate trade-offs, especially when you want to amend such a mechanism
on top of the Ethereum protocol, and what does that mean about things that need to change,
backwards compatibility.
Maybe, can I just say one more thing on why I think it's possible to end MEV?
Basically, there are two parts of MEV,
if you look at the original Daian et al. paper, which is: there's censorship and there's
reordering. We actually know how to solve the reordering part, because we can change the
ordering rule. So assuming there's some good ordering rule, we could reorder. The problem is
we have censorship.
Even if we come up with the perfect reordering rule,
and that reordering rule says that Georgios' transaction should be first,
mine should be second, and Charlie's should be third,
if Charlie really wants to and he knows the proposer well,
he can slip him a hundred bucks and he can be the only person in the block,
and then he's first, because we're not even there.
And I think basically, if you figure out how to do the ordering,
which is certainly an open question that we'll get into,
you push everything to the censorship layer. And then you have one thing to solve. Right. And if you can solve
that, you actually can significantly reduce MEV, in my opinion. So you're kind of making the
argument that solving censorship resistance is approximate to solving MEV. I think it is necessary,
for sure. Maybe not sufficient, but absolutely necessary. Okay. I think we're about to drop
the multi-proposer punchline. But before we get there, I want to open up inclusion lists
and FOCIL as context to open up that door.
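Max's reordering-versus-censorship example from a moment ago can be sketched concretely. This is an illustrative toy (the function names and the first-come-first-served rule are our choices, not any real protocol's): even a perfectly "fair" ordering rule only orders the transactions that survive censorship.

```python
# Illustrative sketch: a fixed ordering rule is subverted by censorship,
# because the rule only orders the transactions that actually appear.

def build_block(mempool, ordering_key, censored=frozenset()):
    """Apply the ordering rule to whatever survives censorship."""
    included = [tx for tx in mempool if tx["from"] not in censored]
    return sorted(included, key=ordering_key)

# Ordering rule: first-come-first-served by arrival time.
fcfs = lambda tx: tx["arrival"]

mempool = [
    {"from": "georgios", "arrival": 1},
    {"from": "max",      "arrival": 2},
    {"from": "charlie",  "arrival": 3},
]

print([tx["from"] for tx in build_block(mempool, fcfs)])
# -> ['georgios', 'max', 'charlie']: the rule puts Charlie third.

print([tx["from"] for tx in build_block(mempool, fcfs, censored={"georgios", "max"})])
# -> ['charlie']: by paying the proposer to censor the others, Charlie
#    is "first" without violating the ordering rule at all.
```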
Maybe, Charlie, you can help us with this one.
Yeah, I would say the editorialized history on this is basically like we had or we have
Proposer Monopoly PBS.
And for the last like year and a two years, as people started to notice the censorship
problem, Max and others, you know, some of the main proposals to improve things have been
the idea of giving proposers back more agency.
for example. So in the construction block construction process, people observed, okay, well, you know, the builder gets to build the whole block and the proposer doesn't get to say anything. So even if they want, you know, this auction's bid to be included or even if they want this disenfranchised person's transaction to make it in, if the builder says no, they don't have any way to express that or any participation in the block construction. So the first ideas were kind of like, okay, give proposers the ability to participate in the block construction.
process, you know, not just give it entirely away to builders. And then there were some
derivative ideas of that. I would say like pre-confirmation stuff is kind of in this camp or like
proposer commitment stuff where, you know, proposers are promising to say or do certain things
about their blocks and the builders have to follow those rules in theory. And then I think more
recently, Max, I think, had these ideas earlier, but more recently, some of the research
community came to the idea of like, okay, maybe we should have many people contributing to the
construction of a block in some form. And one vein of those ideas is basically the, what you
mentioned, FOCIL, which is an inclusion list design that has many parties contributing to it. An inclusion
list is basically a list that the builder has to follow. So it says, when you build the block,
this set of transactions, or if the block is full, some subset of it, has to be included from this
inclusion list.
The proposer says to the builder, here's a list of transactions that must be included.
Yeah, well, the original idea was the proposer would say that.
And then sort of the follow-up idea, improvement on that was a bunch of proposers.
So we sample the validator set and we get a bunch of people to contribute to this list.
And then the builder has to follow it.
And then I would say a somewhat like a related line of research, which Max has been championing,
is the concept of multiproposor.
And this says, rather than putting this part in the inclusion list,
which makes it less useful for MEV-relevant transactions,
it's sort of, and we can get into why,
probably only useful for like the USDA-Send
or like the disenfranchised person
that the builder doesn't want to include,
as opposed to, say, the uniswap trade.
Max has been pursuing this alternative line of research
which attempts to address the MEV transactions as well.
That's called multiproposor.
That would be my framing of it, but, you know, you should jump in.
One, again, just to add some extra context on this, the way to think about it is that you're trying to solve this censorship problem.
There were two schools of thought.
One was the execution tickets and preconfirmations school of thought and everything that happened there, which I think all three of us probably, I think, don't like as a general direction.
I wouldn't even say that they're trying to solve censorship with that, in my opinion.
I don't even think that they would claim to be trying to solve censorship resistance with APS and tickets.
I think they think that they're going to do APS and tickets and then have inclusion lists as the censorship resistance.
Gotcha.
Well, that was the one direction.
The other one was this whole thing where multiple people help the builder or multiple proposers decide on what should go in.
And they kind of are trying to achieve the same thing.
And the main area where they differ is how much of the scope they're trying to address. The wider the scope and the harder the problem, the bigger the changes that you need to make.
I think people are quite confused about the difference between Braid and FOCIL. They're actually both descendants of the same idea.
So in, like, January of 2023, myself, Elijah Fox, and Mallesh Pai, who works with us at SMG and is a professor at Rice, released a paper called Censorship Resistance in On-Chain Auctions, which basically made the point that if you want to hold an auction, you need censorship resistance to do it, and Ethereum and other leader-based chains don't have strong short-term censorship resistance.
And out of that, Elijah, who started a company called Duality, was like, hey, maybe we should try to implement some of these ideas so we can reduce MEV on this chain that we're building.
And he came up with this idea, which I helped him with, which was Multiplicity.
And Multiplicity is very similar to what FOCIL looks like. FOCIL is fork-choice enforced inclusion lists,
which basically means the attesters send some extra transactions to the execution proposer,
and the execution proposer is supposed to include a bunch of those in his block.
So it's constraining the censorship power of the execution proposer in that way.
But it still has a privileged position for the execution proposer.
Now, fork-choice enforced inclusion lists are a modification of that which works with LMD GHOST.
So that is a pretty similar design.
And in fact, it's from the same line of reasoning and the same desiderata.
Now, Braid is an attempt to make a version of that design that doesn't have a privileged execution
proposer.
And so each one of the proposers in Braid is treated exactly the same way.
And so I think that's very thematic of the Ethereum vision, which is try not to give anybody
any outsized power, try to make as frictionless a system as possible, and treat everybody the
same, whereas the FOCIL and Multiplicity version, which I think is also a reasonable direction
and is trying to achieve the same goals, has that one downside. And so that's exactly the
very specific technical difference, but ultimately, thematically, they're very similar. And so, in
particular, correct me when you disagree with me. Basically, the way that I would view it is the
FOCIL slash Multiplicity approach, which you could call the inclusion list approach, meaning we get a
group of people who construct a list of things that have to be included, and then a builder is
responsible for building a block that includes them. This probably is only useful
for censorship resistance for, like, MetaMask transactions, disenfranchised individuals, and maybe
increasing the cost of censorship for, like, the auction example you gave a bit, because ultimately
that execution proposer at the end, the person who's building the ultimate block that's constrained
by those lists still has a lot of advantages. One advantage that they have is a timing advantage. So they
have the ability to see all of the transactions in the inclusion list prior to constructing the block.
And we can get more into exactly how that advantage expresses itself. Another that they have is that they
could look at the transactions that are in the inclusion list, and they could decide to, like,
stuff the block. So they could say, you know, I'm no longer able to exclude these freely,
but what I can do is pay to exclude them, basically. Like, I can fill the block with, you know,
useless transactions, basically, and I'm going to have to pay the base fee to do that,
um, to kick out the ones that I want to. But if I'm willing to pay, or if I'm in a time of very
high congestion, then I can still do that. And so,
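The "pay to exclude" move described here is simple arithmetic: the builder burns base fee on filler transactions until the listed transactions no longer fit. The numbers below are illustrative assumptions, not protocol constants:

```python
# Toy cost of censoring-by-stuffing under an inclusion list.
base_fee_gwei = 20           # assumed base fee per gas
gas_to_stuff = 15_000_000    # assumed gas needed to fill out the block
cost_eth = base_fee_gwei * 1e-9 * gas_to_stuff
print(f"stuffing cost ~ {cost_eth:.2f} ETH for this block")  # ~0.30 ETH here
```

The list doesn't make exclusion impossible; it just attaches this per-block price to it, and that price scales with the base fee.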
Ultimately, like, with the FOCIL and Multiplicity approaches, it probably doesn't really address the MEV issue.
It mostly addresses, like, the censorship of transactions that are disenfranchised, or that builders just don't include today because they're lazy and don't care to, and it feels bad.
It doesn't address, like, the Uniswap trade issue.
And the reason for this, basically, is, you know, if I'm a really sophisticated trader and I
see that Binance and Uniswap are out of line, and I send a trade closing that arbitrage
to the FOCIL committee or the Multiplicity committee, to the inclusion list constructors, right?
Okay, it gets into the inclusion list.
Now, the builder of the block later sees that trade.
And it's going to look at the Binance price and say: since that trade was sent to the
inclusion list committee, has Binance moved in favor of the trade or away from it?
If it moves away from the trade, they'll let it execute at an unfavorable price to me.
But if it's still a profitable trade, they'll beat me to it instead. And so as a trader, or, you know, when I'm making
MEV-relevant transactions, I'm basically never going to use the inclusion list because I know
that it's going to mean that I get adversely selected against and am less profitable.
So all the MEV still flows through the builder.
And the inclusion list is really only useful as like a sort of...
Last line of defense for the disenfranchised.
Right. It's a side door for transactions that aren't inherently relevant to the MEV game.
Now, in fairness, a lot of folks think that this is still useful.
It feels bad if you're a MetaMask user and one of the big builders is just like, you know, this transaction, I don't care about it.
It's not that I want to censor you, but it's going to increase the latency of my blocks.
So I'm just not going to include it. Or if you're a disenfranchised, you know, user, et cetera.
I think ultimately like it's a good design property of Ethereum for like USDC sends to get on chain as fast as possible.
And I think that there is, like, a really strong case for the FOCIL-Multiplicity approach, that it helps with all of that stuff.
But what it doesn't, what it explicitly doesn't address is removing the builder.
And basically remove the monopoly that the builder has.
Yes, removing the builder and the builder's monopoly on MEV relevant transactions.
Yeah, maybe just to tie into the things you just said: there is no way, no matter how many Math Olympiad and Putnam medals you have,
that you are going to beat me at trading if I'm two seconds ahead of you.
I have two seconds more Binance information.
And that means that if you're two seconds behind,
meaning you're submitting on the inclusion list with a design that has a view canonicalizer
like Multiplicity or FOCIL,
you don't even want to play, because you know that somebody else is going to come in.
Either it's going to be the execution proposer themselves, or somebody else
who bribes the execution proposer to do this,
and they're going to have a two-second advantage
because they're going to have two seconds extra Binance moving against you.
And if you try to make a trade,
all of your good trades,
when you make a trade and the price moves favorably or stays steady,
you're going to get outbid on.
And all of your bad trades, you're going to keep.
So you only get losers.
And this basically spirals,
so you don't want to trade at all through the Multiplicity inclusion list.
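This unraveling argument can be sketched as a toy simulation: any trade that is still profitable after the delay gets taken by whoever has the last look, and the trader keeps only the losers, so the expected value of submitting via the inclusion list is negative. All parameters here are illustrative assumptions:

```python
import random

def inclusion_list_trader_ev(n=50_000, edge=1.0, vol=2.0, seed=0):
    """Toy adverse-selection model for a trader with a two-second delay.

    The trader submits an arb worth `edge`. During the delay the price
    moves by `move`. If the trade is still profitable, the faster party
    takes it (trader nets ~0); if it has gone bad, the trader keeps it.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        move = rng.gauss(0.0, vol)
        pnl = edge + move
        total += 0.0 if pnl > 0 else pnl  # winners taken, losers kept
    return total / n

print(inclusion_list_trader_ev())  # strictly negative: "you only get losers"
```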
So isn't a shorter way of saying this that the inclusion list is really an
inclusion set that doesn't force any ordering?
The builder just needs to include them, and it doesn't matter what order; they can do whatever
they want.
This means that, yes, indeed, it's an inclusion set.
It's not a good fit for MEV-bearing transactions.
Open question, I think, whether that is desired or not.
But what do you think about the world where the inclusion list has an ordering rule baked into it?
And I guess my broader point being: is this a question about having multiple proposers, or multiple people doing the thing?
Or is it a question about whether there exists a sound way to do some kind of ordering rule that allows the MEV to be handled more properly?
Well, I think it's a really good question.
I think before we get into the ordering rule stuff, I would say that Max's pitch for Braid and pure multi-proposer, which is the alternative path from the inclusion-list-plus-builder approach, the FOCIL-slash-Multiplicity-plus-builder approach,
basically intends to delete the builder and to replace the builder with an ordering rule.
And I think before we discuss the ordering rule, it probably would be helpful for Max to explain his approach, like the pitch for how we can delete the builder.
And maybe Max, why don't you sketch out the braid, like step by step?
Yeah.
That's exactly what I want to go next.
I want to put a stamp in this conversation because this is an arc of the Ethereum research that is now kind of taking a very meaningfully different path if this is the way that we go.
whereas blockchains like Ethereum have always been in this leader-based system where somebody has a monopoly, even if it's as short as 12 seconds, somebody's always had a monopoly.
And all the things that we've talked about, inclusion lists, builders, proposer-builder separation and attempts to enshrine proposer-builder separation, have all been addendums, additions, patches onto the system, because of the leader-based system.
And now, as you're about to introduce Braid, this multi-proposer set, we're going very far
backwards in Ethereum MEV research to propose an entirely new construction going forward.
That's kind of how I interpret this moment in time of Ethereum research.
That sounds right to me and David, just to be very clear.
I think we're very early in the research phase in all of these things.
Max has more work to do, I think, like on publishing being more explicit on what we're achieving,
the protocol, the proofs and everything around that.
Same thing, like, for FOCIL, we need to do a lot more work.
we might come up with a third new thing.
So I think the way to approach the conversation is that we haven't added an item on the roadmap.
Right.
We are very early research phase.
This is a concept.
This is a trail of research that we're going down.
Yeah, yeah, yeah, yeah.
Because we have, there are so many things that we haven't figured out yet, as we'll find out in this conversation, that it's very early for us to say, hey, this is like where we're going.
Because just nobody has done that before.
This is like very blue ocean uncharted waters.
And actually, the Ethereum ecosystem is way ahead in terms of facing this very hard problem, which is why you don't hear this
discussed in many other places, except maybe Anatoly and Solana, who were probably way ahead
of everyone else on this train.
Okay.
So Max, introduced to us multi-proposers.
Yeah.
So I say that this design is kind of suspiciously simple, because if you ask, I asked a bunch
of consensus experts, like, these are my desiderata.
I want multiproposer.
Can you give me something that does this?
they said, oh, that sounds like it would be super complicated.
But then I had a few more discussions.
I was talking with Mallesh, who again works at SMG as well,
who kind of said, hey, why don't we just do the stupidest possible thing,
which is just take LMD GHOST and copy-paste it four times or eight times.
And then we just try to interpret the four parallel chains as one.
And at the time, the consensus experts were kind of looking at him like he was crazy.
And then we just kind of thought about it a little more and a little more.
And nobody could really find an immediate break.
And then we kind of convinced ourselves that actually this does work.
In fact, if you have a single-threaded, single-leader blockchain that has eventual consistency,
then the multi-threaded version of that, like Braid,
inherits that eventual consistency.
If it has liveness, it's trivial to show that it inherits liveness.
And you can modify the finality gadget in LMD GHOST to give good finality properties.
It's a very similar finality gadget as well.
So we basically said, okay, let's start circulating this idea.
and that's where we are now
a few weeks later after that realization.
Max, when you gave, I watched your talk that got circulated on Twitter
and you presented it in the same way where it's like so stupidly simple
and then here it is.
And then you explained it and then I was waiting for further explanation
but then the explanation ended.
And so that's actually kind of what just happened to me just now.
So, like, my interpretation is that the proposers, the validators, are going to make four blockchains if we're doing a four-proposer
system.
We're going to make four blockchains.
And then via some mechanism, they just get interpolated together.
They get combined together.
And then that becomes the blockchain.
That becomes the block that gets added to the blockchain.
And so, and that was it.
And that's what a multi-proposer is?
Yeah.
It's like, you know, it's like the owl and then draw the rest of the owl.
So how does that work? The collaboration,
actually, I think, is quite important here.
So, yes, okay.
It's like the four of us friends are here.
So we have a block each.
Yeah, I've got a block.
Georgios has got a block.
We all got a block.
How do four blocks become one block?
How does that happen?
Right.
So the way this works is what I call execution consensus separation.
So the execution layer, so Georgeos is going to have to implement this,
is going to be basically becoming the,
view canonicalizer in the system. They're going to be looking at the chain, interpreting which
blocks arrived on time and which didn't, which are confirmed, which are valid, which are not.
And in real time, on the unfinalized head of the chain, they're going to be looking at the chain
and seeing which blocks arrived. If there's four blocks that arrive, and we all think that they're
valid, we're going to zip up each of the transactions. So all of David's transactions,
all of George's's's transactions, all of my transactions, and all of Charlie's transactions,
are going to be zipped up into an unordered set.
And then we can apply some deterministic ordering rule to order them afterwards and execute them.
And the deterministic ordering rule might have been previously known as a builder.
That's like the thing that we are replacing the builder with because it's the mechanism.
It's the software.
It's the code that is determining the output of the ordering.
Well, yeah, except I would say that a builder isn't a deterministic ordering rule.
It's an attempt to solve this very hard, NP-hard knapsack packing problem.
Okay, so four sets of transactions, four different orders, and then there's a rule that all of these things go through, and then the output of the optimization rule, the ordering rule becomes the one block.
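That "zip up and order" step can be sketched as follows. Transactions are identified by hash, and a priority-fee sort with a hash tiebreak stands in for the deterministic ordering rule (one plausible choice, echoing the priority gas auctions mentioned later, not Braid's actual specification):

```python
def merge_concurrent_blocks(blocks):
    """Merge k concurrently proposed blocks into one execution order.

    blocks: list of blocks, each a list of (tx_hash, priority_fee) pairs.
    Duplicates (a tx sent to several leaders) are deduplicated, then the
    union is ordered deterministically: highest priority fee first, with
    the tx hash as a deterministic tiebreak.
    """
    union = {}
    for block in blocks:
        for tx_hash, fee in block:
            union.setdefault(tx_hash, fee)   # keep first-seen fee
    return sorted(union, key=lambda h: (-union[h], h))

order = merge_concurrent_blocks([
    [("0xaa", 5), ("0xbb", 9)],   # David's block
    [("0xbb", 9), ("0xcc", 1)],   # Georgios's block (0xbb sent twice)
    [("0xdd", 9)],                # Max's block
])
print(order)  # ['0xbb', '0xdd', '0xaa', '0xcc']
```

Because the rule is a pure function of the merged set, every honest node computes the same final block from the same four inputs.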
What happens if, like, we're all looking at the same mempool?
I'm pulling in your guys' transactions.
You guys are pulling in my transactions, right?
Like, that's how I understand blocks to be made.
What about this contingency?
Yeah, so if you send to multiple leaders, you have to pay the base fee for multiple leaders.
so you're actually going to, it's going to cost more.
You do it if you want censorship resistance.
If you don't, you can just send to one leader.
So that's like one point is that not everything is going to be sent to everybody.
Okay.
I would say, I think.
Could we go over an example just to be precise here?
I am looking at the peer-to-peer network.
You're sending a block around.
All of you guys are sending a block around.
Can I not act as the leader almost by just waiting for all of you guys to send me the block?
And then at the very end, I will send it out.
How does that work?
Is this latency games?
Are you asking about latency games?
Yeah, it seems hard to... yeah, maybe let's talk about the set of things that we're
dealing with here.
Because, yes, of course, there's a view canonicalizer.
You take N things and you could reduce them down to one.
But, like, how should that work?
Well, I think the giga bull case for Braid and multiple concurrent proposers is that it is a design
which tries to achieve, I think, what Max calls simultaneous release.
And that means if all four of us are building blocks at the same time, they're going to get
zipped up after we all release them by everyone else watching the chain.
And all the transactions in them are going to get added together and then ordered by
some rule. Now, if there's simultaneous release, which you can kind of think of as, I don't know what's
going to be in your guys' blocks before I release my own, then there's a strong argument, and I think
this is the core argument in your original paper, that I should just add all the transactions I'm
aware of to my own block, because if I don't, you guys may have added them. I don't know what's in
your blocks, basically. And so that my dominant strategy is just to add all the transactions that I'm
aware of, right? And if everybody, if all the leaders follow that same strategy, if that is the
dominant strategy for all the leaders, then we have extremely strong short-term censorship
resistance. And there's no builder. Does that simultaneous release exist? So let me step in there.
There's two versions. There's a weak simultaneous release, which is, so it depends on what
information you consider. In game theory, we have this idea. We write out the game in a game tree.
And we have two types of things.
We have things that happen after each other and things that happen at the same time.
And the formalization of that are if actions happen at the same time in game theory,
we say that they're part of the same information set.
So that means when I release my block,
I have the same exact information about the world as Georgios.
So what you consider to be that information is important.
If what we care about is only that I don't know the contents of Georgios's block that he's proposing,
I think that's absolutely possible as long as some cryptography implementations exist
that we think probably exist or will exist in the next few years.
Something like a time lock puzzle would help us do that.
So I could release my block and I could put a time lock puzzle that says this will decrypt in one second.
and we can absolutely get the timing games down to within one second.
So that would work.
But the challenging part, the strong version of this synchronous release or simultaneous release,
is can we make it so that we all have the same Binance info?
And that's the one that I'm still not sure if we can get to,
but I'm working really hard on it.
And I hope we can solve it.
I think it's pretty important to get that strong version as well.
But I know we can get the weak version,
at least if time lock encryption works.
I will now paint the bear case as well.
I think there's two bear cases.
So the first is this.
On the weak case, I would say we don't currently have good cryptography for what we're trying to achieve, basically.
I think that you would need farther-out-there cryptographic schemes, like strong VDFs or something, to get what you were talking about.
And the best that we could do today would be some type of threshold encryption scheme, probably.
and like there's the possibility of unaccountable collusion.
There's also probably in whatever, like this is not specified, but when we did specify it,
there probably are going to be a lot of issues with like free option problems,
meaning if some subset of the participants decide not to decrypt or to decrypt selectively.
Like I think, should we like unpack this for the audience?
So there's threshold encryption, which means I take my information.
And then I encrypt it and I give the key to, you know,
Giorgos and Charlie.
And like if they both have the key, they can open it.
So you need both keys to open it.
That's threshold.
Strong VDF.
VDF means a verifiable delay function just so we're not using too many acronyms.
And that's important for these time lock encryption schemes that I'm talking about.
The issue is that threshold decryption in general is usually implemented as another layer of consensus. So you end up running into the same
problems that you were trying to solve in the first place. That's why it doesn't feel like a
credible tool that we could use here. And there was an old school of thought around verifiable
delay functions, which also felt like a dead end. There is some recent
research on doing time-locked decryption, a very recent line of work that I cannot remember
well enough to explain right now, that might be promising. But I guess: yes,
maybe, but even that seems hard.
And even then, the moment that you introduce encryption,
that means that you incentivize spam,
and it goes back to my original point around the MEV moving around,
because you end up in this probabilistic games
where I will just try to flood the network with transactions,
because I just cannot see what's going on.
It seemed rough.
I think, well, first of all, let me just say that there is commit reveal.
We do know how to do commit reveal.
We just commit to the hash, sign the hash, and then you reveal it later, and if you don't reveal, you get slashed.
It has some issues: you might really not want to reveal, and if the value of not revealing is more than the amount you've put up,
then you're going to not reveal and just lose that stake that you put up.
But at least we do know that that works, and we can put sufficient stake behind it; we already have 32 ETH at stake,
which is quite a bit for each validator.
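A minimal sketch of the commit-reveal scheme described here, with the 32 ETH validator stake as the slashing collateral (class and method names are hypothetical):

```python
import hashlib

BOND_ETH = 32  # each validator's stake, as noted in the discussion

class CommitReveal:
    """Commit to the hash of a block, reveal its contents later,
    and forfeit the bond if the reveal never comes."""

    def __init__(self):
        self.commits = {}  # validator -> committed hash
        self.bonds = {}    # validator -> remaining stake

    def commit(self, validator, block_bytes):
        self.bonds[validator] = BOND_ETH
        self.commits[validator] = hashlib.sha256(block_bytes).hexdigest()

    def reveal(self, validator, block_bytes):
        # A reveal is valid only if it matches the earlier commitment.
        return self.commits.get(validator) == hashlib.sha256(block_bytes).hexdigest()

    def slash_unrevealed(self, revealed_validators):
        # Anyone who committed but never revealed loses their bond.
        for v in self.commits:
            if v not in revealed_validators:
                self.bonds[v] = 0
        return self.bonds
```

The caveat raised in the discussion applies directly: if the value of withholding a reveal ever exceeds the bond, the committer simply eats the slash, which is why the tail of the MEV distribution matters.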
But didn't the low-carb crusader fiasco last year almost disprove that?
That in the face of an uncapped amount of MEV at risk in flight,
no capped bond can secure you against that attack capital-efficiently?
Well, I think, yeah, and actually Tarun and Matheus have a paper about this,
which is, like, credible decentralized auctions, I think, is the title,
that actually looks at, based on the weight of the tail of the distribution of MEV opportunities,
so how likely it is to have these big uncapped opportunities,
when you lose credibility with these commit-reveal schemes.
So this is certainly a reasonable point.
I think that's why I'm saying.
I'm picturing Nassim Taleb in my mind, talking about the tails.
Maybe zooming out on it for a second.
I think there's a bunch of stuff in the cryptographic rabbit hole about how,
if all four of us are proposers, we can try and encrypt our blocks in a way that's reasonable.
I think what Max was getting at, though, and if you were to zoom all the way out on it, the attack case that I think most people are considering right now when thinking about this is basically like, let's say that rather than all four of us releasing our blocks at the same time, in fact, David is a really sophisticated HFT trading firm.
and he's maximizing his connectivity to the network and latency to other validators and to all the
important exchanges, et cetera, and all the important source of order flow maximally.
And let's say that David even has like a consistent 50 to 100 millisecond latency advantage,
right?
So the issue that we were talking about adverse selection earlier in fossil, I think still applies
here, which basically is as a sophisticated trader, I would argue I'm probably,
never going to send my order flow to any of the three of us. I'm only going to send it to
David because I know that he's going to have the best latest available information, basically,
and then I'll get adversely selected against if I were to send to anyone else. And so you're going to
end up potentially in a situation where rather than, you know, sort of traders sending to all of us
and getting censorship resistance from all of us potentially including it, that basically all
sources of order flow are going to collapse onto the most sophisticated proposer, and they're going
to emerge as, like, a builder. Basically, like, we're going to rediscover the builder role
through the most sophisticated proposer having, like, a natural monopoly on order flow.
That would be how I would frame it.
David, almost by virtue of his privilege, though, sorry, we're using you as an example.
I love it. I love it. I love it. I don't know if you're an HFT firm, but if you are,
I keep that part hidden. Yeah. And David almost, like, emulates
the role of the proposer via his privileged network positioning, even though you're in a
leaderless system.
David has co-located in East Bushwick, so...
Right, yeah, yeah.
Fast connection.
Okay, so Charlie, I think what you're saying is that, like, we'll just kind of rebuild
this same system that already exists, just now in a separate
paradigm.
It's inside of the multi-proposer paradigm.
But can I just defend against this critique?
Sure.
Because if the argument is, hey, we shouldn't do this, we should do FOCIL
instead because there might be a 100-millisecond last look in Braid,
well, there's a two-second last look in FOCIL, or longer.
So I think it makes sense that the same emergent properties come out of the system no
matter what.
And it's more a question of, like, is it nonetheless a strict improvement in the mechanism?
And I think.
Let me say that.
Sorry, David, to cut you off, but I don't want to let this FUD go unchallenged here,
because there's a model that we think of, that we look at.
Often we look at models.
All models are wrong.
Some are useful.
And usually one model is right for the task that you're looking at,
but isn't exactly right for the other task.
And I think what Charlie's doing is referencing a model that we had in one of our papers
that was actually based on something that Julian Ma,
who's at EF originally proposed, which is basically the price is fluctuating.
You have somebody who acts later and somebody who acts earlier,
and you have this complete unraveling effect that Charlie was talking about.
But it doesn't quite work that way in this system because if everybody goes through
the one fast, through David, who has his latency advantage set up, then you don't expect them
to bid very high in the auction that's not run by David.
They'll bid to get to be David's favorite, to get to submit their trades to David.
But you shouldn't expect an equilibrium to arise where there's no bids from the other
proposers because there's two stages of the auction. There's the auction for who gets to be
David's favorite, but he's going to choose one. And then that one person is going to win the
auction for basically free. So they're going to not have to pay too much money for the right
to do the arbitrage. So I think I, I think I would say a couple things.
Number one, the 50 millisecond versus two second latency delay in the fossil versus like whatever timing advantage we can garner.
I think there's some centralization tradeoff there, basically.
The second thing that I would say is that timing advantages are probably one of the most obvious, like first order advantages that you can garner.
But I certainly don't think that it's like the only advantage in the metagame, basically.
Like, I think the sophistication of the most sophisticated proposer or what advantage they have,
there are going to be a number of different ones.
And in the Binance CEX-DEX arb, although it's, like, one of the easiest examples to use,
I think that it's like, you know, even if you go look at the order flow distribution today,
it's not like the only one that matters, right?
So we said there's going to be some consistent advantage on that one,
and we can talk about some others too.
So in the multi-proposer case, again, let's say there's, like, one
sophisticated proposer who can see other people's blocks, and potentially listen to
bribery oracles and other types of things, you know, longer than other people can.
I also think that you run into a lot of issues with, like, the deterministic ordering rule that
we were talking about earlier, where basically that proposer is also going to be able to
interact, in a maximally advantageous way, with the ordering
rule and any applications that rely on it, compared to the other proposers.
So like one of the most common or like one of the probably best ordering rules proposed
or most reasonable is priority gas auctions, right?
Like what we had in the Ethereum mining days.
And another thing that I wonder about with the multi-proposer version of this that tries
to achieve like no builder and we're going to have a deterministic way of merging the blocks,
is like in addition to David being able to see Binance last,
David also very plausibly is going to be able to see what all of the Uniswap trades in the three of our blocks are, and sandwich all of them, and penny the auction, the priority auction, against anyone else who wants to participate in any protocol on-chain, right?
So I think that there's a bunch of different kinds of advantages that flow down.
And like the resulting meta game, I think in many different ways, like power will concentrate onto the sophisticated proposer.
but then revenue will concentrate to the sophisticated proposers and then everybody will try to get sophisticated, which is already happening.
But I think-
But that's a centralization point.
But it's already centralizing.
I mean, we have all these kinds of timing games.
We have games that are not being played right now, which is-
That's kind of exacerbating the issue, right?
No, I think it's actually not, it's making the issue better because right now, if you're faster, you get an extra 100 milliseconds or a second of proposer monopoly.
And that's incredibly valuable.
And in this system, you can get an extra 100 milliseconds of being one of N proposers.
So you'll get 100 milliseconds of last look.
It's not as valuable as an extra 100 milliseconds of proposer monopoly, because proposer monopoly is incredibly valuable.
Okay.
Here's a basic mental model that I use for it, right?
So in the multi-proposer case, right, we have basically this deterministic ordering rule that's sitting in between multiple proposers.
and we're going to say that there's some difference in sophistication between them now, right?
And then the other side of this, we have people bidding for inclusion with also obviously like a variety of sophistication, right?
And I think that like something that worries me a lot about this, if I had to boil down my concern with it, is that like, again, if we go back to the proof of work days, like the original MEV paper was like how only the bidder side of this market, the people who wanted transaction.
inclusion, how their behavior was like unintuitive and unintended, even while minors were trying
to run honest PGAs or honest priority auctions. And I think basically like the MEV meta game
that would develop on top of braid is going to be the result of bidders and the proposers
with varying degrees of sophistication, like trying maximally to manipulate the system to their
own benefit. And I'm personally, like, pretty unconvinced at this point that we can avoid a
situation where the metagame is, A, going to be a lot different than we intend, meaning it's not
going to be people bidding their, like, true value in this priority auction or something or other
ordering rule, interacting with it, like, quote unquote, honestly in the way that we intend.
And with, like, comparable levels of sophistication or revenue on the part of the validators, I think probably what's going to happen is both sides of the market are going to try to manipulate this mechanism for their benefit. And then the most sophisticated of them are going to garner an outsized amount of power.
And I think the delta between the least sophisticated participant in that market and the most is going to be significantly greater than the least sophisticated proposer and the most sophisticated proposer.
And if this were all true, then I think at the end of it, you end up with PBS, just in like a more opaque way.
Okay.
Well, first of all, that's wrong. There's actually not going to be a bigger delta between the most sophisticated and least sophisticated.
I think you're reasoning by analogy, typical VC.
But, but today...
Let me finish. Let me finish the point.
You just went on for five minutes.
You're reasoning by analogy from a model that we wrote about a different context.
And when we actually solve out the model, the adverse selection isn't going to look as bad
as you think it is.
So let's go...
We'll solve out the model.
I understand the propensity for, oh, we don't know what's going to happen. But let me tell you, Max Resnick is here. SMG is here. We don't do things
the same way. And we're going to prove the properties that we say the mechanism has. And we are
going to show formally what the equilibrium is. This is not going to be a situation where we come in.
We ship reams and reams of unsubstantiated claims so much that you can't read them all,
like a legal battle where they send you all the file cabinets and they don't digitize it for you so you can't look through it.
And that's not what we're going to do.
And if you don't agree with the proofs, then we can argue about the proofs.
But like I'll just say, you know, hey, you may have some skepticism now.
Okay.
That's on us.
That's our prerogative.
We're going to prove the results, which is how things should work.
And then we don't have to go into this saying, oh, there might be something that we're not considering. No, we proved it, QED.
I'm super excited for that.
I'm super, I'm super, super open-minded to it.
I also don't like the research DoS.
I think we're very aligned on this.
I'll address just one thing you said, which is, to me, the difference, say, in validator sophistication would be how late in the slot you're going to call MEV-Boost to get the block. That delta, if you're going to play the timing game, is not that great. I don't recall the number offhand.
But I think the difference between a proposer who's not sophisticated in a multi-proposer scheme and one who is, if the one who is is getting all of the order flow and the one who's not is getting functionally none of it, I think that delta is going to be a lot different than, you know, if you call the MEV-Boost auction 300 milliseconds later or something. Because in that case, the builder is still getting all the same order flow, just not the one in the last 300 milliseconds. My claim is,
That's going to be a great point to debunk or prove in the work, I think.
Yeah.
Now, I do think that Vitalik, as usual, had the best tweet on the subject recently, which is basically
you could boil, I think a lot of the research conversation about this down today with,
like, can we remove or at least prove some things definitively about this last look?
Like, can we get rid of the last look in multiple concurrent proposers that would lead to this
like breakdown that I'm articulating? Or I think to your point, can we definitively prove some
things about how bad or how not bad that last look is? I think if you know how to quantify the latency
game, then you describe it and you describe exactly the issue with it. If you don't, you write something
on Twitter that says, oh, there might be a latency game here. There might be adverse selection here.
So I think what we need to do, it's on us to demonstrate. That's like if we say, hey, you should do
braid. That's our prerogative to demonstrate it. We're going to demonstrate it. And if we can't,
then we'll probably show it's impossible, and then we can all move on.
Right, I was going to say exactly that. I think even in both cases, I think we're all truth-seeking here. In both cases,
if it can be done, the proof is going to be great. If we find out an impossibility proof,
that's also going to be amazing because it's going to save us from all the DOS and the Twitter situations,
which maybe there's a meta point here.
around all of us, maybe doing up front a lot more work before we try to like, you know,
garner excitement or criticism around an idea.
New projects are coming online to the Mantle Layer 2 every single week.
Why is this happening?
Maybe it's because Mantle has been on the frontier of Layer 2 design architecture
since it first started building Mantle DA powered by technology from EigenDA.
Maybe it's because users are coming onto the Mantle Layer 2 to capture some of the highest yields available in DeFi, and to automatically receive the points and tokens being accrued by the three billion dollar Mantle treasury in the Mantle Reward Station. Maybe it's because the Mantle team is one of the most helpful teams to build with, giving you grants, liquidity support, and venture partners to help bootstrap your Mantle application. Maybe it's all of these reasons all put together. So if you're a dev and you want to build on one of the best foundations in crypto, or you're a user looking to claim some ownership on Mantle DeFi apps, click the link in the show notes to get started with Mantle.
The Obol Collective is up and running and is maybe one of the most important collectives that you've never heard of.
Obol is bringing distributed validators, or DVs,
to the Ethereum staking stack.
Distributed validators allow multiple parties of people
running multiple nodes to create a single virtual Ethereum
validator.
Together, this makes participating in Ethereum consensus
more accessible, more affordable, and more inclusive.
It is a strict improvement to the Ethereum staking tech stack.
With collaboration from Lido, Etherfi, EigenLayer,
and 50 other entities and thousands of individuals,
the Obol Collective is working to scale
and decentralize Ethereum using DVs.
And now you can get involved,
introducing the Obol Contributions Program.
Visit obol.org slash bankless
to stake on DVs,
either through a partner staking protocol
or at home.
Earn access to future governance
and ownership in Obol
while contributing to retroactive funding
for projects that are actively
decentralizing Ethereum.
This is your opportunity
to secure inclusion
in one of the most important projects
working to scale Ethereum's foundation
for future growth.
Visit obol.org slash bankless
to get started.
That's O-B-O-L dot org slash bankless.
The Uniswap extension is here. The self-custody wallet created by the most trusted team in DeFi, Uniswap Labs, designed to make swapping feel effortless. This extension lives in your
browser's sidebar, letting you swap, sign transactions, and send or receive crypto without ever
losing your place on the internet. Plus, with human readable transaction messages, you'll always
know exactly what you're signing. Navigate a multi-chain world effortlessly with support for
11 chains like Ethereum mainnet, base, arbitrum, and optimism. No more chain switching or token
importing, all your assets are right where you need them to be.
The Uniswap extension is designed to level up your swapping experience with other Uniswap
Labs products as well.
Easily onboard to the extension using the Uniswap mobile wallet to begin managing your assets
across platforms and take advantage of smooth, seamless synergies with the Uniswap web app.
So go and download the Uniswap extension today by clicking the link in the show notes.
Just another way Uniswap is helping you swap smarter.
Speaking of garnering excitement, since we've been poking holes in Braid since its introduction
in this conversation, I do want to give Max the floor on how Braid might benefit everything upstream of it or downstream of it. I know there's a conversation around DeFi here. We started with, like, poking holes, but I want to give Max the chance to just, like, talk about how Braid is the best thing ever. What are the big benefits that you see of this multi-proposer paradigm, granted that we are able to figure out the research questions around it and figure out that problem?
Yeah. And ultimately, like, we're not trying to push anything for any personal gain. Like, we're just all here trying to make Ethereum
as good as it can be. And if, you know, I happen to believe that we can demonstrate that Braid has
certain properties, we're working very hard. There's a bunch of people working on the paper who are,
you know, working on proofs probably as we speak right now to try and show those properties. And if we
can't demonstrate them, oftentimes when you try to prove something, you either find that it's
possible and prove it or find that it's impossible and then show it.
And we'll release either way.
We're not going to be like a medical study where we don't release the negative results.
We're going to release either way.
I think it's really important.
I think there's a lot of complexity on the roadmap as well that introduces a bunch of problems that Braid, and execution-consensus separation, is an alternative to.
I'll give one example.
ePBS, there's a design out from Terence and Potuz right now, which is, I think, originally based on this payload timeliness committee, which, I'm not sure it was Mike's idea. He did write a post about it a while ago.
From PEPC? But, from PEPC.
The payload timeliness? Yeah, okay. So, yeah, sorry if I'm not giving credit where it's due. I'm just not recalling exactly the thread of research.
Mike was also into that. Mike was in all of that.
Okay, yeah.
So this payload timeliness committee idea is basically: you commit to the block as the builder.
You send your block commitment to the proposer.
And the proposer selects the highest commitment.
And then afterwards, as the builder, you have some time to reveal your block.
And that is basically this alternative design for ePBS. And we know we want to do ePBS in protocol somehow. So Braid is a way to do ePBS. And this is the alternative, the most kind of developed chain of work on the alternative fork of doing just ePBS.
So one attack on that, which is an issue, is you could place some large trades.
And then if the trade goes against you, you choose not to release.
And so you say, oh, well, then we have to have a huge bond available to slash you with.
But, you know, what's the size of that bond?
Again, we have this uncapped bond problem.
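The withholding attack Max describes can be written as a toy payoff comparison. This is illustrative only, with assumed names and numbers rather than the actual ePBS design: after committing, the builder reveals its block only if the realized PnL of its own trade beats forfeiting the slash bond, so the bond deters the attack only when it exceeds the worst-case adverse move, and that worst case has no natural cap.

```python
def builder_decision(trade_pnl_if_revealed, slash_bond):
    """Commit-reveal builder deciding whether to reveal its committed block.

    trade_pnl_if_revealed: profit (negative for a loss) the builder's own
        large trade realizes if the committed block gets published.
    slash_bond: what the builder forfeits by withholding the block.
    Returns ("reveal" | "withhold", payoff).
    """
    reveal_payoff = trade_pnl_if_revealed
    withhold_payoff = -slash_bond      # lose the bond, dodge the bad trade
    if reveal_payoff >= withhold_payoff:
        return "reveal", reveal_payoff
    return "withhold", withhold_payoff

# Trade moved against the builder by more than the bond: withholding is rational.
print(builder_decision(trade_pnl_if_revealed=-500, slash_bond=100))
# Bond larger than the loss: revealing is rational, the attack is deterred.
print(builder_decision(trade_pnl_if_revealed=-500, slash_bond=1000))
```

Since the size of the builder's trade is chosen by the builder, no fixed bond dominates every case, which is the uncapped bond problem in miniature.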
So my argument would be, yes, there's a lot of complexity here. But in the words of Vitalik, if we just eat the complexity, if we can figure it out, if we can show the results are true, which we're working on, then we can eliminate execution tickets. We can achieve MEV burn. We can get rid of ePBS. We get inclusion lists as a byproduct. All this stuff
that's on the scourge part of the roadmap, we get with one stone. And it might be a big stone,
and it might be a hard stone, and it might be a new stone. But if we could do that all with
one fell swoop, I think that would be good for making Ethereum look more like the V3 version of the Falcon engine than the V1 version of the Falcon engine.
I think that's what really attracts me to this conversation.
This phrase eating complexity has been thrown around a lot when it comes to this idea of
multi-proposer architecture.
We kind of, like, set up this MEV landscape part of this discussion talking about all these incremental discoveries about MEV and then the next incremental patch for that MEV, and kind of not addressing the root cause of the issue, which could be the fact that everything is a leader-based system. And in multi-proposers, you at least are leaderless-ish. When we started this multi-proposer conversation, we talked about my super-sophisticated, highly bandwidth-connected proposer. And so there's still a leader-ish part of a multi-proposer that we need to kind of figure out. But the idea here is, like, call it four, eight, some number that we'll probably select. If there's eight proposers, all of a sudden it becomes much less of a leader-based system.
And you're saying that this is an elegant solution to eat a lot of the complexity of all these
patchwork incremental additions to the Ethereum research roadmap, which feels elegant.
And I know that in the Ethereum research space, we like elegant solutions.
It just feels good.
Everything kind of snaps into place a little bit better.
And that's kind of like one of the reasons why we're talking about this in the first place.
Yeah, exactly. I would just say, like, the other designs are not necessarily bad, but I would say there's not a lot proven about them.
And that's kind of, if you want to talk about, Charlie, unintended consequences, latency games, pushing the MEV elsewhere, Georgios's point: take a look at these proposals, catch up on what the latest research is, and you'll find quite a few issues, I think, if you read them seriously. And that's not to take a shot at anybody. It's just that these problems are really gnarly and tough. And like Georgios said, when you do something, you move it somewhere else.
I completely agree.
I completely agree with everything you said, actually.
And to be clear, I'm actually very bullish on, like, multi-proposer or multiple-party block production, as we talked about.
I guess, like, my two cents would just be, I think that we should be basically very cautious about the existence of sophistication and exactly how that will express itself in whatever market
structure we try to create. And I think that, to the extent that we're ever imagining that parties are going to be on a level playing field, we should have really strong reasons for believing that. And I think that is where a lot of the research around Braid is focused.
Can I just say, I think, like, if you are being intellectually honest, our strand of the research, our original paper, has done some of the most head-on tackling of that sophistication and rationality of the actors. And we understand it, and we talk to the rational actors just like you. And I think, like, we're very aware of that. And we're economists, so the way that we model things is through rational actors participating in games, and we eat the complexity of that to try and get good proofs.
So I think if you actually compare what we've done in terms of tackling incentives and rational actors and latency versus what some of these other proposals have done,
I think we're actually far ahead in terms of analyzing the ultimate outcomes.
Yeah, yeah, I agree.
And I don't have, like, a decided opinion on what Ethereum should do yet, but I'm fairly sure that wherever my opinion lands is probably going to be, like, almost certainly
going to be derivative of the work that you guys started, whether it is multiplicity or braid
or, you know, whatever the most efficient implementation path is.
Again, I would say, and not to belabor the point, my, like, narrow qualm would be: can we remove the existence of a last look?
I think that's probably one of the biggest outstanding research questions.
And then to the conversation we were having earlier, even if we netted out that we can't. I'm very open-minded to the idea that we can, or that we can have really good arguments that it's not as bad as we are nervous it could be. Then, yeah, I think the
implementation route that Ethereum takes to something like this, some method of improving censorship resistance,
is probably either going to be like multiplicity or braid or, you know, or whatever else is in this
line of research.
Yeah, I mean, how awesome, in the spirit of the Vitalik tweet, that we went from, years ago, doing these vast, you know, MEV taxonomies, shout out to Tina, and trying to figure out what was even going on. And now we're down to: can we get rid of last look? That's so much more
tight of a problem. I agree. And I think we're getting close. I do also want to talk about
the improvements that multi-proposers might bring to Defi specifically. I think we all would,
might agree that DeFi is, like, the first primary big use case of blockchains past just simple money. What's the relationship between a multi-proposer architecture and the DeFi vertical?
So, like I said, my goal, I said this on Twitter, my goal is to make the blockchain so awesome that the majority of EUR-USD volume trades on chain. And that might be a pipe dream. I currently think that the probability of that happening is greater than one percent, which is why
I'm still working in crypto.
I really am excited about financial possibilities on chain. But the thing is, we have Turing-complete smart contracts.
If we add on censorship resistance, they're going to be even more powerful.
Smart contracts are what won me over into the space, by the way. It's, like, a really cool primitive.
And you can do anything with them.
That's what Turing complete kind of means.
And then you add on censorship resistance, you can do so much more.
So like, what are games going to look like when you can't censor the opponent's bet?
You know, that's going to be sick when you can play poker on chain.
And all these other things that I think it will enable as well.
And then there's also a conversation around the layer two to layer one relations that I think this brings.
And this might open up the door into like a much broader subject.
I'll pose this idea and maybe you guys can reject it or not.
Say we get past the hurdle of a multi-proposer architecture being a strict improvement. Max here has done all the proofs that he needs to write in order for us to all agree that multi-proposers are a strict improvement. So we're in that universe now. And then we also merge it into the Ethereum layer one, so the Ethereum layer one is running this multiple concurrent proposer paradigm. Does this bend the priority of execution on layer twos versus execution on layer ones back toward the Ethereum layer one being a place where really good execution happens? Whereas Ethereum previously has always been about pushing execution to layer twos.
And all of a sudden, layer twos have this single centralized sequencer doing this one role.
But on the Ethereum layer one, we have this multi-party, multiple concurrent proposer execution
environment, which is better than it previously was.
Does this change anything?
Do you guys think, how does this impact Ethereum's previous and current roadmap, the rollup-centric roadmap? Does it at all, if we are prioritizing execution on the layer one at a higher level than we were previously?
Not sure it does.
Okay, rejection of the idea.
Yeah, why would it, right? Like, there are two separate things, almost. Because doing more on L1 means that you have more space to do things with, and this is not creating more space. This is just saying, how do we utilize, or who writes to, that same space, the block. So I don't think there is a change.
Okay.
No, I don't think.
Maybe this conversation.
Actually, one thing to address is that you said the L2s have centralized sequencers.
I think the whole layer two space is in the infant or adolescent phase, where it is in that world where it has, like, one sequencer, has upgrade keys, all of that. I think zoom out and apply the same maturity that Ethereum has, where Ethereum, we said, you know, is facing the hardest MEV problems, and we've got it down to the last look. And we'll have fault proofs, and we'll have a sequencer that is not one person, that is an actual consensus network. Similar techniques
may apply. And actually because you're not in this eventually consistent world,
because I would think that layer twos apply standard BFT algorithms, I would think that the multi-proposer BFT world would become much more exciting. So I think the ideas apply on layer two, whether it is FOCIL, this, or something else.
Yeah, but I don't think it affects really how to think about how much, you know,
activity happened on L1 or L2.
Maybe if I was to ask that question again,
I would need to introduce this also other potential idea of increasing the Ethereum layer one block speeds.
Because then we are increasing space at the layer one.
How much more, though?
But now that's a meaningfully differentiated conversation for multi-proposers.
Yeah, but, like, how much are you going to increase it, right? Like, 10x it? Like, that's not enough.
I think I agree with Georgios that compression is incredibly important.
And so like we just probably can't get everything we need from the L1.
I think multi-proposer enables some new ways that base rollups can work as well. So base rollups right now, the only thing you can do is basically make the block that maximizes the payout to the L1 proposer, because, again, the L1 proposer is a monopolist.
But what if you have, like, a base roll-up,
you can define an arbitrary objective function,
which is, like, I want to make the user welfare the best.
Or I want the user welfare to be the best subject to these constraints.
Anyway, you can kind of make a base rollup based on that, on a censorship-resistant version of the L1.
And then to the point that Georgios was saying,
like, these are problems that are going to need to be tackled somewhere.
Like if the L2s are where execution happens and we want decentralized L2s, then they're going to have to deal with this multi-proposer problem as well because censorship is just a universal constant.
And that they're, I hope, going to look.
A lot of people ask me why I'm trying to work on, you know, maybe getting this into the Ethereum research circles and maybe even into the protocol. And they're like, oh, why are you doing this? Because, you know, Ethereum is so ossified.
They don't do many changes.
But I think that's actually just not true.
Like, we shipped the merge.
That's a huge change.
And we can do more huge changes if it's the right thing to do.
That's not even the most recent big thing that Ethereum has changed.
We did 4844 recently, too.
It's like we've been doing tons of big changes and we've been doing it safely with minimal disruption.
And so that's in contrast to other ecosystems that are doing big changes very frequently, with frequent disruptions.
Oh yeah, like the daily Solana hard fork, you know. But also even on layer two, like, sometimes a new upgrade might be announced, like, a week out or a few weeks out.
So the iteration cycles are much different there.
Two points to add. One, it's actually interesting that maybe on the layer two you don't need as much of the, you know, sequencer being censorship resistant, just by virtue of the fact that you can use the L1 slow path to prevent the censorship. Granted, that doesn't address the MEV part of it.
And what was the other part?
Oh, the other part was that, in the spirit of talking about large changes, Ethereum has, we'll see how far out it is, but the upcoming statelessness upgrade. And that's actually, I think, the elephant in the room, the biggest actually breaking change that we're making. And I would put the merge here; I would put Braid and statelessness on the equivalent level of, like, breaking, or, like, big, big, meaty change that we need to think really hard about. And, yeah, we should evaluate it with a similar level of scrutiny.
What other parts of the Ethereum ecosystem might a Braid implementation impact?
We just brought up base roll-ups.
I believe pre-confirmations just get completely eliminated.
So you can still have inclusion preconfs. You can't have execution preconfs, because what that is, an execution preconf, is basically the monopolist telling you, I will use my monopoly to make sure that you get this particular piece of state early.
I mean, I think there's very good arguments that you can't have execution preconfs anyway.
So I don't know how much we're actually giving up on that.
And then, yeah, I also think, it's not directly related to Braid, but probably something everyone can agree on, at least a lot of folks have now started to agree on,
is that lowering the block time is a really admirable goal
that would improve base layer UX,
would improve base layer censorship,
regardless of what else we choose to do,
would potentially improve,
like make base roll-ups, like, work in a more reasonable way.
Or a less crazy alternative.
Yeah, and frankly, I think that a bunch of the downstream research questions
of how do we get faster block times
without compromising on decentralization
are just like way better research questions than like a lot of, you know, like where a lot of energy has been going.
There's a lot of other stuff I'm interested in too.
But I think like if we, yeah, if we can get lower block times without compromising too much on decentralization slash in a reasonable way, that would be like, yeah, I would love to spend like a lot more time on that.
I think that would be awesome for Ethereum.
Can we do two seconds?
Before we formally close the door on the Braid multi-proposer conversation, because I feel like we're coming to an end on it, I want to make sure that we've covered all the bases here. Is there any part of the Braid multi-proposer conversation that we didn't touch on that I'm skipping over?
Well, I guess since we're about to shift, maybe I can just segue us in with Braid. One of the things that is really difficult in terms of getting shorter block times, and we see it on Solana as a symptom of this, is basically the leader rotation.
And so Solana has 400 millisecond blocks, but they actually have 1.6 second leader rotation.
So the leader has four blocks in a row.
This isn't great when the leader has a proposer monopoly.
But if they don't have a proposer monopoly, we can be more accepting of them proposing four blocks in a row.
So I think Braid could actually potentially help us get to shorter block times
through this vector of not having to rotate the leaders as much.
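The arithmetic behind that point, as a quick illustrative sketch (the 400 millisecond and four-blocks-per-leader numbers are the ones quoted in the conversation; the function name is made up):

```python
def leader_monopoly_window(block_time_ms, blocks_per_leader):
    """How long a single leader controls block production in a row."""
    return block_time_ms * blocks_per_leader

# Solana, per the discussion: 400 ms blocks, four consecutive blocks per leader,
# so one leader holds a 1600 ms monopoly despite the short block time.
print(leader_monopoly_window(400, 4))
```

Under a multi-proposer design, that same four-in-a-row schedule no longer grants a monopoly, which is why less frequent leader rotation, and therefore shorter block times, becomes more acceptable.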
I think that there's a lot of weight on what Vitalik tweeted about
in Can We Remove the Last Look?
I think that I probably sit closer to there's possibly an impossibility result somewhere in here that I'm nervous about.
And I am very excited to see what future work comes out of it.
I think you also will change your mind tomorrow, as you're wont to do, Charlie. But I will show you some evidence that the last look isn't quite as bad as you think it is.
Like I know the model that you're thinking of in your head because I wrote about it in the paper.
and I think that there are a lot of arguments for why it's not quite as bad as you're thinking it is.
It doesn't unravel completely the way that you think it does.
So we can chat about that another time.
But I think last look is obviously not good.
I think 50 milliseconds of last look is a lot better than two seconds of last look, just to be clear.
And I think that 50 milliseconds of last look isn't the end of the world.
I mean, I would understand that, though, right?
because the 50 millisecond advantage is accessible to, you know, two people,
whereas the two-second advantage might be accessible to a broader audience.
So I think, like, when I think about...
No, it's accessible by definition to one person in the FOCIL design.
But you don't?
No, he's making a point about centralizing effects, actually.
Yeah, I mean, I think everyone's super excited to hear the arguments.
I guess, like, my sort of epistemic humility about it is just like, you know, so far we've
never really seen like a trustless mechanism where power in a lot of different forms doesn't
concentrate. And so I think like probably I'm most excited about a roadmap where we pursue some
form of braid or multiplicity and continue quarantining the last look. And I worry quite a bit that
if we don't do that, if we don't acknowledge the last look's existence and quarantine it, then the
meta-game that we end up with is going to be not what we intend, but in really subtle ways
that are actually, like, much more destructive. And the simple analogy I would give is, like,
there was a point in which Solana had no Mempool. And then Gito created a Mempool. And then
you started to see Sandwiching because Gito created the mempool, right? And then everyone gets
mad and says, Gito caused sandwiching. And then Gito gets rid of its mempool on Solana. And we fast
forward, like, a few months, and it turns out there's a bunch of sandwiching still. And
why is this? Well, it turns out it's really interesting. There's a bunch of different reasons.
One is, like, I think, a Russian or some weird-country-sourced validator has
accumulated a bunch of stake in order to be able to see transactions. There's a bunch of backroom
deals between different validators and different people with views into the mempool from RPC providers
and that kind of thing. And I guess like I'm, yeah, I'm super interested to see the future research,
but I guess where I land on it today is just like we've so far never been able to create a mechanism
where power doesn't concentrate that way.
And rather than sort of pretending that it won't or closing our eyes to it,
I generally prefer the direction of like acknowledging it and quarantining it.
And I think we still got a much better system on a bunch of dimensions
if we do implement braid or multiplicity for the weaker form of censorship resistance
that that enables.
I would just say, I don't think that's what we're doing. If you look at our research, I think Mallesh and I in particular, SMG, have actually done the most work on latency games of anybody in the space, actually. Like, from a research perspective, in terms of the microstructure of Ethereum and quantification of latency games, understanding these centralizing effects, half our papers are titled Centralizing Effects of X. We have a new one coming out, Centralizing Effects of Attester-Proposer Separation. Like, you're preaching to the choir here. We absolutely acknowledge those things. We're not going to, you know, try and push ahead with something that we believe is inherently flawed on those dimensions, because those are the dimensions that we spend a lot of our time thinking about.
And I would just say, like, we shouldn't try to paint braid as like, oh, we're going to ignore last look. Absolutely not.
Like, in fact, we have specific desiderata on how much last look there's going to be, and we want to prove properties about that.
The quantification, you mean?
Yeah, I think, I think in some sense, like, there's almost no quantification of what that looks like in the FOCIL design right now.
But it's also not a goal.
Because we agreed earlier on that the goal is just for, you know, not the traders.
I don't think that's the only goal, at least the early versions of multiplicity.
That wasn't just...
No, totally, totally.
I'm just telling you that reasonable people might disagree on what you can achieve with a simple,
easy-to-implement mechanism.
And maybe the point is that this is like as far as we can get
with a simple enough mechanism and then maybe, okay,
if you can do beyond that, might require a bigger change.
That's much more nuanced to analyze.
Like I am really, like I said, focused on the traders and making,
that's what my research is about.
That's what I care about.
That's what my goal in doing research in crypto is:
basically, make sure the trading experience
works, make sure markets work, and make sure spreads are tight. And I don't care as much
about the other stuff that these other things are trying to get. I think they're much, much,
much less important for us to do. And if the complexity overhead is similar and we get a very
nerfed version, as Dan Robinson, your colleague says, a nerfed version of inclusion list that
isn't for MEV, what's the point? We're going to do almost all the same work for this thing.
And my contention would be that, you know, I kind of never really related to Ethereum L1
as the place where we're going to have, long term, the high-value trading.
I always thought that we would move it out to the layer 2s, where we can
actually quarantine and solve the problem in more tailored ways. So that might also be part
of my embedded assumption here: for me, the L1 is for censorship resistance,
and in a transitory phase the layer 2s end up having all the activity.
And that's where you can play more, play more with the parameters of the system.
Whereas the layer 1, it's a bit more sensitive to me to start to try to do too many things.
Yeah, I think if the L2s end up being the ones who use Braid, because that's where it's relevant and that's where the trading's happening, I'm not going to be miffed about it.
I think that's a fine argument to make.
But I like the L1.
I want to make the L1 great too.
Georgios, how's Rust going?
Pardon? How's Rust going?
Amazing. We've been doing a lot of work over the last year.
We cut the first 1.0 release of the Reth project a few months ago,
after we did a security audit with Sigma Prime, who are the authors of the Lighthouse project,
big shout out, amazing team, and we love working with them.
Actually, like, we have contemplated in the past implementing consensus
implementations, not necessarily for L1 (we are not interested in doing an L1
consensus implementation), but for decentralizing sequencers on layer 2, or for
facilitating communication among off-chain services like EigenLayer AVSs or Symbiotic
restaking networks. So generally, really excited. We employ almost 20 people right now working
on open source. I think most of the trading that happens
on the entire EVM ecosystem uses our infrastructure,
whether it is Dragan Rakita's revm implementation,
or Foundry for developing smart contracts,
or even builders using our code to build blocks.
And notably, Flashbots a couple months ago
released rbuilder, which is an open-source MEV builder
written in Rust, to level the playing field between builders.
So yeah, I've never been more thrilled.
And next week we're actually throwing a conference in San Francisco
where we'll showcase for two days what we've done
and hopefully get people excited about building more with it.
I've heard rumors that FlashBots isn't the only builder
using some parts of Reth for their stack as well.
Totally.
It might not be open source.
Totally, yeah.
What are your goals for Reth over the next year or so?
What do you want to see happen in the Reth ecosystem in 12 months?
Yeah.
So how I relate to the Reth project is that it becomes this SDK
that everybody uses to build nodes and other EVM and beyond EVM centric infra.
And, you know, people won't change for a long time.
That's why we had the substrate, the Cosmos SDK.
And I think Reth's final form is that it actually becomes an evolved version of such projects.
You would use Reth to build Ethereum L1 nodes.
You would use Reth to build OP Reth, Arbitrum Reth, you know, Polygon Reth, whatever.
You can name a lot of things in that direction.
Actually, this morning, we were talking about how we could implement a Starkware flavored
version of Reth that would use Cairo as its runtime.
It would be a great stress test of the flexibility of the, as I've been calling it, the Reth SDK.
So how I think about the Reth project is as stability for Ethereum L1, and we hope to push forward
the research front there.
That's why this conversation is particularly interesting to me.
It's been really humbling to be a part of the core dev process.
It gives you new appreciation for being sometimes more conservative about what decisions
to make on the protocol because you have to be really precise, really careful, but also it
has given us great appreciation for how we can bring our more, you know, aggressive shipping
to Ethereum while respecting this constraint of being very principled.
And we think we strike a fine middle ground right now.
So that's on the Ethereum L1 stability.
On the layer 2 side, I'm really excited about breaking through what we've been calling
the gigagas barrier, which is this:
if you think of a median Ethereum transaction as being 100K gas, one gigagas per second means 10,000 transactions
per second, and we think that we can give every layer 2 in the world that for free.
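As a quick sanity check on those numbers, here is the arithmetic, assuming the roughly 100K-gas median transaction cited above:

```python
# Throughput implied by the "gigagas" target discussed above.
# Assumption: a median Ethereum transaction costs ~100,000 gas.
GAS_PER_MEDIAN_TX = 100_000
GIGAGAS_PER_SECOND = 1_000_000_000  # 1 Ggas/s of execution throughput

tps = GIGAGAS_PER_SECOND / GAS_PER_MEDIAN_TX
print(tps)  # 10000.0 transactions per second
```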
And I think it's very important to get the ecosystem to stop talking about performance almost.
I don't think it's, you know, it's kind of like funny because we think a lot about performance,
but I kind of think it's really unexciting to talk about.
I think it's table stakes.
And in the cloud world, you know, if you want more performance, you just pay for it.
And that's it.
You know, you up your bill a little.
You don't raise a hundred million to go do it.
So super excited for breaking through the performance barriers with the Reth Project's next steps.
And we'll have something exciting: we're going to cut a new release in two weeks, where we hope to have a very exciting benchmark for that.
And the third thing is I just want the Reth project to facilitate experimentation.
I think it's very important for nodes to stop being thought of as this scary, huge
monolith. You know, we talk about modular blockchains; I don't really relate
to that term. I relate to a modular codebase. By having a very modular codebase, you can onboard
new people very easily and iterate very fast. In the last 30 days, the Reth project had 500 pull requests
and 50 contributors, and the people who work on Reth are like 10. So you can see how, by being
very intentional about having a clear testing process, clear documentation, and very
clear separation of concerns, you can move very fast without actually breaking things.
And I actually find that really inspiring.
And it's been just an absolute privilege working with people from the core devs,
from layer twos, and our own team to make that reality.
Yeah.
I do think like every single corner of Ethereum has at least one eye on the Reth project and the
progress.
So I think everyone's rooting for you.
Yeah.
We actually have been working also on, you know, the Reth project is not just for
nodes; it's for building additional infrastructure.
So recently we've been collaborating with Succinct, who's building SP1, which is a zkVM, to basically
just import Reth code into an SP1 program and just run it.
And that's really impactful, because it means that now, okay, you can do a zkEVM with no code
changes, reusing layer 1 Ethereum code.
And that's again very important, because instead of doing cowboy modifications
that come from custom code that, again, somebody raised a hundred million dollars to build, it's just going to reuse production code that's used and secured and trusted, ultimately, by a very wide set of production operators. So all that is to say: very pumped. It's been almost two and a half years doing this, and, yeah, I'm excited for everyone to go and run Reth in production.
Cool. Can I just say, I feel like there's kind of a conspiracy theory among
the Paradigm skeptics who say, oh, Reth is kind of designed to take over core devs and have influence.
But I think if you just listen, just replay the last five minutes of Georgios, you know, very excitedly talking about the tech.
Like, it's clearly not that, right? It's clearly just people who want to, like, make the system better and fix things.
We like the tech.
Yeah, it's just, like, so not that.
Nobody here. We work, I work at Consensys.
We obviously have interests, but, like, the goal is not that.
It's just to make Ethereum awesome.
And if we make Ethereum awesome, then everybody is going to be better off.
And we're going to grow the pie 100x and everybody's going to be 100x.
And don't get me wrong.
We have views, right?
And the views might not always align with everyone.
But, yeah, we have a portfolio and there might be actual interests, you know.
But it is important that you surface the interests and can have a real conversation about them, versus
being, you know, kind of suss about them and saying, oh, I want this EIP to go in.
No, you have to show proper motivation.
And for example, recently in the core dev conversation, there's been a large push on EOF,
which is an EIP for improving the EVM.
And we think it's important to push that forward to make the EVM better.
It's not like there is some ulterior motive because maybe it would make someone else better.
You just need to make the system better.
You bring sufficient motivation to the table.
And that's how high the bar for the core dev process is.
Charlie, are there any other paths of the Ethereum roadmap, Ethereum research efforts
that are floating around your brain that really excite you or interest you that we haven't yet talked about today?
That's a good question.
Honestly, I'm excited for us to make real progress on the MEV issue.
Although I think you've heard, I think like the debate that you probably heard today,
is actually really positive because we're finally making progress towards like a shared understanding of like reality and what's achievable basically.
And I think for the last, I think it's probably one of the most controversial parts of the protocol roadmap.
Like, you know, more or less people like the blobs.
People like the blobs.
We like, you know.
He doesn't like the blobs.
I know, but more or less, whatever.
Like, I think, and I tweeted about this recently, there's a number of freebie, or just super
positive, protocol improvements that, when they happen, everyone's going to be stoked about, and I'm excited
about those. But I'm probably honestly most excited about us having gone from, I would
characterize it as, a situation where there wasn't even that much agreement on what the
important questions in the MEV and block production research space were, to there being
really focused conversation. And the set of things that we could pursue has gone from this really
wide map to more of: okay, we have some general agreement that we're going to do
something in this direction. And I think that's incredibly bullish. Yeah, we're arguing
about whether we're going to have 100 milliseconds of last look or two seconds of last look. That's a pretty
good place to be at.
One thing I want to say: because it's an attention game in
crypto, and attention is a currency, and the more attention you get, the more of
everything you get, there's a tendency for the excitement around Ethereum to be on the biggest
possible proposals. And Braid is a big proposal; like you said, it's maybe
even more of a lift than the merge itself. But I'm also working on a few other things,
and I encourage everybody within the ecosystem to work on these kinds of parameter-tuning things,
and maybe it's more suited for an economist type of thing. Let me just mention two of them.
I think one of them I've been harping on is the lack of a reserve price for blobs.
And it's not just that blobs are being sold for seven wei, which is $0, effectively.
In a whole month, in June I believe, all of the blobs that were sold
went for $5 total over 32 days.
Right.
And similar for other DA networks.
Right, similar for other DA networks. So I think the argument is, oh, maybe we should price them at the marginal cost to the network. That's an important point. I'm doing the work right now to try and figure out what that marginal cost should look like. Another reason you want to do this is that seven wei is so small; it's oftentimes 12 orders of magnitude smaller
Than the worth of the bandwidth.
It's not just the worth of the bandwidth; more importantly, when we want the fee to kick in in times of congestion, it takes a long time to kick in because of the way 1559 works.
You can only adjust so much per block.
And when it's nine orders of magnitude, twelve orders of magnitude, that you need to increase, it takes a long time.
And in that time, the fee market is not operating properly.
So my argument is, let's bump it up, put it up somewhere near, you know, 10 to the 9th
rather than 10 to the 1.
So, a gwei instead of a wei.
And it will take many fewer cycles to get up to speed.
And I think that would stop a lot of the spikiness that we see going on.
And it would make the market slightly more healthy.
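To make the "many fewer cycles" point concrete, here is a rough sketch. It assumes an EIP-1559-style exponential update whose fastest possible climb is about 12.5% per block when blocks stay completely full (the blob fee rule has a similar maximum rate); the constants are simplifications for illustration, not the exact production parameters:

```python
import math

# Sketch of why a tiny reserve price matters, assuming an EIP-1559-style
# fee rule whose fastest climb is ~12.5% per block under sustained full demand.
MAX_GROWTH = 1.125  # fastest per-block multiplier (full blocks every block)

def blocks_to_climb(start_fee_wei: float, target_fee_wei: float) -> int:
    """Blocks of sustained max demand needed for the base fee to rise
    from start_fee_wei to target_fee_wei at the maximum adjustment rate."""
    return math.ceil(math.log(target_fee_wei / start_fee_wei, MAX_GROWTH))

# From 1 wei up to 1 gwei (9 orders of magnitude):
print(blocks_to_climb(1, 1e9))    # 176 blocks, ~35 minutes at 12s slots
# Starting from a 1 gwei floor, a big relative move is fast by comparison:
print(blocks_to_climb(1e9, 1e11))  # 40 blocks, ~8 minutes
```

The same exponential rule that needs roughly 176 full blocks to climb from one wei to one gwei covers a 100x move from a one-gwei floor in about 40 blocks, which is the "many fewer cycles" being argued for.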
And so I'm going to be working on that.
Another thing I'm going to be working on,
which I'm giving a talk about tomorrow at SBC,
is some slight adjustments to the way we set parameters for execution gas in 1559.
And I'd just say, we can think about big things, and they're awesome and they're fun to think about, and they're nerd snipes.
But we should also just like do the little things and get the little things right.
And I think we probably should have put the reserve price floor in when we did EIP-4844 in the first place.
But now let's put it in in the next hard fork.
It's a very small change.
The code for implementing it is already there.
You just need to set the parameter.
And I think we should do more stuff like that as well.
because those little 1% improvements are going to add up a lot as well.
I think there is agreement.
I think there is not an unpopular take, basically,
that there probably should be made some adjustment to how blobs are priced.
Right.
Max convinced me that the 4844, or not sorry, not 4844,
the gas fee update rule is too fast.
That would be a good change.
I think everyone wants MaxEB.
I have like a running list on Twitter.
Oh, yeah, MaxEB.
Can we talk about MaxEB?
This is a really important change that's coming in the next hard fork, Pectra. Basically, right now we have
800,000 individual validators. There's a 32-Eth cap on how much you can put in your validator,
and we have 800,000 of them right now, or even more. That was the last time I checked;
it's probably more now, because everybody wants to be staking on Ethereum. And the problem with that
is, places like Consensys, we run about 15,000 validators, and we are running those 15,000 validators on many fewer
boxes than 15,000. And so we sign a signature for each of our attestations, especially during
finality, which requires everybody to attest. And we have to aggregate all of those signatures.
So we have 800,000 signatures for less than 10,000 boxes. And MaxEB allows you to stake much more
in a single validator. And so we'll be able to reduce the number of signatures that we're
sending for the same information. And that will in turn reduce message overhead in the
system. It's a strict upgrade.
It's a big bottleneck for a lot of stuff.
And less CPU.
Yeah, less CPU, which is an even bigger issue right now, because the network, let's say you can make it faster,
but the aggregation is CPU-bound, so you're kind of hitting a limit. And we can change
the way the signatures work, because right now they're kind of tailored so that the signature aggregation
libraries work really well. And if we don't need to do that as much, because we have 15,000 signatures
instead of 800,000, then that might actually address, basically,
why we cannot lower the block time. You know, say you cut the gas limit in half and you also halve the block
time, to keep the throughput constant; the reason we can't do that today is how many signatures we're aggregating.
There was an original hypothesis that the 32 ETH limit stemmed from, which is largely related to the
idea that we were not going to have stake pools, that we were not going to have a lot of people pooling
stake. And I think we're just at a place where, in terms of the trade-off, the signatures
we have to aggregate are really just from the same parties. Clearly there is a lot
of pooling, and we're bearing a huge cost for it. And this is not an anti-solo-staking take;
it really is just a strict upgrade, I think. It turns out the hypothesis
was not right, and we should recognize that and massively lower the number of signatures we have
to collect, which gives an avenue to lower block times, higher gas limits, et cetera.
I think it's, yeah.
And it's kind of nice because you can still have 32 or if you have more just.
Yeah, it doesn't do anything to solo stake.
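As an illustration of the signature reduction being described, here is a rough sketch. It assumes the cap goes from 32 ETH to 2048 ETH (the MaxEB / EIP-7251 numbers) and uses the hypothetical 15,000-validator operator mentioned above, with consolidation simplified to a straight division:

```python
import math

# Illustrative sketch: how raising the per-validator stake cap shrinks
# the number of attestation signatures a large operator contributes.
# Assumption: the cap goes from 32 ETH to 2048 ETH (MaxEB / EIP-7251).
OLD_CAP_ETH = 32
NEW_CAP_ETH = 2048

def validators_needed(total_stake_eth: int, cap_eth: int) -> int:
    """Minimum validators required to hold a given stake under a cap."""
    return math.ceil(total_stake_eth / cap_eth)

stake = 15_000 * OLD_CAP_ETH  # a hypothetical operator's 480,000 ETH
print(validators_needed(stake, OLD_CAP_ETH))  # 15000 signatures per attestation round
print(validators_needed(stake, NEW_CAP_ETH))  # 235, roughly a 64x reduction
```

The stake and operator size here are hypothetical round numbers; the point is that the same stake produces dramatically fewer signatures to aggregate, which is the CPU and bandwidth win being discussed.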
There is, there has been, and I'm not a CL dev, so I cannot opine specifically on whether this is true or not, but I believe, you know, Terence and friends.
But basically, implementation-complexity-wise, there are reasons. Because right now,
the 32 ETH is like a constant in your code,
or a parameter.
Whereas now you have to make it a range of values,
and that complicates the code, actually.
So the pushback around it that you might have seen relates a lot to this:
it's not, oh, I change 32 to 2048;
I change it from a single value to a range, and that is all around the codebase.
Ah, that's where the complexity comes from.
Yeah.
Yeah, because to my understanding, MaxEB more or less
has consensus
about inclusion
but it's like
can we ship it in time
and all of that
yeah okay
what exactly is the implementation
the cleanest way to do it
so we don't introduce tech debt
but yeah
which says something about
thinking more about things up front
and you know this is a classic
software engineering thing
how much time do you spend
designing versus implementing
and you know how do you post more time
and then go back and redo the thing right
also super bullish that ETH has a spec
actually
yeah actually it's like the CL side
is like so much better done than the EL side.
The CL spec is amazing.
And on the EL, we're still, I think it's like universally accepted that, basically,
Geth is the truth and you adjust to it. Which is not a problem at all,
but, you know, it makes the development of things different,
where the CL has a very crisp development process, which the EL core dev process should aspire to.
This is especially nice for, you know, think bois like myself,
who just, we don't want to necessarily dig into the Rust or anything.
We just want to see the spec and see what's going on.
And many times the code that you read is optimized and like it might not like be obvious.
It's like some random bytecode like thing that does it faster, but you have no idea what's going on.
So the spec is very useful.
And we definitely, we did a recent paper on X-Cuse, and I spent a lot of time looking at the consensus layer spec.
Right.
Charlie, why did you bring that up about the Ethereum spec?
Just because, with these kinds of software projects, even as far back as Bitcoin and bitcoind,
like, you know, to Georgios's point about Geth being the
reference implementation, you run into this issue where the original implementation, in a sense,
is the spec. It has, like, you know, lots of implementation nuances or choices where it
wasn't necessarily that anyone thought of them or thought about the decision. It's just the way
that it ended up getting implemented.
And, like, I think Ethereum on this spectrum, like, is doing quite well, which is, like,
most stuff is pretty formally specified.
The consensus layer is great because it was almost, we realized the problem and tried
to solve for it from the beginning.
But even the EL, like, you know, you have a pretty good sense of how the protocol is
supposed to work, like, what the intended behavior of it is.
And I just think that's, like, very great.
I mean, like, I think, you know, Ethereum as, like, a protocol is supposed to work.
a super decentralized, like credibly neutral censorship-resistant computer, like knowing how it's
actually supposed to operate is like extremely useful.
Yeah.
And, you know, people like to, there's these two schools of thought:
should there be one client or many clients?
There's many protocol security reasons, which you might have talked about
on the podcast, David, so I will not repeat them.
But basically the most important reason for me is that more people knowing how the thing works
is just good.
Right.
I remember Vitalik actually brought up this line: there's no real point in having these decentralized systems if only, like, 10 people know how they work.
I would like to reason by analogy a bit. I would say it's almost like having a checkpoint, so that you don't have to run the node from genesis.
Because I'm relatively new to the research, like MEV research, and client research is even newer to me.
And being able to go read the spec, versus having to read the code, means that
I can catch up faster. And we're always producing new stuff and changing stuff.
And so, like, I look at the speed at which Solana changes, and I'm not looking at Solana full
time, but I try to keep up a little bit. And then I'll go in and I'll talk to, like, some of the
guys there, and they're like, oh yeah, by the way, we changed the way that everything works
since the last time. Two days ago. Yeah. And another change is going in in two weeks.
By the way, do you have any feedback about it? It's like, you know, and then I'll, I'll try to get
feedback, but it's so much harder than the way it is working on Ethereum, because we all know
and we all agree and our models are lined up. We all agree on what the model of the thing is,
and it works like that. And on Solana, sometimes they agree on how the thing is supposed to
work, and it doesn't work like that, or the other way around. So I think that's definitely a good
thing about developing on Ethereum. And then you go to mainnet validators and you ask what's going on.
Yeah, and then they actually don't run what's in the codebase that you looked at.
So, yeah.
Well, guys, this has been a fun little charcuterie board of Ethereum protocol topics at the very end of this.
But I want to bring it back to you, Max, and Braid, work in progress.
Who do you want to hear from?
Who can support you?
What are next steps?
Any callouts to the listeners in the audience of this?
And who can roast you?
And who can roast you?
Well, I would just say, like, we welcome roasts.
I would say it's often easier to get people to engage deeply with an idea when they first hear it.
And I almost wish we had waited a couple months.
We kind of, I gave an early talk at this Paradigm research day, and there was a lot of excitement about it and it got out, which is awesome.
And that's why I'm here talking right now.
Yeah, I got excited specifically.
But I just want to urge everybody to keep your minds open, even if you're skeptical today, read with an open mind.
Do not, you know, immediately shut your mind off to these things.
if we, I'm spending a lot of my day thinking about this stuff, thinking about new ideas every day.
And I think we can probably get there.
I'm very optimistic knowing the awesome people that we have working on it.
And what I do know about MEV.
And I think just keep an open mind, we're going to do the work.
If we can't demonstrate it, it's not going in, obviously.
And there's many different sides of the argument, but I'm really excited for basically the first time.
about the direction that we're going.
For a long time, I've been like, oh, we're kind of trending in the wrong direction.
PBS has become very centralized in the time since I started working on it,
which was around the maximum point of the decentralization of PBS.
And I wrote a paper that said PBS is going to centralize and then it centralized in a year.
And so I was like, oh, what are we doing here?
I was like, oh, I'm going to go back to grad school because, like, nobody's listening to my
ideas.
So I'm going to go, like, write some more papers in grad school that nobody's going
to read there. But now I'm like, hey, like, people are listening. You know, there's ideas. We're
discussing the difference between 100 milliseconds and two seconds of last look. And that's just so
much better than where we were a year ago. And even where we were two years ago, and we're just
starting to talk about these MEV topics in detail. Well, Georgios, Charlie, Max, I feel like I
have gained like at least a couple IQ points from just listening to you guys today. So thank you guys
for making your way over here from SBC
and joining me in the studio today.
It's been really fun doing a podcast in person.
Thank you, David.
Thank you, David.
Thank you.
I love these boys.
Bankless Nation, you guys know the deal.
Cryptos risky.
All that good stuff.
You can lose what you put in.
We are headed west.
This is the frontier.
It's not for everyone.
But we are glad you're with us on the bankless journey.
Thanks a lot.
