Epicenter - Learn about Crypto, Blockchain, Ethereum, Bitcoin and Distributed Technologies - Vitalik Buterin: Ethereum – State of Affairs After the Merge
Episode Date: October 26, 2022

With the long-awaited Ethereum Merge successfully executed last month, we caught up with Vitalik at Devcon in Bogota to talk shop: How does he see the state of the network? Does he think looming centralization is a threat? Was Ethereum's danksharding plus third-party layer 2s a good architecture decision? How much of an issue is MEV?

Topics covered in this episode:
- The state of the network and credible neutrality
- Should validators have agency?
- How much of a problem is MEV
- The Ethereum Surge
- The road to Danksharding
- Decentralizing the sequencer
- The future of the space
- The one most important thing we need to get right in the coming year for the ecosystem to succeed

Episode links:
- Ethereum
- Devcon 6
- Episode 402 - Ethereum – Can It Go Beyond DeFi?
- Join the Epicenter team!

Sponsors:
- Tally Ho: Tally Ho is a new wallet for Web3 and DeFi that sees the wallet as a public good. Think of it like a community-owned alternative to MetaMask. https://epicenter.rocks/tallycash

This episode is hosted by Friederike Ernst. Show notes and listening options: epicenter.tv/467
Transcript
This is Epicenter, episode 467 with guest Vitalik Buterin.
Welcome to Epicenter, the show which talks about the technologies,
projects and people driving decentralization and the blockchain revolution.
I'm Friederike Ernst, and today I'm speaking with Vitalik Buterin about Ethereum,
the general state of affairs, and where everything is headed.
Before I talk with Vitalik, though, let me tell you about our sponsor this week.
Tally Ho is an open-source wallet
redefining the wallet as a public good.
With Tally Ho, you can safely connect to DeFi and Web3
with everything you need from MetaMask, plus a lot more.
You can view NFTs in a wallet across Ethereum, Polygon, Optimism, and Arbitrum,
and you don't actually have to manually add these networks.
They already come plugged in.
Tally Ho has good Ledger support
and is built by a community of developers
that listens to its users.
Swap between assets in-wallet at a fraction of the price,
and conveniently view all of your account balances
across multiple networks with our new and improved portfolio tab.
Over 100,000 people signed the Tally Ho Community Pledge,
a letter to the Web3 community affirming
their commitment to building a wallet that's accessible to everyone,
radically transparent, and fully community-owned.
Tally Ho isn't just building a wallet that works,
it's building a wallet that people can believe in.
It's time to defend Web3.
Visit tally.cash today to sign the pledge and download the wallet.
And for Epicenter, we're hiring.
We're looking for a community manager to help grow our audience and take Epicenter to the next level.
If you're passionate about crypto and creating great content, we want to hear from you.
Look for the job description and details in the show notes.
And please also share this with anyone you think might be a good fit for it.
And now, without further ado, let's go to the interview with Vitalik that was recorded at Devcon 6 in Bogota.
Hi, my guest today needs no introduction.
I'm here with Vitalik.
Vitalik, thank you so much for coming on Epicenter again.
Thank you so much. It's good to be here again.
Fantastic.
We have an hour, and I want to dive right into the meaty stuff.
I hope that's okay.
Sure, I love the meaty stuff.
I mean, I prefer fishy stuff, but, you know, meaty stuff is great.
Okay.
We'll get to the fishy stuff later.
So let's start with meaty.
So the state of the network and credible neutrality.
So basically, if you go to mevwatch.info,
it currently shows that over 50% of blocks are actually OFAC compliant, i.e. censored.
What do you make of that?
It's a concern.
I mean, I think it is important not to overstate the concern, though, right?
because censorship resistance is what I call a 1-of-N trust model, right?
So if 50% of blocks are censoring, that just means that a transaction that's being censored by
them just needs to on average wait two blocks instead of waiting one block.
And like, you know, even if 90% start censoring, it will just have to on average wait 10 blocks instead of one.
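To see where those numbers come from: if a fraction p of proposers censor and the rest don't, the wait for a censored transaction is geometric. A minimal illustrative sketch (not from the conversation):

```python
# Expected number of blocks a censored transaction waits when a fraction p
# of proposers censor: geometric distribution with success rate (1 - p).
def expected_wait_blocks(p: float) -> float:
    return 1.0 / (1.0 - p)

print(expected_wait_blocks(0.5))  # 2.0  -> two blocks instead of one
print(expected_wait_blocks(0.9))  # 10.0 -> ten blocks instead of one
```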
But at the same time, like, it's also not a non-problem.
And it's definitely important for the ecosystem to start pushing against this more,
and come up with ways of reducing the percent of blocks that are censoring, and improving the guarantees that transactions have to get included reliably.
So I think the most near-term thing that's going to happen in that regard is MEV-Boost is going to have transaction inclusion lists added to it.
So this is the thing that also has been called CR lists sometimes, but the basic idea is that the block proposer provides a list of transactions.
and whoever provides the full block as a builder would be required to include those transactions
or else the proposer would not accept it.
And what that does is it kind of brings us back to the world of, a world very similar to what
MEV-Geth did before the merge, right?
Because what MEV-Geth did before the merge is, it did not provide full blocks, right?
It provided what's called bundles.
And bundles are collections of transactions that
can sometimes take up a full 30 million gas, but very often they do not.
And if they do not, then the miner at the time had the ability to just add whatever transactions
they want on top.
And if there were any transactions that the bundlers were censoring, then the miners
would, even by default, basically end up adding them.
And the reason why that architecture did not carry over to MEV-Boost by default is
basically because there was a desire to reduce the trust dependencies, right?
Because the way that MEV-Geth worked, like, was really centralizing in a lot of subtle ways, right?
Like, it basically required miners to be trusted, right?
Because if you give a bundle to an untrusted miner, or to a malicious miner,
then that miner could do what's called MEV stealing.
Like, they could figure out what strategy is being used to extract MEV in the bundle,
and then just, like, search-and-replace the address collecting profits
with their own and just take all the money for themselves, right? And the trust-based approach
to mitigating that is basically, well, you know, only a few kind of big-shot elite miners that
people knew and trusted would be whitelisted to participate in the system, which was like
a kludgy compromise for that moment, but it really kind of sucked. And, you know,
MEV-Boost was designed without that idea in mind precisely because we don't want to be
centralizing in that way, right? And so it's much harder to create
a system that gives
proposers freedom to
choose transactions for the purpose of
censorship resistance, but without
giving proposers the power to do MEV stealing.
And so the
simplest approach to doing it was to just
not bother with giving proposers any
kind of control at all and just do
what's called full block auctions, right, where the
builder just decides everything. And that was
MEV-Boost 1.0. The next version
has these inclusion lists, where the
proposer specifies a list of transactions, and then the builder can add their own,
they can reorder transactions, but they can't remove the ones that the proposer included. And so that
way we get back to this world where it doesn't look quite the same, because, like, you don't
have the proposer literally adding the transactions at the end. It's like, the proposer provides them
as a list, and then the builder chooses the order. But, you know, it's still a different type of
partial block auction, and proposers can say, you know, hey, I know about these and I
absolutely want to get them included, right? So that's the first step, and there's more steps
that are going to happen after that, too.
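A minimal sketch of the inclusion-list rule described above, with hypothetical transaction objects; the real MEV-Boost design has more moving parts:

```python
# Inclusion lists: the proposer publishes a list of transactions; the builder
# may add its own transactions and reorder freely, but every listed
# transaction must appear in the block it delivers.
def builder_block_acceptable(block_txs, inclusion_list) -> bool:
    included = {tx.hash for tx in block_txs}  # hypothetical tx objects
    return all(tx.hash in included for tx in inclusion_list)
```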
You said that you're not worried about, like, 50% of the blocks being censored, and that even 90% would be okay. Where would you draw the line? Because, I mean,
it's kind of a slippery slope, no? It is. Well, I mean, I would not say 90% is okay, right? I think
I mean, the optimum is, you know, close to zero, right? And I think the question is not a kind of binary
okay versus not okay. The question is,
if it happens, what kind of response are we willing to do to try to change the situation?
Yeah.
Right?
So, like, one example of a response that I think would be too much, right, is, you know, doing what's called social slashing, right?
Like, which is this meme that I think has actually gone too far, and we need to be really, really careful about saying those kinds of things, because we don't want Ethereum validators or core devs to become a kind of general-purpose morality police.
but the basic idea would be like, do a hard fork that, you know,
deletes the balances of validators we decide are censoring.
Like, that's just, like, that just kind of cuts through, you know,
way too many norms and, like, it's emotionally appealing maybe,
but it's just, like, it's just totally the kind of the wrong tier of escalation
given the actual seriousness of the, yeah, situation, right,
which is, like, medium but not fatal.
So the things that I do favor are, you know, working harder at these
kind of better MEV protocols; if necessary, you know, socially leaning on, you know,
specific stakers and validators to, you know, not use censoring technologies; even trying to, you know,
give grants to and subsidize alternate builders and alternate relays. Like, there's a lot of kind of
ecosystem level, you know, tools in the toolbox that I think we really should be using. And, you know,
if the percent censoring goes up to 90, then, you know, we should obviously be doing
even more. But, I don't know, I'm honestly optimistic that things are
going to improve quite a bit even after some simple countermeasures.
Mm-hmm. Let me ask a question to kind of dig into that. Do you think validators
should have agency? Because I think, basically, if you decline that, right, if
you say they're just a relay network and they should have no
opinion, they should just, you know, be switches.
Do you believe that, or do you think it's okay for them to be opinionated?
It's a complicated topic.
I think different people inside of the Ethereum Foundation
even have different beliefs on this.
I think our general approach so far has been to try to make validators as dumb-pipe-y as possible.
And, you know, basically have them just, you know,
ideally run a piece of code that listens to the mempool,
figures out which transactions to include, you know, works
with some kind of builder auction mechanism,
and then just chooses blocks.
And the benefit of validators being kind of maximally dumb-pipe-y
is, like, one is just more predictability.
Another is that it makes it easier to run a validator,
because if there's an expectation that
being a validator becomes this kind of even part-time job
where you have to actively seek out lots of things
and make judgments, then, like, it becomes an intensive thing,
and that's just going to drive more people toward pooling,
which we want to avoid, right?
So, you know, we do want, like, being a validator
to be kind of as low resource as possible,
and that's, you know, both in the sense of technical resources
and hardware, but also in the sense of human resources, right?
So that's kind of the 'why validators should have low agency' take.
the argument for why, like, at least the opportunity for agency for validators should exist
is basically because, like, they are a second line of defense and we should use it, right?
Like, there is some mechanism that gives, you know, censorship resistance by default,
because we did economic analysis and math that says that, you know, blah blah, the equilibrium
is that in order to prevent a transaction from getting in for N slots,
you have to pay, you know, like N times 100, or N squared, or whatever.
But, you know, if it turns out that, like, whether because the math is wrong or whether because there's, like, some outside the model thing that we didn't realize that ends up not working, like, it's really valuable to have the second line of defense where the portion of validators that are willing to, you know, both, kind of be more proactive and potentially sacrifice some profit or kind of have the room to kind of wiggle and, you know, make sure that the network's guarantees are kept up, right?
So that's the case for, I mean, like, not demanding validators to be opinionated, but at least, you know, giving them the room to be opinionated if they want to be.
Right. So I think it's a balance between those two.
In principle, one could think of technical ways to kind of enforce validators to not be opinionated.
For instance, kind of, you know, in this shutterized beacon chain scenario, or, you know, commit-reveal schemes. What do you
think of those? Yeah. So, I mean, the challenge I have with a lot of these, like, threshold
encryption-based schemes is, like, the reason why I don't trust threshold encryption is because
threshold encryption has a 50% honesty assumption, and it has a particularly insidious 50% honesty
assumption because 50% could collude to reveal all the information completely undetectably,
right? Like, if 50% in a threshold encryption, MPC, secret sharing, whatever you call it, just, you know,
all decide to send data to the NSA because the NSA leaned on them, then, like, there is
literally no way that anyone is going to find out about this. And, you know, they easily could
run some sub-protocol that, you know, starts filtering particular things, right? So, like,
I mean, I'm, like, I'm generally, you know, suspicious of honest-majority assumptions,
and I'm always big on, you know, having clear paths to recovery, even if, you know,
a dishonest majority has come about. But, like, this just feels even worse than honest-
majority assumptions, where, in order to do an attack, like, you know, you have to actually do something
visible on the network, right? So, like, that's the reason why I'm kind of skeptical of those
particular solutions. And, you know, even on layer-2 networks, right, for example, like,
I think, you know, even among Flashbots people, a lot of them tend to, like, prefer SGX to
committees, basically exactly for that kind of reason, right? And, I mean, making
sure that it's possible for
layer two networks that do that sort of stuff
and maybe act as decentralized builders
to exist is
I think definitely interesting.
I should also mention there are other ways
to force
validators in their capacity as
proposers to be unopinionated that don't involve
threshold stuff. So
MEV smoothing, this is Justin
Drake's favorite idea, is probably the biggest one there.
The idea there is basically that
attesters,
by default, like,
if they see that a proposer voted on, or accepted, a bid that is significantly cheaper than the actual winning bid,
then they would act as though that block did not actually appear.
So, like, they would basically treat a sort of non-optimal bid acceptance as a kind of invalidity or unavailability condition,
which is interesting, and potentially sort of scary in unknown-unknown ways, in different ways,
because, like, that also starts making attesters be less dumb-pipe-y.
And it also, like, it does go way down the spectrum of removing agency from proposers, right?
Where like even if profit maximization kind of stopped working toward the goal of censorship resistance,
like they would just not be able to do anything to counteract that.
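A rough sketch of the attester rule being described, with an invented tolerance parameter; the actual MEV-smoothing proposal is more involved:

```python
# MEV smoothing, roughly: attesters treat a block as unavailable/invalid if
# its proposer accepted a bid far below the best bid the attesters observed.
TOLERANCE = 0.95  # hypothetical threshold, not part of any spec

def should_attest(block, best_bid_seen: int) -> bool:
    return block.accepted_bid >= TOLERANCE * best_bid_seen
```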
But, you know, I'm sure Justin would have kind of other
arguments in favor. And, you know, if you talk to him, I'm sure he'll, you know, he'll give great
arguments. So I think, stepping back a bit, there's just, you know, lots of different considerations,
like, lots of strategies are still up for debate. And so I think in the short to medium term,
we want to take approaches that are forward compatible with as many likely paths as possible,
just so that, you know, we don't end up taking a particular path and just realizing, like, oh, my God,
we just, you know, completely screwed ourselves, and then waste a year going back, if it ends up leading to the chain censoring really heavily.
Are you worried that this will progress in ways other than the number of blocks being censored?
So, for instance, what I could imagine is that attesters all of a sudden say, look, this block has a Tornado Cash transaction in it.
I will not attest to it.
And I mean, obviously, that would cause a whole different set of network problems.
Right.
Okay.
Well, so one very subtle thing about attesters that I think people don't talk about enough, right,
is that, like, the legal bar to preventing, you know, speech, and, you know,
even including a kind of, you know, speech that is actually by computer programs, is high.
But the bar to compelled speech is way higher, right?
And as an attester, you actually have three options.
One option is to attest to the, you know, the block that contains controversial transactions.
The second is to attest to the main competing block.
And the third is to not attest at all, right?
And I'm not aware of any legal argument for why the second would be mandated.
The third is always an option, right?
And once you start, you know, not attesting at all, then, well, there's different ways you could do it, right?
You could just, you know, stay quiet until the, until a censoring block, or
until a block containing controversial transactions, gets confirmed, and then you go back.
Or if that becomes untenable, then, you know, as a validator, you just exit, right?
And I, yeah, I do think that a lot of stakers, and even, you know,
the big staking pools, they are well-meaning. And, you know, if it comes down to it and they get
leaned on that hard, a lot of them would be, yeah, willing to just exit and leave. But, you know,
if they don't, then, like, the good news is that we have a very kind of clear technical definition
of, like, what it means to, kind of, attempt to 51% attack the chain, right?
And that definition is basically: if proposals that appear on time do not become part of the canonical chain, right?
So if a proposal appears on time, but then other blocks start winning, then, like, clearly the majority of attesters did not vote for the head that they were supposed to vote for.
And, like, that's the situation where you can start doing things like user-activated soft forks and inactivity leaks and, you know, all kinds of these
kind of more extreme mechanisms.
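That "clear technical definition" reduces to a simple predicate; a sketch with hypothetical fields:

```python
# A timely proposal that fails to become canonical is the observable signal
# that a majority of attesters did not vote for the head they should have.
def looks_like_majority_censorship(proposal, canonical_block_hashes) -> bool:
    return proposal.arrived_on_time and proposal.hash not in canonical_block_hashes
```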
I hope they never happen.
I hope so too, but I think like...
It's good to have them.
I mean, it's kind of like nuclear deterrence, right?
In the sense that like having a very clear story about how to respond,
like, it itself may well be the deciding factor in ensuring we never have to.
Yeah, absolutely.
It's kind of like the Maker shutdown.
You know that in principle you could trigger it.
So you already touched upon it.
MEV.
How much of a problem do you think it is and who should solve it?
MEV is interesting, because there's many kinds of MEV, and some of it is a problem, and some of it is not a problem, and some of it is a problem of different kinds.
So I think there's different classes of MEV problems that people tend to identify.
So one class of problem, for example, is just like outright exploitation.
So I send a uniswap transaction.
I want to convert one ETH to, you know, 1,200
USDC, and someone, you know, some MEV builder, front-runs and, like, kind of does a sandwich
attack, where they sell some ETH first and then they buy back at the end, and I'm, you know, I get like 5%
less USDC, right? So that's one example of MEV which is, like, completely harmful, and it would be cool
if, like, somehow it could be eliminated entirely, right? The second kind of MEV is MEV that is to some
extent unavoidable and is probably even benign.
So one example of this is just arbitrage, right?
So, like, if, you know, in the most recent block, on Uniswap the ETH/USDC price is at 1220,
but then within those 12 seconds, you know, the price on Binance or whatever shot up to 1227,
then, you know, you could do a little, you know, Uniswap trade, kind of get that transaction
in as the first transaction in the next block, and then, you know, you'd
kind of buy up a bit of ETH on Uniswap and then you'd sell some ETH on Binance, and,
like, you know, you make a couple of dollars of arbitrage profits, right?
So that type of MEV, like, it's not an abuse of anyone, and it's even good, because it keeps prices synchronized.
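Worked through with the numbers from the conversation (fees and gas ignored, all figures illustrative):

```python
# Cross-venue arbitrage: buy ETH where it's cheap (Uniswap at 1220),
# sell it where it's dear (Binance at 1227).
uniswap_price, binance_price = 1220.0, 1227.0
eth_traded = 1.0
profit = eth_traded * (binance_price - uniswap_price)
print(profit)  # 7.0 USDC per ETH, before fees and gas
```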
But MEV in general still leads to this issue of economies of scale in block production, right?
Where you need sophisticated algorithms to produce optimal blocks, and these algorithms change a lot.
And, like, we really, really don't want proposers to have to, you know, like, update their software
to stay in touch with all of these algorithms that are trying to optimize MEV.
And the solution with PBS, whether it's kind of MEV-Boost-style extra-protocol stuff,
or whether it's Ethereum in-protocol stuff or whatever, is to try to create two different classes
of actors, and try to keep builders as decentralized as possible and try to ensure
censorship resistance... sorry, keep proposers as decentralized as possible and try to ensure
censorship resistance even in the case where builders get very centralized.
Yeah.
And I mean, we've kind of, we have been seeing that builders have gotten very centralized, right?
Yes.
So do you think it's a problem that apps in principle have to solve?
Do you think this is on the apps? Because obviously, to a user, first of all,
to any one user, it's usually not that much money.
I mean, it's like a percent or so.
I mean, there's a lot in aggregate, but for every single one,
it's not that much.
Do you think we should make the dapps understand that they have to build a protocol that is MEV-resistant,
in the best case?
So I think we can't count on all dapps becoming, you know, good from an MEV point of view,
because there's just, like, there's always going to be dumb developers somewhere, right?
And, you know, those dumb developers are going to cause, like, you know, base fee
spikes and MEV and all sorts of things.
And the chain just needs to be able to absorb
that, right? Like, if the chain can just be, you know, taken down by a couple of idiots that use a bad
auction mechanism to sell their monkeys, like, it's not actually a very robust chain, right? But
from the apps' point of view, the interesting question is, like, is it, I guess,
more socially optimal for the apps to take charge on reducing, or redesigning themselves to reduce,
MEV, because that's in their interest? Or is it better to have some kind of, you know, more
abstract, higher-layer, possibly protocol-level, possibly some middleware, that prevents that
value from disappearing into, I mean, you know, exploitative MEV actors' hands or whatever.
And I think there is, the interesting thing is that both paths exist, right? So, like, just
continuing the Uniswap example, because I think realistically Uniswap-style things are basically,
like, 90% of the MEV that's going on generally, with the exception of spikes. So, like,
you know, there was that, like, XEN thing that pushed the base fee up, and I'm not sure what that even is.
And then, like, the monkey thing that caused, like, 60,000 ether to get burned earlier this year, and so forth.
But in the Uniswap case, like, one approach is the CoW Swap route, right?
Like, you make a better alternative to Uniswap where the order is an off-chain object.
And you first make a best effort on off-chain matching.
And if the off-chain matching doesn't happen, then as a backup, you go on chain and you do the swap.
Right. So that's a very
MEV-minimizing architecture,
because in the average case you get an off-chain swap,
and so there isn't a way to, like, make money
by sandwiching or even back-running
it, right? So now
we could also consider the other strategy, right?
So let's just, for the sake of example,
remove sandwiching, right? And
let's say the sender sets a
slippage of zero, and
let's say with fancy account
abstraction, in some,
you know, far future, it becomes safe to have a
slippage of zero, because if that
transaction doesn't get included, like, you don't even have to pay a fee.
So, you know, nobody will even bother to include it.
The other thing that happens is backrunning, right?
So, like, if, for example, I'm selling, I don't know, 100 ETH and converting it to USDC,
that trade by itself might decrease the price of ETH on the AMM by, let's say, 2%, right?
And so on average, I'm losing 1% on the trade.
So what happens is, that creates a back-running opportunity, right?
That basically means that after my trade, the price is 2% out of balance.
And so someone else can come and make a Uniswap purchase in the opposite direction to buy back some ETH.
And then on Binance, they can go sell some ETH.
And, you know, they can make an arbitrage profit that is, assuming optimal everything, going to be equal to the 1% slippage that I lose.
Right.
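A tiny constant-product simulation with made-up reserves reproduces those numbers: a 100 ETH sale moves the pool price about 2% and costs the seller about 1% on average, which is exactly the back-runner's prize:

```python
# Constant-product AMM (x * y = k); reserves are hypothetical.
eth_reserve, usdc_reserve = 10_000.0, 12_200_000.0  # spot = 1220 USDC/ETH

def sell_eth(eth_in, x, y):
    k = x * y
    new_x = x + eth_in
    new_y = k / new_x
    return y - new_y, new_x, new_y  # USDC out, new reserves

usdc_out, x2, y2 = sell_eth(100.0, eth_reserve, usdc_reserve)
print(usdc_out / 100.0)  # ~1208: execution ~1% below the 1220 spot
print(y2 / x2)           # ~1196: pool left ~2% out of line for the back-run
```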
So what you could do is, you could potentially come up with some auction mechanism where, you know,
using weird fancy stuff, you could basically, like,
auction off the right to learn about this opportunity to back-run me. And in an optimal auction, that
would get sold for, you know, close to the 1% that I lose. And so I, as a user,
would get most of the revenue from the auction, and so I would get most of my money back, right?
So both of those strategies are legit. And both
of those strategies could maybe work. But that second one requires, you know, a lot of fancy infrastructure
work, but on the other hand, it doesn't require per application work. But then, you know,
if you do the per application work, then it's like harder on each individual application,
but you don't need to do things to kind of complicate the de facto protocol stack that people
are used to sending transactions through to Ethereum. So that's how I view the tradeoff. I guess
the instinctive answer there is, like, apps would probably take the responsibility on themselves
to minimize MEV in the short term, and then in the longer term, we can come up with these
more kind of general-purpose and all-encompassing solutions. Yeah, I like that idea. How do you feel
about this attempted and in some way also successful narrative shift that the very smart people
at, like, Paradigm and Flashbots and so on pushed, that
basically extracting MEV is good for the network because it secures it.
I feel like it's a really weird spin.
And I think people, if you onboard someone and you tell them about this,
this is not the conclusion that people usually would go to.
Yeah, I mean, I get it.
Like, I get how, like, you know, it's an outside observer, like,
you could feel isomorphic to how, you know,
Philip Morris has these lovely ads and websites to talk about how they're making the,
yeah, future of, you know, like,
post-cigarette smoking that's going to be much healthier or whatever.
And the reality is different.
But I guess the arguments from their point of view,
if I had to, I think I would explain it like: one, MEV is inevitable.
And in particular, if they did not step in to try to extract that MEV in a reasonably
decentralization preserving way, then it would have been extracted in a way that's much worse.
right? And I have heard things about how, even a couple of years ago, there were mining pools that
were seriously looking into having internal teams that would do MEV extraction. And so if
Flashbots-type people never existed and never kind of democratized MEV extraction, then, you know,
the argument is that the big mining pools would just get more profitable, because they have
these fancy internal teams. And that would just, like, centralize Ethereum to hell. And we would not even
have an answer to this post-merge, and, like, even post-merge, everyone would just consolidate into the
biggest staking pool, right? So that's probably the kind of divide there, right? Like, to the extent
that MEV is inevitable, it's, like, it's better that it gets captured in a way that preserves
decentralization, and is an open protocol, and, you know, is neutral between, you know,
small stakers and big stakers. But then, you know, I guess the, yeah, I guess the
argument against that argument
is, like, well, you know,
is intellectual energy being
pushed into
kind of optimizing
fair MEV redistribution
that could have instead been pushed
into making protocols that don't have
as much MEV in the first place?
And that's, like, it's a good
argument to have, and I think it's definitely
good that there are kind of skeptics in the
community that are really, yeah, pushing hard
and making the argument.
But I think,
you know, to what extent each side is right ultimately ends up depending on, like,
really complicated technical considerations, right? So, like, one example of a technical consideration is
let's suppose that we had magic delay encryption, where you can encrypt data and it would be
guaranteed that, you know, after six seconds, so, you know, six times nine point two billion
vibrations of a cesium atom, or whatever the physics definition of a second is, you know,
the data would get decrypted, right?
Then you could, like, we would be able to very easily just, you know, have delay encryption baked into the protocol.
And then when you send a transaction, it would come with, you know, some kind of SNARK that proves that if it gets included, it pays a fee.
And, you know, it'll get included, and then everything else is encrypted.
And then, one slot later, it gets decrypted.
And that would be extremely MEV minimizing.
And that would like make everyone happy, right?
But the problem is that there's all of these kind of technical considerations and like, you know,
we don't actually have magic delay encryption and all the substitutes that we have are imperfect in various ways.
And so, you know, depending on how the technology, yeah, sorts itself out, I think the end result is going to be some combination of the two.
You said one thing I want to take issue with.
Okay.
Do you think Flashbots, by and large, has been a factor for decentralization in the ecosystem?
Because to me, it looks like the exact
inverse. I think they have successfully prevented centralization of the layer under them, which is miners
and stakers. And I think that's really important, because that's, like, the one thing that's much more difficult
to reverse. But, you know, on the other hand, Flashbots itself has definitely turned into a centralization
vector. And, you know, they, you know, they have their own narrative about how this is going,
and they really want to improve this over time. But I definitely think the community should also
take charge and kind of push really hard for, you know, both things like MEV-Boost and trying to
support competing builders and relays, and just, like, you know, basically all of the tools in
the toolbox to try to ensure a more distributed market. So, you know, I think that stuff's really
important, too. Cool. Yeah, that makes a ton of sense. I'm done with the meaty stuff. Shall we go
on to the fishy stuff? Sure. Okay, fantastic. Ethereum core development. Okay. Fantastic. So
we all know, you know, the merge, the surge, the verge, the purge, the splurge,
probably not the right order, but did you know that rhyming is a veracity signal?
Because it kind of, it says to you, oh yeah, that makes sense.
It checks out, it rhymes.
I did not know that.
Fine.
It's very convincing.
So let's talk about the surge first.
So Danksharding requires a trusted setup.
Right.
Right.
Should you worry about that?
I mean, lots of people have different opinions.
I mean, my opinion is that it's a 1-of-N trust assumption, where N is going to be in the thousands.
And so that's much less likely to be the thing that breaks than like 10 other things in the ecosystem.
But, like, at the same time, you know, I recognize both, like, the fact that even that, like, 1-of-N assumption is still worse than a zero assumption.
And the fact that like, you know, people value aesthetic purity and that is important.
And I do think that it is important to try to move away from the trusted setup based approach over time.
So I did do that deep dive into, we looked into IPAs at the Stanford event,
and then I did my deep dive into whether or not we could replace the whole thing with arithmetic hash functions
and try to upgrade to SNARCs or STARCs over them.
And the challenge is that all of the other approaches just to have too many tradeoffs and, you know, they don't quite fit nicely in the same way.
And so it just feels like, you know, all things considered the KZG trusted setup based approach is like the least bad of all possible worlds.
And from a trust point of view, you know, keep in mind that, like, if we don't do something, then the risk is that, like, the, you know,
mentality will entrench that, you know, instead of it being best to do things on rollups,
it's, like, oh, you know, it's okay to use external data availability committees. And so, you know,
if we don't, like, really take seriously the desire to have some kind of good and, you know,
very strong and effective scalability on the Ethereum base layer, then, like, this other layer could
easily, yeah, kind of centralize and lose trust assumptions in a way that's irreversible, right?
So that's kind of the argument for pushing ahead with KZG and 4844 now.
There are, I think, a lot of things that we've done to try to make it be forward compatible with a speedy replacement with something better when the time comes.
So probably the biggest example of this is the point evaluation pre-compile, right?
The idea here is basically that there is this precompile which lets you verify the
evaluation of the blob, if you treat the blob as a polynomial, at a particular
point, right? And that's a general-purpose facility that allows
ZK rollups to use blobs as a data source, and it allows optimistic rollups to also
use point evaluations that kind of point to particular locations for a fraud
proof, right? Because fraud proofs these days, Optimism and Arbitrum both use multi-round
proofs, and so they can just, you know, focus on one value, right? But the nice thing
about making that be a precompile is that it lets us
seamlessly upgrade the cryptography that's used to manage the whole thing without rollups needing to
change a line of code. Right. So today, KZG gets used, and point evaluation is a KZG
proof, you know, a pairing; you provide a G2 element. But in the future, you know, it's going to be
maybe a Poseidon Merkle root, maybe a Reinforced Concrete root, maybe a, you know, Poseidon-over-a-64-bit-
field Merkle root, because I hear Poseidon over 64-bit fields is, like, insanely fast now.
the
cryptography is going to be totally different
but because
roll-ups are going to use it through this black boxy way
they're not going to need to even change their code at all
and you know it'll be just
done in this
way where it's just completely
seamless for them and you know
we can make that upgrade in a future hard fork
right so that's kind of both
you know why we're doing the
KZG thing now and also the
roadmap for getting off of it
once, you know,
the SNARK technology really catches up.
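For reference, the precompile's interface as sketched in EIP-4844; kzg_to_versioned_hash and verify_kzg_proof stand in for the spec's cryptographic helpers:

```python
# Point evaluation precompile: input is 192 bytes laid out as
# versioned_hash (32) | z (32) | y (32) | commitment (48) | proof (48).
FIELD_ELEMENTS_PER_BLOB = 4096
BLS_MODULUS = 52435875175126190479447740508185965837690552500527637822603658699938581184513

def point_evaluation(data: bytes) -> bytes:
    assert len(data) == 192
    versioned_hash, z, y = data[0:32], data[32:64], data[64:96]
    commitment, proof = data[96:144], data[144:192]
    # Both checks are KZG-specific today; a future fork can swap in another
    # proof system behind this same interface without rollups changing code.
    assert kzg_to_versioned_hash(commitment) == versioned_hash
    assert verify_kzg_proof(commitment, z, y, proof)  # one pairing check
    return FIELD_ELEMENTS_PER_BLOB.to_bytes(32, "big") + BLS_MODULUS.to_bytes(32, "big")
```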
Cool.
Maybe let's talk about the road towards dank sharding, right?
So basically, back in the day, we thought that shards would be smart and they would, you
know, like have compute.
And basically Ethereum has kind of moved away from that.
And basically, shards are now, basically, a data availability solution, mostly, right?
How do you feel about that move?
Because one could argue that basically the use case for Ethereum of distributed compute
is proven, whereas data availability is not.
And there's also other solutions out there that could have been, you know, integrated, right?
Well, are there, though?
Like, shorted EVM execution is theoretically possible,
but doing it in ways that preserve good assumptions,
is not really doable without working ZK AVMs,
which is not something that we had before, right?
Because like if we just said, okay, you know,
we're going to have 64 shards
and each of these shards is going to have an EVM inside of it,
then like there's a lot of complexity that actually multiplies, right?
Like, one, you have to have an in-protocol facility
for cross-shard communication and, you know, cross-shard ETH transfers and all of that.
Absolutely.
Yep.
Then two, you have to deal with the question of like,
what if something wrong happens in one of the shards
and it doesn't get caught because everyone only verifies a small portion of the data.
And if you rely on fraud proofs, then you're adding a
synchrony assumption to the chain's safety property,
which, like, if we're willing to do that,
then, like, what's the point of even having finality, right?
If we're willing to make safety dependent on synchrony,
then we just give up on finality and, you know, just, like, you know,
block, attest, block, attest, and we just, like, kill half of our consensus spec code.
And like, you know, lots of people would be happy.
But, so, we...
With ZK-SNARKs, you just completely solve that issue, because you actually would be able to, like,
prove the validity of EVM stuff happening in real time, right?
But when the rollup-centric pivot was made, ZK-EVMs were still very far away.
And the Ethereum core development process was, yeah, kind of, you know,
pretty much, you know, like, fully occupied with both, you know, the merge and 1559 and other things, right?
So one of the reasons why I think we've been preferring a lot of these kind of outside of protocol approaches through a lot of problems is because, like, core developer bandwidth is limited.
And if we can solve problems and iterate on solutions outside the protocol, then it's just much easier than doing it inside.
ERC 4337, account abstraction is another good example of this, right?
Like, our strategy for account abstraction before was EIP-2938, which basically attempted to do kind of the same thing 4337 does, except, like, sticking everything into the protocol.
And, like, adding a whole bunch of complexity into the protocol to verify, you know, that the verification process of a transaction is not, like, accessing anything on the outside, and, you know, this and that.
And it would have taken a long time for core devs to implement,
and it probably would have delayed EIP-1559 and/or the merge
by half a year or a year, you know, some pretty crazy amount, right?
But, on the other hand,
ERC-4337 is this extra-protocol thing.
It's something that exists, you know,
basically thanks to proposer-builder separation, right?
Like, you would have builders that add the 4337 user operations
into a bundle transaction that goes into a block.
And because of that, there's just been this lovely community that's just solving it in a way that's totally separate from the core dev process.
You have Yoav and Dror and the Infinitism team, and the Soul Wallet people, who have been really stepping up and making an implementation and kind of popularizing it and doing a lot of community work around it, and there's multiple panels on it.
And, like, 4337 is sort of really blossoming into this massive ecosystem, and we'll even be able to deploy it on layer 2 before layer 1.
And there is even a compelling case for it on layer 2, because the signature aggregation feature of 4337 lets you combine all the signatures of the accounts, if there are a lot of signatures, into one.
And on roll-ups, the data is the most expensive thing.
And so transactions become two times cheaper, right?
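A back-of-the-envelope version of that claim, with assumed byte counts rather than measured ones:

```python
# Why aggregation roughly halves rollup fees: calldata dominates the cost,
# and per-transaction signatures dominate the calldata.
SIG_BYTES, OTHER_BYTES = 65, 60   # assumed per-transaction sizes
n = 100                           # user operations in a bundle
separate   = n * (SIG_BYTES + OTHER_BYTES)   # every op carries a signature
aggregated = n * OTHER_BYTES + SIG_BYTES     # one aggregate signature total
print(separate / aggregated)                 # ~2x cheaper
```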
So that sort of stuff, like in the case of account abstraction, right, that's the benefit of kind of spinning things off into an ERC.
And rollups are, I think, the exact same argument,
but for the ten times harder problem, which is scaling, right?
And I think it's been successful so far.
Like, the roll-ups are, you know, very mature in a lot of ways.
I mean, you know, obviously there are some things that are behinds, right?
Like, it has taken them longer than I've hoped to take off training wheels.
But, you know, some have, right?
Like, some of the StarCore systems have taken off training wheels.
Fuel has taken off training wheels.
And I think, you know, we are
going to start to see kind of partial removals of training wheels start happening over the next
year. So, you know, the benefit of danksharding as a whole is basically that, you know, it lets
us split off the development effort: the problem core developers have to solve is a much
simpler problem, and the remaining work gets shunted off to a very separate and increasingly
amazing community of role-up developers. And also, it allows us to really keep the, you know,
Ethereum base protocol much simpler and, you know, get something out, I think, much faster.
It also moves a lot of the value accrual to like third-party for-profit companies, right?
This is true. I mean, there are costs to this and there are benefits to this, right?
So, I mean, one of the benefits is that, you know, if we have non-Eth tokens in the Ethereum ecosystem,
then, you know, those tokens could contribute to public goods funding, right?
OP is probably the best example of that.
And I think they've really
kind of taken the kind of theory and philosophy
around public goods funding to a really amazing level.
But there's other examples of things like this
in the Ethereum ecosystem, right?
Like, the Uniswap DAO has a huge amount of funding,
and it's independent from the Ethereum Foundation.
And so that by itself has significantly increased
the funding decentralization
in the Ethereum community.
So those are benefits. But the costs are, yeah, like, some portion of the fees definitely, you know, goes to other people, some portion that could have otherwise gone to ETH holders.
But, like, you know, I don't think ETH holders should be the only people who benefit from all these people's hard work, you know.
Sure.
Meher Roy had a thread this morning on Twitter talking about how, basically, if we took the base fee and didn't
burn it, but instead kind of put it towards rollup, like, Ethereum Foundation-based rollup
development, you know, we could have an Ethereum-based layer 2 that's not proprietary.
So the concept of, like, in-protocol issuance going to specific development teams, like, this is
something that has been brought up before, and this is something that has been pretty soundly
rejected, right? Like, I think it was EIP-2025, or, like, somewhere close to that. There was an
attempt to make an issuance of 0.045 ETH per block that would go to the Eth1.x team. And that just
got, you know, shouted down really hard. And like the reason is basically that like, you know,
we're trying to minimize governance, not maximize it, right? And, you know, we want the governance
load of the chain to go down over time. We don't, we definitely don't want the chain to,
you know, become legally classified as some kind of economic political agent. And the core developers,
they've been insistent in a whole bunch of contexts that they want their role
to be as narrowly technical as possible. And they're just, you know, very fearful of making these
kind of social value judgments that are going to inevitably leave lots of people very angry at them
because, you know, they do a tough job and, you know, like way too many people are angry at them
for lots of things already.
So there's, I think, a very understandable desire by all parts of the Ethereum ecosystem
to really value credible neutrality of Ethereum and ETH.
And that is something that's that probably prevents that from being a solution.
I mean, you know, if we had a magic wand and could go back eight years in the past and,
you know, like, pre-mine an extra 3 million ETH, and have that go into a long-term fund
that gets eventually governed by soulbound tokens and, you know, whatever, whatever, like, maybe.
But, like, we have to live with, you know, the Ethereum ecosystem as it is today.
And, like, the closest that we have to being able to kind of benefit from, you know,
token-based public goods funding is, like, basically allowing these L2s with separate L2 revenue streams to prosper.
Okay, that's fair.
Now, maybe if we look at layer 2s and compare them to layer 1s that are connected via trustless bridges,
so kind of the ZK light-client bridges that we've seen recently,
which is kind of more of the IBC model, right?
Like, it's a Cosmos-ification of the Ethereum ecosystem.
How do you see that in comparison to the real layer twos?
Yeah, I mean, well, I think the big difference between those two models is like whether or not you have shared security.
right?
It's, like, Ethereum has shared security, and it also has this, like, really nice tight-coupling
property that basically says that, like, the states of the different things that live inside
and commit to Ethereum are going to be kind of compatible with each other even if Ethereum
gets 51% attacked, right?
Like, even if there is a massive reversion, even if something terrible happens, like, you're,
like, you might be able to revert some things, but you're not going to be able
to cause an inconsistency, right?
Whereas if you have chains with all separate governance, then, like, first of all, you know, you could try to concentrate on and take out one without taking out the others.
Then there's a question of, I mean, what if there's some internal political dispute inside of one that leads to a hard fork, but not the others.
And there, like, there's just kind of more of that kind of, I guess, security concern friction with any kind of interoperability with that model.
So, yeah, basically the argument is like, you know, one is like every blockchain is like a country,
but the other is like, you know, every blockchain is like a country, but they're all part of the same defensive alliance.
And I guess Ethereum has kind of committed itself to that approach.
And then, of course, there's the, you know, you could go even further and kind of take the extreme of saying,
well, you know, no, we're going to kind of force all the countries to merge with each other and make a really big, big superpower,
which is the, I guess, the equivalent of kind of the, you know, ultra-big layer ones.
So, you know, there's a spectrum and, like, there's costs and benefits on all sides of the
spectrum, and I guess, you know, Ethereum has chosen the cost and the benefits of the middle road at
this point.
That's fair.
But the security assumption on layer 2s, this only holds for tokens that are bridged, right?
It's not for natively issued tokens on layer twos, right?
Why wouldn't it be, though?
Because you can't force an exit, right?
Why can't you force an exit?
Because there's not necessarily a representation.
A representation of what?
Of the token on mainnet.
Right.
So you're saying if there's a token where kind of the home base of the token is on optimism,
then, I mean, I think like if, like we get to the point where optimism is fully trustless, then, you know, like,
No, you just, you know, you make a copy of, like, you make a wrapper of, like, of your token on optimism, of your token that is based in arbitram.
And you make a wrapper that's based in the mainnet and you make a rapper that's based everywhere else.
Like, it's basically just, it's equivalence to a sharded system, right?
Like, you're just, like, you have wrapper tokens on all the domains and it does, and almost doesn't matter which domain is the home domain eventually.
Like, this is assuming that the home domain is governance minimized.
I think you do want the home domain of a token you issue to be governance.
minimized, but aside from that.
I think I have to think about that, but I take your word for it for now.
What about centralized choke points in layer twos?
So basically, at the very least, the sequencer in all layer 2s
that I'm familiar with is still very much centralized, right?
Yes.
So decentralizing the sequencer is very important, right?
And I think there is a couple of different steps on that roadmap, right?
So the first is that optimism already has implemented, and I'm not sure, maybe Arbitram already has this about it.
I don't know.
It's like this concept of backup channels, right?
Like there is an alternate way to send an Ethereum.
Like anyone can send an Ethereum transaction that contains a bunch of optimism transactions.
And the rules of the optimism protocol force those transactions to be considered and to be applied to the state within some amount of time.
I think it might be within the next 10 minutes or something like that.
Right.
So that already means that even if the sequencer wants to censor, the sequencer can't censor.
So that's step one.
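A sketch of that backup-channel rule with hypothetical names; the 10-minute window is the figure from the conversation, not from any spec:

```python
FORCED_INCLUSION_WINDOW = 10 * 60  # seconds

def sequencer_block_valid(block, forced_queue) -> bool:
    """A sequencer block is invalid if some transaction posted on L1 has
    waited past the window and still isn't applied to the L2 state."""
    for tx in forced_queue:
        overdue = block.timestamp > tx.posted_on_l1_at + FORCED_INCLUSION_WINDOW
        if overdue and tx.hash not in block.applied_tx_hashes:
            return False
    return True
```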
But then, step two: I think that's insufficient, and you do want to make these systems be independent of a central operator.
And for that, you have to decentralize the sequencer.
And I think there's different approaches to decentralizing the sequencer, right?
So one example of this is that, you know, you could make an in-protocol
auction, where you just, like, make an auction for, you know... you could just do a descending-
price auction if you want to make the auction cheap, and, like, limit it to, you know, one transaction
per winning bid, where you just kind of, you know, buy up the right to be the sequencer for some
future slot. So that's a simple approach, and then you can change it around. You could say, instead
of one future slot, it could be 100 future slots or 50 future slots. You could even say, oh,
it's going to be 100 slots, but then there is governance that could kick you out if you do things
that are abusive, like making the sandwich attacks we mentioned earlier.
So that's one route.
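The descending-price idea fits in a few lines; every parameter here is invented:

```python
# Dutch auction for the right to sequence the next N slots: the price falls
# over time, and the first bidder to accept the current price wins.
def current_price(start_price: float, decay_per_sec: float, elapsed_sec: float) -> float:
    return max(start_price - decay_per_sec * elapsed_sec, 0.0)

# The winner gets slots [next, next + N); N = 1, 50, or 100 as discussed,
# optionally with a governance escape hatch against abusive sequencers.
```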
Another route is that you could start, like, you could basically have, you know,
an in-protocol proposer mechanism that's, like, very similar to what Ethereum does, right?
But then, you know, you have this question of, like, how do you handle MEV extraction?
And then, you know, are you going to need an auction mechanism on top of that?
And another thing you could do is you could do a kind of, you know, hybrid,
a kind of proposer-and-layer-2-attester design, right?
So this is, if you want to keep the concept of pre-confirmations and you want to keep them
trustworthy, then you have, like, a bunch of attesters that are, you know, let's say in the case
of Optimism, it would be OP token holders, or it could be, you know, citizens or whatever.
And they would have to sign off on, like, every microblock that a sequencer makes, right?
So the sequencer makes a microblock, two-thirds of the attesters sign off.
The sequencer makes a microblock, two-thirds of the attesters sign off.
The nice thing is that you don't even need a complicated Tendermint.
All you need is just kind of the dumb consensus algorithm that says that the thing is fine if two-thirds of people agree on it.
And, like, you know, one third gets slashed if they sign on competing things.
The reason you don't need a full consensus algorithm is because full consensus algorithms are harder, because they have, like, they work in the case where, like, there's nothing else to appeal to if there's a disagreement.
But in this case, like, there is something else to appeal to if there is a disagreement, which is the chain, right?
So that's another kind of approach.
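The "dumb consensus" rule really is just a threshold check; a sketch with hypothetical signature objects:

```python
# A microblock pre-confirmation counts once two-thirds of attester stake has
# signed it; signing two competing microblocks at one height is slashable.
def microblock_confirmed(signatures, total_stake: int) -> bool:
    signed = sum(sig.stake for sig in signatures)
    return 3 * signed >= 2 * total_stake
```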
And then another approach might be that, you know, if someone makes a really good decentralized builder, then, you know, you could just, like, make them be the sequencer.
You could even have an auction market between, like, try to only accept decentralized builders as sequencers.
So there's a bunch of approaches.
There's like a bunch of tools in the toolbox.
I think the challenge is, like, how quickly can you get to something that is significantly better than the status
quo that we have today. And it's, like, you know, they have to balance lots of complicated constraints,
right? Like there's a, like specific security constraints, there's optics constraints, there's
legal constraints, and all sorts of things. And, you know, they know how to balance all of those
issues much better than I do. But, you know, I'm hoping we can get there soon. Do you venture to
make a guess on how soon, you know, in like, you know, months or years? Good question. I think, well,
obviously the rollups have a prioritization challenge between
the two problems that have kind of a similar spirit but are doing different things, right? Which is,
uh, decentralizing the sequencer, and taking off the training wheels. And, I mean, there's, you know,
there's this challenging question of, like, which one do you spend more effort on. And I think
one of the challenging things is that, from a, like, protecting-users'-funds point of view,
like, prioritizing having safe, uh, safe fraud proofs or ZK
proofs and reducing or removing the training wheels is more important. But then, you know, I don't know,
it's possible that from a legal point of view, decentralizing the sequencer ends up getting
prioritized by some of them. But, you know, or hopefully they, you know, have enough resources to
have two separate teams, right? Because like, especially when resources are not your constraints, right?
Like, the best way to improve productivity is to kind of split a problem in half and make sure those
halves don't have to talk to each other. So, yeah, I don't know. I mean, I think, you know,
over the next couple of years, we are going to see very serious improvements on both.
Okay. And when danksharding?
So danksharding is interesting, because it's, like, inherently this multi-phase thing, right?
So phase one of danksharding is proto-danksharding, aka EIP-4844.
And that, like, people really want it to be in, you know, early to mid next year, but, you know, that could get delayed more.
The good news is that the EIP is basically finished.
And, you know, there even, I think, are, like, basic devnet implementations
of it, and the trusted setup ceremony is
started and
so it looks like we're pretty far, but, you know, at the same
time there's still a lot of
work left to be done. So,
yeah, I don't know, it depends. Like, optimistic,
spring; pessimistic, fall.
But, you know, we'll see. So that's phase one.
Then phase two of danksharding is:
how do we, you know, go from
one megabyte to 16 megabytes, and how
do we actually, like, start splitting up the
data load? That gets
more complicated, and,
you know,
there we have to deal with issues of, like, you know, what is the data availability sampling going
to look like? You know, is it going to be a peer-to-peer network, or might it be dependent on, like,
some 1-of-N trusted supernode thing at the beginning, and then, you know, decentralize more
over time? One of the things I think, with data availability sampling, is
that I don't think we're going to see an all at once transition. I think we're going to see some
nodes transition to sampling over time.
And possibly like the big rich people are going to keep downloading full data.
And then, you know, start off with 20% of the network sampling and then 40 and then 60.
And as that happens, the total cap on the number of blob transactions and like the
megabytes might increase, you know, slowly from the current one all the way up to 16 and
then even higher.
So I think it's going to be a very multi-stage thing, both in terms of the, you know,
trust assumptions and the percentage of the nodes that are participating and just all kinds of
considerations. I want to skip over the verge and the splurge and so on. I think we'll just do an
interview with another core dev at some point. But I do actually have a big-picture question to
kind of end with. So when you look at this space, the number of hacks, right? I mean, it's
$3 billion this year alone. And the possibility space,
for future hacks is only getting larger by introducing all this ZK technology, right?
Because, I mean, in principle, there are going to be bugs in that.
So how do we get to a point where we can, in good conscience, kind of roll this out to the normies?
Hmm. It's a big challenge. It's, I mean, it's a challenge I actually talked about at Rollup Day a couple of days ago, right?
Specifically in the ZK-EVM case,
that there's going to be a long period of time
between when we have ZK-EVMs that look good,
and they're available and they look legit, to the time
when we can actually trust that they are
really, really robust, and they're, like, as Lindy
as hash functions are, or whatever.
But within that time, I think
there's different routes that we can take.
One of the things that I'm a fan of is the
kind of multiple implementations approach.
So we have multiple
ZK-EVMs. So we don't need to
standardize on a single one, right? Like, you know,
different clients can
have different ZK-EVMs, and you can have
Ethereum blocks
going around with different proofs attached to them.
And within a roll-up, like, you know,
you can literally have like a, yeah,
the thing I suggested is
like a Gnosis Safe multisig,
where the addresses going
into the Gnosis Safe just, like, are
addresses that kind of output a message
that says send the money out if there is a zero
knowledge proof of a particular withdrawal.
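That multi-prover setup reduces to a K-of-N check over independent verifiers; a sketch with hypothetical verifier objects:

```python
# Funds move only if at least `threshold` independently implemented ZK-EVM
# verifiers accept a proof of the same withdrawal, so a bug in any single
# prover can't drain the bridge on its own.
def release_withdrawal(withdrawal, proofs, verifiers, threshold: int = 2) -> bool:
    approvals = sum(1 for v, p in zip(verifiers, proofs) if v.verify(withdrawal, p))
    return approvals >= threshold
```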
So with that kind of stuff, like, you know, we might be able to have a middle phase where we get a lot of benefit from the technology protecting us, but like we aren't fully vulnerable to like one thing having bugs that break everything.
But that's on the ZK side, and on things closer to the base layer.
On the application side, it's like a different story, right?
Because, like, I mean, almost all of these DeFi hacks, they happen in applications that I don't use and I would never endorse using.
And there's, like, all of these other communities of people that just have, you know, much more aggressive ideas about what kinds of things they want to do on-chain.
And, you know, some of them are just inevitably going to overshoot, you know, whatever the actual level of capability of what we can make secure is.
So I think the best that we can do is just kind of slowly expand the frontier of what kinds of things can be done safely over time, right?
So, you know, we have safe multi-sig wallets now.
We did not have safe multisig wallets five years ago. You know, remember Parity, right?
We, you know, we have, you know, Uniswap has been safe for a long time, right?
Like, you know, the MakerDAO contracts have been safe for a long time, and, you know, people start to make forks of them.
You know, we're going to move toward, like, different DAO governance contracts becoming more and more safe over time.
Safety is going to increase with, you know, more and more fancy formal verification happening.
And there's going to be a bigger and bigger space of things that you can do where we're fairly comfortable that, you know, you can do them safely.
And then there is going to be this kind of bigger zone of, you know, crazy stuff, where if people go into that zone, they're going to lose lots of money.
And like that's unavoidable.
The thing that we can do and that is important is, of course, like, do a much better job of communicating the difference between the safe zone and the crazy zone.
And that's something that, like, we've been, I think, medium good at over time.
Like, there's a limit to how good we can be because, like, unfortunately, you know, if you call out bad projects for being bad, then, like,
you're not going to be welcomed in the bad project community anymore.
And there's going to be a community of people that just kind of ignore the sane people entirely.
But like try more to do that.
That's, you know, yeah, try more to do that.
Put more safety tools in people's hands.
Try our best to protect the people that we can protect.
Try our best to, you know, when we can respond to hacks
after the fact, do a good job of responding to them,
just kind of do the stuff that the good parts
of the Ethereum ecosystem have been doing,
but just keep doing them and do a better job of them.
I think that's a perfect answer.
We just got a signal from your people,
so I hope that means one more question, a very short one.
What do we absolutely need to get right in the next year as an ecosystem?
So what's the one thing you think we need to pay attention to
in order to succeed?
I'd still say scalability.
Scalability, okay.
I think scalability is like the big one because I do think that it is one of those areas
where there is a limited time window, because if we don't solve scalability by the next
bull market, then there's just an overwhelming chance that, like, forms of scaling that sacrifice
all attempts at trustlessness are just going to become legitimized and dominate.
And that's something that's going to be very hard to come back from, right?
And like scalability is something that, you know, we have to get right, we have to make progress
at getting right.
And, you know, we have to kind of put in the initial steps to make sure that there is a good
path toward improving that scalability over time.
And it's like a big challenge, but, you know, there's also lots of very smart and hardworking
people working on making it happen.
Perfect.
Thank you so much for coming on Epicenter.
Thank you so much for having me.
Thank you for joining us on this week's episode.
We release new episodes every week.
You can find and subscribe to the show on iTunes, Spotify, YouTube, SoundCloud, or wherever
you listen to podcasts.
And if you have a Google Home or Alexa device, you can tell it to listen to the latest
episode of the Epicenter podcast.
Go to epicenter.tv slash subscribe for a full list of places where you can watch and listen.
And while you're there, be sure to sign up for the newsletter, so you get new episodes
in your inbox as they're released.
If you want to interact with us, guests, or other podcast listeners, you can follow us on
Twitter. And please leave us a review on iTunes. It helps people find the show, and we're always
happy to read them. So thanks so much, and we look forward to being back next week.
