Epicenter - Learn about Crypto, Blockchain, Ethereum, Bitcoin and Distributed Technologies - Danny Ryan: Ethereum Foundation – An Eth2 Progress Update
Episode Date: May 27, 2021

Ethereum will switch from Proof of Work (PoW) to Proof of Stake (PoS), likely later this year, in a much-anticipated upgrade to Ethereum 2. The switch to PoS aims to make Ethereum both more secure and more sustainable by securing the network through Ether instead of mining. A second Eth2 update will address scaling through sharding at a later time.

Danny Ryan, Researcher with the Ethereum Foundation, has been a major driving force behind the Eth2 project. He joined us for a progress update and we chatted about how the protocol will work in its steady state, what has launched so far, what happens in The Merge, and how PoS will affect centralization tendencies.

Topics covered in this episode:
- Danny's background and how he got into crypto
- An overview of Eth2 - Phase 0, Beacon Chain
- The role of a validator and building blocks
- Penalties and rewards within the protocol, including slashing
- What the epoch is and how it relates to finality
- The Proof-of-Stake merge
- Why Proof-of-Stake is favorable for security purposes
- What is the roadmap for sharding?
- Ethereum fees

Episode links: Ethereum Community, Eth R&D Discord, Ethereum on GitHub, Eth Research, EF on Twitter, EF blog, Danny on Twitter

Sponsors:
- Solana: Solana is the high performance blockchain supporting over 50k transactions per second to power the next generation of decentralized applications. - https://solana.com/epicenter
- Exodus: Exodus is the easy-to-use crypto wallet available on all platforms and supporting over 100 different assets. - https://exodus.com/epicenter
- ParaSwap: ParaSwap's state-of-the-art algorithm beats the market price across all major DEXs and brings you the most optimized swaps with the best prices and lowest slippage - http://paraswap.io/epicenter

This episode is hosted by Friederike Ernst & Martin Köppelmann. Show notes and listening options: epicenter.tv/393
Transcript
Hi, Danny. It's good to have you on.
Yeah, it's great to be here. Thanks for having me.
Cool. So Danny Ryan is with us for the Ethereum Foundation, and we'll talk about
Ethereum 2 in a bit. And we also have a special guest co-host today, Martin Köppelmann from Gnosis.
Yeah, good to be here. I'm looking forward to learn more.
Cool. Before we dive right in, let me tell you about our sponsors.
Our first sponsor is Solana.
Solana is a next-generation blockchain with lightning fast blocks and fees less than a cent per transaction.
Scalability is perhaps the single biggest challenge preventing crypto from becoming the backbone of the world's financial system.
And today Solana may well be the best solution we have.
Go to Solana.com slash Epicenter to learn more.
We would also like to thank Exodus.
Exodus is an easy-to-use wallet which supports hundreds of assets and has native apps for all platforms, including iOS and Android.
And as a fully non-custodial wallet, there are firm believers in the not your keys, not your coins mantra.
Go to exodus.com and give it a try.
ParaSwap just came out with a huge update that's even faster and more liquid.
It's cheaper than Uniswap and comes with a new gas token that can cut your gas fees by up to 50%.
ParaSwap is now multi-chain and has expanded to Polygon and Binance Smart Chain.
Start trading at paraswap.io slash epicenter.
Cool. Danny, it's so good to have you on. We've been meaning to do this episode for a super long time. It's been way overdue. And just before the podcast, we kind of talked about the outline. And it was definitely enough stuff to actually fill at least two episodes.
Yeah.
So, Danny, before we dive into the protocol, can you tell us a little bit about yourself? What brought you to the Ethereum Foundation? What did you do before? What piqued your interest in blockchain?
I honestly, it's an honor to be here.
I, um, early on in my blockchain journey, I remember listening to, to Epicenter and kind of gobbling up all of the content.
So it's cool to be on the other side of that right now.
How did I get here?
It's a similar story to most, I think, honestly.
I, I used to be a freelance software developer for many years.
I graduated college and was like, I really didn't want to work a normal job.
I didn't want to work in an office.
I didn't want to live in San Francisco, all that.
So I moved from New Orleans.
and was just kind of like helping small businesses with weird software software solutions.
That was fun.
Did that for many years.
And then I think I started paying attention to Ethereum around The DAO, pre-DAO hack.
Someone sent me an article.
I think it was in the New York Times.
It was like all this money is being raised for this weird thing.
And that piqued my interest.
I had heard about Ethereum before and I hadn't realized that it actually launched.
And I started paying attention.
And The DAO in particular, I know it was a
fantastic disaster, but the fact that that could exist and was happening, really, I think,
allowed me to see and start to process what this technology could do. So I guess that was about
2016. I became more and more obsessed. At the beginning of 2017, I realized that it's all I
wanted to think about and all I wanted to do. So I got rid of all of my freelance clients,
and I said, I'm going to figure out how to make this my job. On the journey of
how do I actually make this my livelihood? I heard of this proof of stake thing. And I thought to myself,
okay, well, this doesn't make any sense. This is never going to work. A couple weeks later,
I'm like, okay, this makes sense. This is interesting. But how can I make this a business? Can I do
something with this to make this my livelihood? And I'm like, okay, I could probably make like a
staking pool. So I started reading all about proof of steak and all about how it was going to work
and all that kind of stuff in 2017, thinking that I could make a staking pool.
Then I realized there was still work to do.
So I started helping out with some of the work here and there, like contributing to the research where I could and like helping out with testing and various things online.
So come the end of 2017, I had been collaborating on the internet with various contributors.
I had been working on some like testing infrastructure for Casper FFG, different things.
And the EF was like, hey, do you want to join?
We have plenty of work to do.
So I joined the EF at the beginning of 2018.
And at that point, I thought we'd probably launch proof of stake in about like six, seven months.
Little did I know that I would still be working on that very problem today.
And as we will discuss, proof of stake for Ethereum is live, but it's certainly not completed.
And so today I am still working on that very problem.
Super cool. Yeah, it's been a long time coming. So what exactly do you do at the Ethereum Foundation?
So I work on the research team, and I do a mix of research, specification writing, and then a lot of
communication and coordination around those two things. The Eth2 project
consists of many teams at the EF, many external teams, Eth2
clients, whatever that may mean, Eth1 clients, and the intersection of all that. So I spend a lot of
time communicating with engineers and helping people understand things and helping kind of coordinate
and make sure the project keeps moving forward. Yeah, super cool. How many people are working
on ETH2 currently? Do you have any idea? Definitely over 100, but there's five active ETH2 client
teams of varying size. There are
probably 20 people at the EF that work on this stuff full-time.
There's plenty of people that work on it part-time.
Increasingly so, as we approach the merge, which we'll talk about later,
what we call ETH-1 clients are increasingly working on the merge
and working on ETH-2.
And so it's quite a number of people.
It must be difficult to coordinate because, I mean, it's kind of,
there's so much of research angles still to it.
And I think research is something that is not infinitely parallelizable, right?
Right.
And, you know, I'm one of probably many people that coordinate various things.
And it's a very organic open source effort.
And so the amount of coordination isn't incredibly high.
Although at the end of the day, we all need to talk to each other.
We all need to, like, coalesce on single path and decisions.
And so I try to help facilitate that.
Then maybe let's talk about the things that have been decided and that are live.
Basically, tell us about the state of Phase 0, or what is live and what is already working.
So first I'm going to answer what is Eth2, very broadly. And Eth2
is definitely a bit of a misnomer, but we can go along with that term. It is a series of major
consensus upgrades for Ethereum aimed to make the protocol more secure, more sustainable,
and more scalable.
And at the core of that is the move from a proof of work consensus to a proof of stake
consensus.
So instead of securing the network with mining hardware and energy consumption,
securing the network with the tokens itself, the ether.
And so at the core of that is the bootstrapping and the creation of this new consensus
mechanism. And what, as you mentioned, is live today is what we call phase zero. And that went live
in December of 2020. And that was really the bootstrapping of this new proof of stake consensus mechanism
that is called the beacon chain. So in December, tons of Ethereum community members and different
institutions put a bunch of ether as capital and collateral into what we call the deposit contract
and kickstarted this new consensus mechanism called the beacon chain.
The beacon chain exists in parallel to the current Ethereum network, so in parallel to the
proof of work network, which is still securing all of the assets and applications and contracts
and accounts today.
So we have, on the one hand, the proof of work network chugging along.
And on the other hand, this new consensus mechanism called the beacon chain existing in parallel
to this building and securing itself.
I think today there's something like 4.5 million ether locked and secured in this chain.
I don't know what that's worth today.
It depends on the minute and the hour.
But this thing exists.
This thing finalizes itself.
This thing builds itself.
But ultimately what it does is it just builds and secures itself.
And this is by design.
This is an iterative path to get rid of the proof of work and to move.
Ethereum main net to this new consensus mechanism. Obviously, Ethereum main net is used by tons of people,
secures tons of value. And there's a lot at stake in this operation. And so what we've done is
built it in parallel, kind of vetted it in production, done tons of tests, live. And now what we're
working on is actually the deprecation of the proof of work consensus mechanism in favor of
this live proof of stake consensus mechanism. So that's where we're at today. There is a proof
of stake consensus mechanism bootstrapped live, securing tons of value, but really just kind of
securing itself in isolation. Then let's deep dive into what it exactly does. So right now it
comes to consensus on what? It comes to consensus on itself. And by itself, what I mean is the
proof of stake consensus mechanism and all of the little gadgets and things in it. So it has a
validator set. Each validator is worth approximately 32 eth. So there's something like 140,000
validator entities in this consensus. Each one of them has like its own little state. It has its
balance. It has duties. It has like a job at any given time. It has randomness generation.
It has information about finality. So which portions of the chain are finalized and would never
be reverted. And it has a lot of just various accounting between finality and kind of the head of the
chain. So there's a number of operations related to the functionality of this chain. And those
operations are kind of what we call validator level transactions, so system level transactions.
And really what it does is there's a core operation called attestations where validators are
constantly signaling what they see as the head of the chain
and what they see as their local state of the world.
And they use these messages to come to consensus with each other
and ultimately drive this core, like, system layer of the chain.
There's some other operations related to validator activity,
like deposits, onboarding new validators, exits, leaving the validator set.
And a couple of other things.
So really, really, it's like this, it's a proof of stake system.
And there's a lot of different accounting, different little operations going on,
and it builds and comes to consensus on itself.
Let's get to our sponsor Solana.
Now this is a special ad for me to read because I've been a deep supporter of this project
since meeting the Solana team back in 2018.
I invest personally in the project, and my company, Chorus One, is super deeply involved in the
Solana ecosystem, including running the biggest validator.
So what's so cool about Solana?
Well, we all know that scalability is the single most important issue facing the blockchain
industry today.
And the Solana blockchain is an amazing solution for it.
The network supports thousands of transactions per second with 400 milliseconds block times and over 500 validators.
The special thing about Solana is also that it's not a sharded blockchain.
It's a single blockchain hyper optimized for performance.
So that makes it really easy to maintain composability between all of the apps on Solana so that they work together seamlessly now and forever.
The Solana ecosystem is growing at a rapid pace and it's a great place to build your project.
or just get involved with the community.
So go to Solana.com slash Epicenter to learn more.
In principle, I can become a validator, right?
So I need 32 ETH.
And what do I need to do then to become a staker?
Yes.
Anyone can become a validator if they have or accumulate 32 ether.
And what you do is there's a number of different tools,
the different paths, there's this nice tool called Launchpad,
which has a nice, like, user interface.
You can do this in a number of ways.
But you can go to Launchpad.
You do a key generation step where you generate an active key,
so the key that's going to sign attestations and build blocks and different things.
And then what we call a withdrawal credential, which can be a cold key.
It doesn't have to be on the machine.
That ultimately owns the funds when the validator's done with validation.
And so, to become a validator, what you do is you take the 32 ETH,
generate some keys, sign a deposit message, and send that ether to the deposit contract,
which is a special contract on the current Ethereum mainnet on eth one. You send it with this data,
and then the beacon chain, which operates in parallel to that proof of work network,
is listening to the deposit contract, coming to consensus on the state of the deposit contract,
and inducting new deposits. So right now it's kind of like a one-way bridge from the proof of work
Ethereum into this new beacon chain. Once you have your validator deposit inducted into the beacon chain,
there's a little bit of like process overhead, a little bit of time that it takes in terms of
consensus on this, then you get a new validator record in the state of the beacon chain.
The beacon state is like this kind of system layer state of this thing. And after a number of epochs,
something like four epochs, each epoch being 6.4 minutes, you then become activated.
At that point, your validator, each epoch, will have at least one, if not a number of, duties with respect to the beacon chain.
And they get little assignments and they listen to the network and sign messages and talk to each other and come to consensus on things.
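The timing just described (12-second slots, 32-slot epochs, roughly four epochs from induction to activation) works out as follows. A quick sketch in Python, using the Phase 0 constants as stated; the four-epoch figure is approximate and ignores the activation queue:

```python
# Illustrative timing math for validator activation, using the constants
# mentioned above: 12-second slots, 32 slots per epoch, and an activation
# delay of roughly four epochs after induction into the beacon state.
SECONDS_PER_SLOT = 12
SLOTS_PER_EPOCH = 32
ACTIVATION_DELAY_EPOCHS = 4  # approximate; the real delay also depends on the queue

seconds_per_epoch = SECONDS_PER_SLOT * SLOTS_PER_EPOCH  # 384 seconds
activation_wait = ACTIVATION_DELAY_EPOCHS * seconds_per_epoch

print(seconds_per_epoch / 60)  # 6.4  -> the "6.4 minutes" per epoch
print(activation_wait / 60)    # 25.6 -> minutes of minimum activation wait
```

So in the best case a new deposit is active and on duty about half an hour after induction.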
I want to directly ask about the deposit contract because the premise so far was that the two things are living in parallel, but that is already obviously a connection.
So apparently, the Eth2 chain needs to read from the Eth1 chain and kind of needs to understand the Eth1 chain.
So does it mean that to run an Eth2 validator, you also need to run an Eth1 client? Or otherwise, how would you know that this deposit happened?
Right.
So, yes, there is a one-way and very restricted bridge from the Eth1 chain into
the beacon chain. And as a validator, you are running the beacon chain, which is actually relatively
lightweight, and running the Ethereum 1 Proof of Work network. This is to be able to listen primarily
to that deposit contract and be able to build and connect that bridge. To bootstrap this consensus
mechanism, which is a crypto-economic consensus mechanism based off of ether, you do need ether.
And so that link is critical for the bootstrapping and functioning of the system.
As a validator, you do have these two pieces of software that you're running in parallel and communicate with each other.
And this actually is kind of representative of what the system would look like after the merge.
So once these systems are unified, you similarly would have to run the beacon chain, which is kind of the system level state, as well as a piece of software that will give you access into the execution layer,
into the things that we know and love, essentially like geth minus proof of work plus beacon chain.
I assume it's incentivized to be a validator. So what do I get if I get to build a block?
And how is it determined if and when I get to build one?
So there are two primary actions that are rewarded for the validator.
One is this action called attesting, where you are assigned to attest
exactly once per epoch, so exactly once per 6.4 minutes. And this accounts for actually
seven-eighths of your validation reward. So it's the majority of the issuance goes to this
very regular intervaled activity, which is nice. There's a reason for this because it helps
with kind of hardening the fork choice and keeping things very stable because many validators
get to, even though only one validator at any given slot is producing a block, many validators
get to throw in their weight and say what the head is. So it makes it very difficult for
this monopolistic activity of producing a block to be able to reorg and do different
attacks. And it's also nice because it really smooths out rewards. Whereas in proof of work,
you're rolling the dice over and over and over again. And every once in a while, you randomly
get a chance to produce a block and get a big payout. Whereas in this proof of stake system,
the rewards and payouts are very much more regularized, even if you have just say one validator.
So attestations are this very critical message type for securing the head of the chain and finalizing things.
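The seven-eighths split mentioned above can be written down directly. An illustrative sketch only; real issuance has more components, and the numbers here are made up:

```python
# Rough sketch of how a validator's issuance splits between duties, per the
# seven-eighths-to-attestations figure mentioned above. Illustrative only.
from fractions import Fraction

ATTESTATION_SHARE = Fraction(7, 8)  # the regular once-per-epoch duty
PROPOSAL_SHARE = Fraction(1, 8)     # the occasional block-proposal duty

def split_reward(total_reward: float):
    """Split a validator's total reward into attestation vs proposal parts."""
    return (float(ATTESTATION_SHARE) * total_reward,
            float(PROPOSAL_SHARE) * total_reward)

att, prop = split_reward(1.0)  # e.g. 1 ETH of annual issuance (made-up number)
print(att, prop)  # 0.875 0.125
```

The point of the lopsided split is exactly what's described above: most of the reward flows through the steady, predictable duty rather than the rare, lottery-like one.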
But then, as you said, this other very critical role is actually producing a block.
And so based off your validator assignments, which is based off of randomness generation on chain, which we can talk a little bit about, every single slot, there is exactly one validator assigned to produce a block.
A slot is every 12 seconds.
It's kind of like the heartbeat of the system.
So instead of the stochastic process of mining, where there's a target for block time
and blocks randomly get produced around that target,
there's this tick every 12 seconds.
Blocks can be produced if the proposer shows up.
And every single 12 seconds, a producer, a validator, is assigned.
And so say it's slot 10,001.
You had a little bit of lookahead, say you had 32 slots in advance,
so you knew that you were going to produce a block.
So you're listening to the chain, you listen for valuable things to include in your block,
and you build off of the parent, produce a block, and broadcast it to the network.
Valuable things today are primarily just these validator operations.
So attestations and deposits and things like that.
But post-merge, you would also be including user land application layer transactions,
and seeking to maximize transaction revenue like miners do today.
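The proposer schedule described here (exactly one validator per 12-second slot, known some slots in advance) can be sketched as a toy schedule. The seeded `random.Random` below stands in for the chain's real on-chain randomness (RANDAO), and all names are made up:

```python
# A toy sketch of the proposer schedule: every 12-second slot has exactly one
# assigned proposer, and validators get some lookahead to prepare. The seeded
# RNG here is a stand-in for the chain's real randomness generation (RANDAO).
import random

SECONDS_PER_SLOT = 12

def assign_proposers(validators, start_slot, num_slots, seed):
    """Deterministically (per seed) pick one proposer for each slot."""
    rng = random.Random(seed)
    return {start_slot + i: rng.choice(validators) for i in range(num_slots)}

validators = [f"val_{i}" for i in range(8)]  # made-up validator names
# 32 slots of lookahead, as in the slot-10,001 example above:
schedule = assign_proposers(validators, start_slot=10_001, num_slots=32, seed=42)
print(schedule[10_001])  # the single proposer assigned to slot 10,001
```

Because the assignment is derived from shared randomness, every node computes the same schedule, so everyone knows who may produce the block at each slot.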
Let's get to our sponsor, Exodus.
Exodus is a fantastic cryptocurrency wallet that strikes the right balance
between ease of use, security, and great features.
You can get Exodus on the iPhone, desktop app, web app, Android,
whatever platform you use.
It's a non-custodial wallet, and that is so critical.
Because what's the point of crypto if you don't control your own assets?
With Exodus, you always do.
They're old school, and they've been around since 2015.
Over 1.2 million users rely on Exodus, so you know that they've stood the test of time.
They have support for over 100 different crypto assets, and from within Exodus,
you can easily exchange one asset for another.
They also allow you to buy crypto with Fiat, and they even have a great offer where you can
buy up to $500 worth of crypto through their iOS app,
and pay just $1.1 in fee.
So go to exodus.com slash epicenter
and check out their wallet.
We want to thank Exodus for their amazing support of Epicenter.
So what happens if, for some reason,
the validator that is assigned to this block or slot doesn't show up?
A slot can be skipped.
So in the normal case, this looks just kind of like a longer block time.
For example, just based on the
stochastic process in proof of work, sometimes you have blocks that happen in 10 seconds,
sometimes you have blocks that happen in 30 seconds. So in the event that a slot was skipped,
you would have 24 seconds in between the slot time. And that would look like a slight reduction
in capacity of the chain for that given time. Interestingly, with like 1559 fee mechanics and
variable block sizes that can kind of like be self-balancing in the average. But what we've seen
in the live system is something like a 99.5% participation rate with respect to blocks and
attestations. So in the normal operation of the chain, so not crazy global latencies or not some
attack scenario or major failure in a client, we expect to see almost all the time a block every
12 seconds.
So you said 99.5% for both the...
Attestations and the blocks.
Yeah.
All right.
Okay.
And do I get slashed if I don't show up or if I don't attest?
So we reserve the word slashing for very explicitly malicious activities and what can be more severe penalties.
So in normal operation, we have rewards and we have penalties.
And so you have this 32 ETH stake.
And if you do your job well, you can be rewarded.
And if you don't do your job or you like,
are failing, you're trying to do your job and you can't quite find the head of the chain,
you might instead lose a small amount of ether. And so in normal operation, if you, and this is
variable depending on the size of the validator set, but in normal operation, in a
year you might make anywhere from like 6 to 12 percent return on that 32 ETH deposit.
Whereas if you were offline the entire time, your penalties would equal approximately what you
could have made. So you actually, instead of making 6% that year, you would have lost 6% that
year. And that's not just opportunity cost. You would actually be penalized a little bit and see
a linear decrease in your stake. And so that's rewards and penalties. For all the basic
activities, you can either receive rewards or you can be slightly penalized if you aren't able to
perform your job. There are individual rewards and there's also a group component. So the amount of
your attestation reward would actually scale with this group mechanic. So if
100% of the validators are online, you can receive maximum validator reward.
But if only 80% of the validators were online, you would actually only get 0.8 of your total attestation reward.
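The group scaling described here boils down to a one-line rule. A toy sketch, not the exact protocol formula (real attestation rewards have more components):

```python
# Sketch of the group-scaled attestation reward described above: an
# individual's attestation reward is multiplied by the fraction of validators
# participating, so at 80% participation you get 0.8 of your maximum reward.
def attestation_reward(max_reward: float, participation: float) -> float:
    """Scale an individual's max attestation reward by network participation."""
    assert 0.0 <= participation <= 1.0
    return max_reward * participation

print(attestation_reward(1.0, 1.0))  # 1.0 -> everyone online, full reward
print(attestation_reward(1.0, 0.8))  # 0.8 -> 80% online, 80% of the reward
```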
And this is so that there's this incentive, one, to not, like, DDoS your neighbors and take them down so you can get their reward instead.
And also in the event of crisis, there's this group dynamic, so that you want to figure out what the hell is going on and try to fix the network if you
can. So those are rewards and penalties. You did bring up slashing. Slashing is actually,
it is a penalty, but it's a much more severe penalty, and it's in relation to these explicitly
cryptographically provable nefarious activities. So for example, you're assigned to a test every
epoch. This is very important for the operation of the chain. This helps finalize things.
This helps secure the head of the chain and the fork choice. And you're only supposed to do it once per
epoch. If instead you do it twice per epoch, you can be slashed because this is an activity that
can lead to a network fault. Essentially, the idea is in proof of work, you have a physical, real-world
piece of hardware that you can only point to one chain or another, so one fork or another. Or you
could split it, you could do 50% on that, or the energy on that fork, 50% on the other fork,
but you cannot put 100% onto fork A and 100% on to fork B.
Whereas in proof of stake, that economic asset is actually just related to you signing messages.
Signing messages is really cheap.
And so the idea here is that messages that can lead to you attempting to apply your stake in multiple places
and it could lead to network faults and confusion as to what the head of the chain is,
we have to make those messages expensive.
Similarly to how allocating your physical resources, the mining hardware,
was expensive. And so, thus, if you do some of these activities where you're essentially signaling
two different versions of a history, you can be penalized severely because they're provably nefarious
messages. And that's what is called slashing. By severe, I mean that if enough validators
were doing that type of double signaling within a recent window of
time to create a network fault, and that minimum threshold is one third of the validators,
then you'd be punished maximally. So if one third of the validator set is doing slashable things
within a recent time window, then those validators actually lose 100% of their stake. And that's
because we want to have like provable security bounds based off of, you know, if and when attacks
happen. Whereas if you're, if this is just like an isolated event, say I'm just running a single validator,
I do something ill-advised with my staking setup and I'm signing double messages by accident,
but no one else has really been doing it in the recent time window.
I get a slap on the wrist.
I get kicked out of the validator set.
But since the fraction of validators that have been slashed recently is very low,
my penalty is still relatively low.
I might lose like one ETH and get kicked out of the validator set.
So that's, so we have rewards.
We have penalties for normal operation.
And we have slashing, which is for these very explicitly
nefarious activities, like double signing attestations or producing two blocks in the same
slot, that kind of stuff.
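The correlated penalty just described can be modeled as a toy function: an isolated offender loses a small amount, but once the recently-slashed fraction reaches one third (enough to threaten a network fault), offenders lose everything. The linear scaling below is an illustrative simplification; the spec's actual proportional-slashing formula differs in detail:

```python
# Toy model of the correlated slashing penalty: the penalty grows with the
# fraction of stake slashed in a recent window, hitting 100% of the stake
# once that fraction reaches the 1/3 fault threshold. Simplified on purpose.
def slashing_penalty(stake: float, recently_slashed_fraction: float) -> float:
    """Penalty scales with how much stake was slashed recently."""
    FAULT_THRESHOLD = 1 / 3
    severity = min(1.0, recently_slashed_fraction / FAULT_THRESHOLD)
    return stake * severity

print(slashing_penalty(32.0, 1 / 3))  # 32.0 -> coordinated attack, stake burned
print(slashing_penalty(32.0, 0.01))   # about 1 ETH for an isolated accident
```

This is the "slap on the wrist versus maximal punishment" distinction: the same misbehavior costs wildly different amounts depending on how many others did it at the same time.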
Yeah, talking about slashing, is there a chance to do something that's, yeah,
offensive to the level that you get slashed by accident?
So concretely, we have multiple client implementations.
So there might be the situation that one client says, well, that is a valid block,
and another client says that's not a valid block, and therefore is kind of ignoring it and
proposing another one?
We have seen a number of slashings on mainnet since December.
And almost all of these have been due to individuals and institutions creating sophisticated,
or attempted-to-be-sophisticated, redundancy setups.
So essentially, if you have your keys in one place and you're tracking the messages
that you've signed, it's very simple.
The logic can be, you know, six
lines of code. It's very simple to not double sign. Essentially, a couple of
conditionals and a very, very small database, and I can prevent myself from doing this. But if I have
my keys in two locations, say I'm trying to run client A and client B, and I'm trying to run
them on two machines to make sure that I don't have any downtime, then I'm going to be signing
double messages almost every epoch, and I'm going to almost certainly be slashed. And so this is actually
what we've seen is we've seen some hobbyists that didn't get the memo and are trying to make sure
they don't have any downtime. A couple of them have been slashed. But actually what we've seen more so
is these institutions who want to advertise the best uptime ever. And what they do is they have
like way too sophisticated of deploys and don't manage the keys properly and have the keys in two
different locations. That's the key. If you have your keys in two different locations and they both think
they're in charge and there, you know, there's no communication there, you're going to be
slashed because you're going to, like, eventually have a slightly different worldview, like,
one client might see the block, the other client might not see the block, and they both
will sign something different.
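The "couple of conditionals and a very small database" protection described above can be sketched like this. A toy model only; real clients implement fuller slashing-protection databases (see the EIP-3076 interchange format), and the payload here is a made-up placeholder:

```python
# Minimal sketch of local slashing protection: remember what you've signed,
# and refuse to sign a second, different attestation for the same epoch.
class SlashingProtection:
    def __init__(self):
        self.signed = {}  # epoch -> attestation payload already signed

    def safe_to_sign(self, epoch: int, payload: str) -> bool:
        """Allow signing only if we haven't signed something else this epoch."""
        if epoch in self.signed:
            return self.signed[epoch] == payload  # same message again is fine
        self.signed[epoch] = payload
        return True

db = SlashingProtection()
assert db.safe_to_sign(100, "head=A")      # first signature this epoch: OK
assert not db.safe_to_sign(100, "head=B")  # a second, different vote: refuse
```

The failure mode described in the interview is exactly two such databases running on two machines with no coordination: each one individually thinks it is safe to sign.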
So basically your advice is: don't do too complicated a setup.
Keep it simple.
Keep it simple.
Because if you're offline, even for a day, like, you're going to lose very, very little
money.
Because, again, we have rewards and penalties for normal operation.
If you're offline for a whole year, you might lose like 8%.
Whereas if you do too complex of a setup, you can lose much
more than that. So there are many client teams, and a number of them have
implemented this new feature called doppelganger detection. Superphiz came up with it, Superphiz
from the EthStaker community. The idea is, right when you turn on your client, and it knows
that there are some validator keys associated with it, it actually won't start its job immediately.
It'll wait like an epoch or two and just listen to the network.
And it knows that it's not signing anything and it's not broadcasting anything.
So if it sees any messages come in from itself, it's detected a doppelganger and it says, oh, no, abort, abort, don't sign any messages.
And this is actually a new feature that's rolled out in a number of clients.
And I think it's protecting especially the hobbyists with the simple setups.
Obviously, then you have like an epoch or two of extra downtime.
But like I said, downtime's not the issue.
It's really like double signing is like this severely worse activity.
And so I think you can like manually override.
You can do like a flag that's like dash-dash, capital letters, UNSAFE, disable
doppelganger detection.
But most users should just run with the defaults and have those protections.
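The doppelganger check just described boils down to: stay quiet for a couple of epochs, and abort if you hear yourself. A toy sketch, with made-up message fields standing in for real gossip messages:

```python
# Sketch of doppelganger detection: on startup, stay silent for a couple of
# epochs and watch the network. If a message signed by one of our own keys
# shows up while we haven't signed anything, another instance must be live.
def doppelganger_check(our_keys, observed_messages, quiet_epochs=2):
    """Return True (abort!) if any message observed during the quiet window
    was signed by one of our own keys."""
    seen = {msg["signer"] for msg in observed_messages
            if msg["epoch"] < quiet_epochs}
    return bool(seen & set(our_keys))

msgs = [{"signer": "key_1", "epoch": 0}, {"signer": "key_9", "epoch": 1}]
print(doppelganger_check({"key_1"}, msgs))  # True  -> abort, do not sign
print(doppelganger_check({"key_2"}, msgs))  # False -> safe to start duties
```

The trade-off is the one mentioned above: an epoch or two of extra downtime (cheap) in exchange for avoiding a double-sign slashing (expensive).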
So when you're looking for a flight, you go through a flight aggregator to see all the different places
where you can buy the flight, to get all the options and make sure you get the best
price for your travel plans.
And when you're making a DeFi swap, just do the same and use ParaSwap.
It beats the market prices across all the major dexes because it aggregates them and thanks
to their network of professional market makers, you get zero slippage on your trades.
So they just pushed a huge update that's even faster, more liquid thanks to a brand new
algorithm.
ParaSwap is now multi-chain and has expanded to Polygon and Binance Smart Chain.
So go and check it out, give ParaSwap a try at paraswap.io slash Epicenter.
You've referred to epochs repeatedly now.
So maybe let's talk about that because that's something we don't have on ETH1.
So what is an epoch and how does it relate to finality?
So an epoch is a collection of 32 slots.
Like I said, slots are every 12 seconds.
So an epoch is every 6.4 minutes.
Slots are this unit of time at which blocks and very granular actions can happen, whereas
an epoch is a collection of slots, and there's an aggregate of duties and different
accounting that happens at each epoch. And so every epoch is 32 slots. Every validator is
assigned to attest to exactly one slot per epoch. And so an epoch in advance, my validator client
gets notified that, okay, at the sixth slot into the next epoch, you're going to have to attest to
the head. So at that point, you'll sign a message and broadcast an attestation.
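The timing arithmetic here (12-second slots, 32 slots per epoch, 6.4-minute epochs) can be sketched as follows; the function names are illustrative, not from any client.

```python
# Beacon-chain time arithmetic, using the constants stated in the episode:
# 12-second slots, 32 slots per epoch, so a 6.4-minute epoch.
SECONDS_PER_SLOT = 12
SLOTS_PER_EPOCH = 32

def epoch_of_slot(slot: int) -> int:
    """Epoch that a given slot belongs to."""
    return slot // SLOTS_PER_EPOCH

def epoch_duration_seconds() -> int:
    return SECONDS_PER_SLOT * SLOTS_PER_EPOCH  # 384 s = 6.4 minutes

def slot_start_time(genesis_time: int, slot: int) -> int:
    """Unix time at which a slot begins, given the chain's genesis time."""
    return genesis_time + slot * SECONDS_PER_SLOT

print(epoch_duration_seconds())  # 384
print(epoch_of_slot(70))         # 2
```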
And similarly, like within an epoch, you can potentially be assigned to propose a block. And so the entire validator set has this one attestation per epoch, and thus the end of an epoch is kind of when accounting is done. So in the previous epoch, we go and look: okay, how many attestations were there? Was there agreement? Was there disagreement? What are the rewards and penalties based off of like the individual and the aggregate activity?
And is there enough consensus on the state of things to update the finality calculations?
An epoch can first be justified and then be finalized.
Justification is kind of like this first round of signaling for validators to say,
I think that this block will remain in the chain forever.
And then once something is justified, they can signal a deeper thought, which is,
okay, let's now say this block will remain in the chain forever.
And so what we have is this like two-epoch finality cycle, where at the end of epoch N, you might justify it, and at the end of epoch N plus one, you might finalize, optimally finalize, epoch N. And on mainnet, I think we see pretty much every single epoch just kind of go through that cycle of justification and finality, justification and finality.
And it's also at these epoch boundaries where a lot of like rewards accounting happens and various other things. Like you might also at that point update what the view of the ETH1 chain is, so that you can induct new deposits and different things.
So it's really these accounting boundaries. It's really these larger-than-block groupings of logical activity. And from a finality standpoint, you have all these little blocks, but you can think about these epochs as more of like larger packages. It's kind of like the meta chain on top of the little mini chain.
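The justify-then-finalize cycle described here can be sketched as a toy state machine. This is my own simplification, not actual client logic: an epoch boundary reached with at least two-thirds of validator weight attesting becomes justified, and a justified epoch followed by another justified epoch becomes finalized.

```python
# Toy model of the two-epoch finality cycle: epoch N gets justified at its
# boundary if >= 2/3 of validator weight attested, and is finalized once the
# following epoch also justifies. Simplified relative to the real spec.
from dataclasses import dataclass, field

@dataclass
class FinalityTracker:
    justified: set = field(default_factory=set)
    finalized: set = field(default_factory=set)

    def on_epoch_boundary(self, epoch: int, attesting_fraction: float) -> None:
        if attesting_fraction >= 2 / 3:
            self.justified.add(epoch)
            # If the previous epoch was already justified, it can now finalize.
            if epoch - 1 in self.justified:
                self.finalized.add(epoch - 1)

t = FinalityTracker()
t.on_epoch_boundary(10, 0.99)  # epoch 10 justified
t.on_epoch_boundary(11, 0.98)  # epoch 11 justified, so epoch 10 finalizes
print(sorted(t.finalized))     # [10]
```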
Does it mean the maximum kind of reorg that could happen is like one epoch or two epochs?
Or is that one way to think about it? So there's like increasing depth and probability of reorgs in the ETH2 beacon chain. At every slot, a validator first creates a block, and then the one-thirty-second of the validator set that is assigned to that slot will then send an attestation. That attestation immediately gives weight to people's fork choice and recursively gives weight to the chain prior. And so actually, because much of the validator set participates each slot, you end up with probabilistic guarantees, at increasing depth of slots, that the chain won't be reorged. So it feels a little bit like proof of work at the chain tip, where you can make probabilistic claims that there won't be a reorg unless X, Y, Z happens.
And under normal operation,
those probabilistic claims are actually very, very strong
because most of the validator set is participating
and sending signal at each slot.
Then at the depth of one epoch,
you can have justified.
So something 32 slots ago can be justified.
And this is, like I said, the first step in finality. With justification, you can make a much stronger claim that something justified would never be reorged, because it would require a very large amount of validators, call it at least one-third, likely more in the one-half or even two-thirds realm, to not run the protocol. They'd essentially have to run an altered version of the protocol where they stop listening to justifications. Because once something's justified, locally you say that's what I want to build on, and something won't be otherwise justified unless people change their local protocol.
Granted, that wouldn't necessarily lead to a slashing, but it becomes very, very unlikely.
Then at the depth of two epochs, so 64 slots or 12.8 minutes, then you can finalize.
And this means that locally, in my software, whatever is seen as finalized would never revert.
And you can make claims that no one will see a different version of finality unless a minimum threshold of validators are slashed. And that minimum threshold is one-third. Although a theoretical attack could happen at one-third,
it's extremely improbable that you could even conduct it at one-third and it'd probably be
much, much higher. And so you can make crypto-economic claims that this is finalized, this will remain
finalized, and nobody else will see a different version of finality unless a large amount of
capital is burned. And so we have, again, we have like the head of the
chain, we get to make probabilistic claims based off all these attestations that things won't be
reorged. Then at 32 slots, we get justification, which, you know, for almost any operation is enough depth and enough confirmation. And then at 64 slots, we get that finality, which is like that ultimate cryptoeconomic claim: it's not going to revert.
Yeah, that sounds to me, just very high level: on proof of work it's kind of totally normal, totally expected, that it happens multiple times a day that a single block or even two blocks get reverted. Here it sounds like even a single block or slot revert would be highly unlikely in normal operations.
Yeah, in normal operation, absolutely. So if I saw like 99% of the
slot attestations come in, I can be pretty well assured that this is not going to revert unless there's actually something malicious going on. If instead I saw 10% of that slot's attestations come in, then I wouldn't start locally making decisions necessarily, because I don't have like a high probabilistic guarantee. But in normal
operation, we do see almost all validators assigned to each slot, attesting each slot. And
thus we do get that increasing probabilistic guarantee that things won't be reverted.
Yeah. So in normal operation it's probably as good as proof-of-work confirmations, and then with increasing depth you get increasing kind of cryptoeconomic guarantees of non-reversion.
But on a very fundamental level, this is still very different from ETH-1, right?
Because on ETH-1, one of the general design decisions is of availability over consistency.
So basically, the chain can never halt, but this comes at the cost that there might be reorgs, right?
And then you have the converse with BFT-style consensus algorithms: basically, they have finality, but in principle the chain can halt, very much like on Tendermint.
It seems like ETH2 does something... A weird hybrid.
Yeah, it has this weird hybrid or this middle ground that I didn't even know existed before.
And so what are the tradeoffs of having like this hybrid model?
Right. So like proof of work consensus, the ETH2 protocol is fundamentally liveness favoring,
meaning that the chain can be built, even if you don't have these like BFT thresholds of validators.
And this is to provide kind of the uptime and availability of the network that blockchain users, and I think a global decentralized network, expect.
At that point, it's ultimately up to local node operators to make decisions about what is accepted.
Like if I'm an exchange, I might only ever operate under finality. But if I'm like sending NFTs to my buddy and we don't have finality, I know that this operation will clear, and it's not a big deal.
So what we get here is really a live chain with a BFT consensus kind of following along behind it.
And so from the perspective of the designers of ETH2, that's, at least for the guarantees expected of blockchain systems, a really nice compromise. And the idea ultimately is that a safety-favoring chain cannot simulate a liveness-favoring chain, whereas a liveness-favoring chain can simulate a safety-favoring chain. And thus, the latter is ultimately like a more powerful
construction because it gives more optionality to users on how to interpret the worldview at any
given time. There's much debate on this point, as to what the quality of service can or should be, and whether it's really worth having these like live chains without finality. But again, the key point for me is really: sure, if there's not finality, I don't have to finalize anything. But I do provide optionality to users and systems based off of like probabilistic guarantees and different things in a liveness-favoring system.
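The local-policy idea here, where an exchange waits for finality while a casual user accepts a few slots of probabilistic depth over the same liveness-favoring chain, can be sketched as follows. The names and thresholds are illustrative, not from any client.

```python
# Sketch of "optionality for users": different local confirmation policies
# over the same chain. Names and thresholds are invented for illustration.
def accepted(tx_slot: int, head_slot: int, finalized_slot: int, policy: str) -> bool:
    if policy == "exchange":  # only act on finalized data
        return tx_slot <= finalized_slot
    if policy == "casual":    # a few slots of probabilistic depth is fine
        return head_slot - tx_slot >= 3
    raise ValueError(policy)

# The same transaction, two different local decisions:
print(accepted(tx_slot=100, head_slot=105, finalized_slot=64, policy="casual"))    # True
print(accepted(tx_slot=100, head_slot=105, finalized_slot=64, policy="exchange"))  # False
```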
Super interesting.
So maybe let's talk about the merge for a little bit.
There will be the merge, and the merge will merge ETH1 into the beacon chain.
So how exactly does that happen?
When is it going to happen?
I imagine ETH1 and ETH2 have separate states.
How is that handled?
How do you kind of make them congruent?
So let's think about what ETH1 is.
ETH1 is a lot of things, and there's a lot of different ways to think about it. But for the purposes of the
merge, we can think about it in two layers. We have this application layer or this execution
layer where all of the users hang out. It's where all the applications are. It's where the
user layer state is. It's where transactions are being processed and all that. It's really like
what I as an end user care about. I care about, you know, my uniswap trades and that kind of stuff.
And then you have this thin proof-of-work consensus module that's driving it. It's really providing the service to this execution layer. It's the cradle for blocks. It's providing guarantees about reorgs and different things like that. And what we have is really these two layers: we have the proof-of-work consensus layer providing services to the application layer and to users. And then what we've bootstrapped in production today is the beacon chain, which is a proof-of-stake consensus.
And the idea really here is for at one point in time,
the proof of work module to be driving that application layer,
and at the next point in time, post-merge,
that proof-of-stake module, that beacon chain,
to be driving the same execution layer,
the same application layer.
And so the transactions,
essentially the cradle of Ethereum,
right now is these like proof-of-work blocks
and that proof-of-work consensus,
and post-merge,
the cradle of Ethereum, the thing holding it all together, is ultimately the beacon chain
and the proof-of-stake consensus. And you can imagine that essentially the payload for the execution layer is moving locations upon some condition. And so people are running software that knows: prior to this time or prior to this block height, I'm listening to the miners, and after this time, I'm listening to the proof-of-stake validators. And there's like a number
of little details to work through on the actual point of merge and how you maybe handle attacks on
this boundary and reorgs on this boundary. But the simple case is essentially you have a chain being
built by proof of work at one point in time. And that same chain, that same payload of the execution layer that end users care about, is then being built by these validators.
And the nice thing is conceptually these layers are important from like a mechanism design
point, but they actually translate into really nice, like, software reuse. And so we have what we call
ETH2 clients, which are these beacon chain clients. They've built a highly sophisticated proof-of-stake consensus mechanism. And then we have, like, what is an ETH1 client? An ETH1 client really is a highly sophisticated execution layer. It's a highly sophisticated EVM, transaction processing, you know, mempool management and all that kind of stuff, plus this thin proof-of-work module that literally hasn't been touched since day zero. It's a relatively simple
mechanism from the software perspective, and it hasn't been touched. And so what the software after the merge looks like is really taking an ETH1 client, which is primarily a highly sophisticated execution layer, cutting out that proof-of-work module, which was the driver of that execution layer, and instead of listening to that proof-of-work module, listening to an ETH2 client. And so the software post-merge actually looks like: you take these ETH2 clients, which are highly sophisticated proof-of-stake consensus mechanisms, and you take an ETH1 client, which is a highly sophisticated execution layer, and you smash them together, and you have the proof-of-stake client driving that execution layer, asking questions about the execution layer. So, for example, instead of the proof-of-work module saying, hey, give me a valuable transaction bundle to include in this block, the beacon chain client is instead saying, hey, Geth, hey, Nethermind, hey, OpenEthereum, my local adjunct execution engine, give me the valuable payload. Hey, process this payload, and that kind of stuff.
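The division of labor described here, a consensus client asking a local execution engine for payloads and to process them, can be sketched roughly as an interface. The method names below are illustrative, not the actual Engine API.

```python
# Sketch of the post-merge split: the beacon (consensus) client drives a
# local execution engine (Geth, Nethermind, OpenEthereum, ...). Method names
# here are invented for illustration, not the real Engine API.
from abc import ABC, abstractmethod

class ExecutionEngine(ABC):
    @abstractmethod
    def build_payload(self, parent_hash: bytes) -> dict:
        """Assemble a valuable transaction bundle on top of parent_hash."""

    @abstractmethod
    def process_payload(self, payload: dict) -> bool:
        """Execute the payload and report whether it is valid."""

class BeaconClient:
    def __init__(self, engine: ExecutionEngine):
        self.engine = engine  # the adjunct execution layer

    def propose_block(self, parent_hash: bytes) -> dict:
        # Consensus layer drives; execution layer supplies the payload.
        payload = self.engine.build_payload(parent_hash)
        return {"parent": parent_hash, "execution_payload": payload}

    def on_block(self, block: dict) -> bool:
        # On receiving a block, ask the execution layer to validate its payload.
        return self.engine.process_payload(block["execution_payload"])
```

The design point is that the ETH1 client's execution machinery is reused wholesale; only the thin consensus driver is swapped out.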
You mentioned, so there's a state before and there's a state after. Really, there's a beacon state, which is like the system-layer state of this proof-of-stake consensus mechanism.
And then there's the application layer state that exists in these, in these like proof of work blocks today.
And this consensus mechanism is really good at coming to consensus on things. And so it's really just slotting that in, in its state transition.
And in its state, it's embedding the execution layer state of Ethereum into that.
And so if you think about it as like a tree of all the things embedded in the beacon chain's outer-layer state that is built and finalized, you essentially have the application layer of Ethereum embedded into it as a subcomponent of its state. And so that application layer state right now exists in like the
proof of work land. And it's really just taking that application layer state and subsuming it into the
beacon state, which when you finalize the outer state root of the beacon state, you finalize
everything within it. And so if within it is the application layer state that's been agreed on, then you get, you know, these finality properties and the other properties that proof of stake is giving itself.
So you're kind of carrying over the state, but I mean, in principle, the miners can continue with the original chain, right? So basically this is kind of a natural break point for a fork.
Yeah, I mean, if anybody wants to run proof-of-work Ethereum, the way blockchain governance ultimately works is that anyone can continue to run it. There's a couple of things that I think might make it not super viable.
The Ethereum community has consistently since Genesis put this thing in called the difficulty
bomb.
The difficulty bomb was intended, from the beginning, to allow for a cleaner shift to proof of stake.
Ultimately, the mining difficulty at these points of the difficulty bomb increases exponentially, so that it becomes non-viable to mine that proof-of-work chain unless you actually hard fork. So that might dissuade a proof-of-work fork here. But another interesting point is that in the last contentious Ethereum hard fork, which created Ethereum Classic, there wasn't a lot going on in the application layer. There really was this
DAO thing and then a bunch of little experiments. And so the application layers could kind of fork and exist in parallel; there weren't any of these big dependencies and stuff. Whereas I would posit that if Ethereum forked today and you had a
majority community stake in one and then a minority community stake in the other, that the
application layer on the minority one is likely going to implode. There's a lot of like interdependencies
and a lot of value and stuff here. For example, oracles may or may not be run on the minority fork.
And even if they are, you have systems like Maker, where if the value of the ETH on one side or the other drops significantly, you have mass liquidations. Then DAI is integrated all into DeFi and you have rippling effects.
All the backed assets like WBTC, Tether and so on.
Yeah, I also think the support for proof of stake in the Ethereum community is so overwhelming that I really don't think there will be any debate or any question as to which is the real chain.
Definitely from a community perspective, I'm like 99.99% are just like, let's do this.
We've been wanting to do this for years.
Can we just do this?
Come on.
Whereas you could theoretically run a proof of work fork, but I think that it will quickly
become a wasteland.
You said earlier that proof of stake has two main reasons behind it.
So one is the environmental sustainability.
And I think that's pretty straightforward.
why that is the case. And the other one is the security. So let's talk about the security aspect.
Why is proof of stake good for security purposes? So proof of work and proof of stake are fundamentally cryptoeconomic consensus mechanisms, meaning that they have certain properties as long as no entity controls certain thresholds of the value securing the network. And so they're pretty similar in that sense
and have some similar properties because of it. But I think the crux of proof of stake having higher security is really that the asset securing the network is actually in the protocol. And so you can not only reward that asset, you can penalize that asset. And this, especially in the failure modes, leads to a much more secure system. So let's think about what happens if a proof-of-work chain is 51% attacked. If a proof-of-work chain is 51% attacked, that's kind of it. Like, you have a party that has 51% of the assets and they can
reorg and do whatever they want. The only recourse here is forking the protocol so that you have a
new mining algorithm, at which point you've bricked all the good guys' hardware as well. You can think about
it as there's a budget for the attack. The budget for the attack was securing 51% of the assets,
but then there ultimately really wasn't a cost. And so once I secured the budget,
I'm now just God. Whereas in proof of stake, because those assets are in the protocol,
there is a budget, to secure 51% of those assets, but then there's ultimately a cost as well,
because if you do commit a network fault, then those assets are slashed. So I can do the attack once, and then I've lost all my money and have to accumulate all those assets again before I can do the attack again. Whereas in proof of work, I just entered God mode and could reorg over and over again. And in the extreme, where you, say, maybe hit like a two-thirds
threshold and you're some sort of like censoring majority or cartel, this can also be detected
socially. So I can identify this cartel, this censoring majority. And in the extreme,
there can even be social recovery. So these assets could be forked out of the protocol and
burned. Whereas in proof of work, you don't have this nuance and the only recourse was ultimately
forking the good guys and the bad guys out. That's definitely a failure mode you don't want to run into. But the fact that that recovery does exist, I think, would ultimately dissuade even those extreme types of attacks.
Yeah, I mean, this is a question about
what is more secure. I think the debate was going on for years. And I think, I really think
it's settled now. I really think proof of stake is clearly the answer. One thing where it's way less clear to me how it will play out is the question of centralization.
Because originally I used to bring up the argument that proof of stake would lead to less centralization
than proof of work because in proof of work you have the economy of scale.
So you have, if you spend, I don't know, 10 million on mining, you will get at the end
better return for your last dollar than someone who just spends $1,000 on mining.
While with proof of stake, arguably, that is less the case, and each validator, even if you just have like one validator, will probably have the same rewards.
That being said,
there are significant arguments,
I would say also for centralization in proof of stake,
and that is likely the idea of staking derivatives.
And I mean, to some extent, that's already what we are seeing. Of course, specifically now, you cannot get your ETH out immediately. So it's quite a commitment to do it manually, to do it yourself and stake your 32 ETH. You could just go to an exchange and they will offer you a nice service. They will say, okay, well, if you want to get out, we find a seller for you, we create a staking derivative or whatever we call these.
Yeah, and we have a market for that.
And of course, to someone, to an individual, I mean, that's a big, big plus.
But that comes at the cost of, well, it threatens the decentralization.
And maybe you know better than me, the current statistics, but exchanges play already a big role, right?
Right.
I mean, between exchanges and staking pools, it's probably something like 50, 55 percent of staked
assets that we can tell today. There is definitely a strong hobbyist community.
And you're right, like before the merge, there's this unknown lockup.
And so liquidity is certainly a question and certainly a driving factor for people to move to
these, these other types of institutions. Is there anything we can do about it?
So there are some things that are done today. So one thing I mentioned earlier, this is,
this is kind of a minor point, but the fact that you can participate with a very small
percentage of the network and still have regular rewards and payouts, that's like a nice
little thing. And that helps the hobbyist community. One thing that is designed into proof
of stake systems, which is not in proof-of-work systems, is these like discorrelation incentives. And so if you are with a majority, if you're with a very large staking institution and they go offline, the amount that you lose is much higher than if you were going offline in isolation or even in a smaller pool. Even worse is if you're with one of
these large institutions and they have some sort of security breach or some sort of bug or issue that causes them to be slashed. Then again, the amount that you're slashed scales with the
percentage of the network that was recently slashed. And so if you're with like a 30% staking pool
or 30% staking provider exchange that has a major slashing event, say somebody internal,
just wants to watch the world burn or somebody hacks the system or they're running just buggy software
or trying to be clever with their redundancy, you know, you would lose quite a bit of capital.
And so there are these disincentives to being with the large institutions.
This might drive you to be a hobbyist and stake at home, but this also might drive you to be,
you know, with a smaller pool.
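The discorrelation incentive can be illustrated with a toy formula: your slashing penalty scales with the fraction of total stake slashed around the same time, capped at your full balance. The constants here are illustrative and simplified relative to the actual beacon chain accounting.

```python
# Toy illustration of correlated slashing: the penalty scales with the
# fraction of the network slashed in the same window. The multiplier and
# the accounting window are simplified, not the exact spec values.
def correlation_penalty(balance: float, fraction_recently_slashed: float,
                        multiplier: float = 3.0) -> float:
    # Capped at the full balance: you cannot lose more than you staked.
    return min(balance, balance * multiplier * fraction_recently_slashed)

# A lone buggy hobbyist (0.1% of the network slashed) loses little...
print(correlation_penalty(32.0, 0.001))  # 0.096
# ...but the same bug inside a 30% staking provider is catastrophic.
print(correlation_penalty(32.0, 0.30))   # 28.8
```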
And so I think even if we end up with a highly pooled landscape, which we certainly are beginning
to see at least in the 50%
range, there's still these incentives to not be with the mega players. And so whereas in mining,
I think we have like two or three pools that, if you add them together, you're at 51%. The only thing that keeps those pools from being larger than that is that it's not socially tenable. You know, people don't want to join a pool that's too big, whereas there's actually a disincentive to joining pools that are that large in the staking landscape. It is unfortunate
that those disincentives are in the tail-risk scenarios, and I think humans are pretty bad at really assessing tail risk. And so I think we might see waves of centralization and then decentralization
as some of these events happen. So if something major happened to a major exchange, all of a sudden
we might see everyone like scatter their stake and redistribute. But we shall see. Staking derivatives are
certainly another interesting thing. I think we've seen there's an entity that has something like 7% of staked assets right now.
And I think that number has increased quite a bit in the past few months.
And that's primarily because they offer, I won't use the word decentralized,
but they offer an on-chain staking derivative.
And, you know, they have an interesting mechanism where they're kind of like this tokenized middleware that then distributes to various pools. And so although it represents 7% of the stake, it's actually distributed to like 10 pools underneath the hood, and then you have these staking derivatives that represent kind of the shared risk across these pools.
This I see as a competitor to exchanges rather than a competitor to hobbyists. Like if I'm a hobbyist, I'll probably stay a hobbyist, whereas if, you know, I was going to go to Coinbase, I might instead go with this like decentralized option.
I hope that we see a number of these, and a number of staking institutions, so we see a lot of competition. I am probably more optimistic than you: although we might see a highly pooled and highly institutionalized space, I think we're going to see much better properties than we've seen with mining pools.
I mean, one is that every exchange wants to get in on the action. So every exchange is probably going to create a product, whereas with mining pools we saw very few entities that ultimately became these very large pools.
Yeah, one argument you brought that is definitely very strong is just this continuous payout.
I mean, that was definitely a driving factor for mining pools in, well, in proof of work.
How is that affected after the merge?
Because then, well, I guess we are talking about transaction fees.
Do they go then to the block producer, or at least probably MEV will go to the block producer?
So how do you think that will affect things?
The value of the payload, of the application-layer payload, so the transactions that we know and love, goes to the block producer.
They're the sole entity responsible for bundling and finding that value.
And so they're the ones that are paid out.
This, especially in the ever-evolving landscape of MEV, is definitely a huge point of research and a huge point of discussion.
And ultimately, I think it comes down to the democratization of MEV, meaning who has access to it, and how much of an edge institutional players have over hobbyists.
And ultimately, I think it's very important for, one, for application layer contracts to design their systems so they don't have highly exploitable MEV.
And two, for open tools and open access to MEV to be created, so that hobbyists aren't at a huge disadvantage to institutions.
And three, even investigation of layer-one protocol techniques for MEV minimization. For example, 1559 ultimately does put a bound on, or does reduce, the amount of MEV available, essentially because of that in-protocol burn.
This is a very exciting area of research, and potentially there are other types of techniques that may make their way into the protocol over time.
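The 1559 point can be made concrete with a toy calculation: because the base fee is burned in protocol, the block producer only captures the tips, which shrinks the fee value available to extract. The gas and fee numbers below are purely illustrative.

```python
# Toy illustration of the EIP-1559 burn: the base-fee portion of each
# transaction's fee is destroyed, not paid to the block producer, so the
# producer only collects tips. All numbers are illustrative (fees in ETH/gas).
def producer_tips(gas_used: int, tip_per_gas: float) -> float:
    """Fee revenue the block producer actually receives."""
    return gas_used * tip_per_gas

def burned_base_fee(gas_used: int, base_fee_per_gas: float) -> float:
    """Fee value destroyed by the in-protocol burn."""
    return gas_used * base_fee_per_gas

print(producer_tips(15_000_000, 2e-9))      # 0.03 ETH to the producer
print(burned_base_fee(15_000_000, 100e-9))  # 1.5 ETH burned
```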
There's also a security component here. The classic Bitcoin issuance model is arguably not sustainable: as the issuance goes towards zero and transaction fees become the dominant component of the block reward, the mine-on-the-head component of the protocol becomes game-theoretically unstable. If I have, you know, 20% of the network and I see these huge transaction fees, it might actually behoove me to attempt to reorg the head, even though I only have a small probability of doing so, to try to get those fees.
And with the rise of MEV and that ratio of block value to block issuance going up, we definitely see some of those similar weird incentives pop up on the layer-one security side. And so that's definitely a huge component of research. I wouldn't say quite a huge worry, but there's a lot of people thinking about this.
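The fee-sniping instability can be framed as a toy expected-value comparison: when fees dominate issuance, even a small chance of stealing the head block's fees can outweigh the reward for honestly extending it. The numbers and the function are my own illustration.

```python
# Toy expected-value model of fee sniping: attempt a head reorg when the
# expected captured fees exceed the honest block reward. Illustrative only.
def reorg_is_tempting(p_success: float, head_fees: float,
                      honest_reward: float) -> bool:
    return p_success * head_fees > honest_reward

# Issuance-dominated blocks: honest extension wins.
print(reorg_is_tempting(0.2, head_fees=1.0, honest_reward=2.0))   # False
# Fee-dominated blocks: a 20% shot at huge fees beats the honest reward.
print(reorg_is_tempting(0.2, head_fees=50.0, honest_reward=2.0))  # True
```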
Yeah.
That's a new and interesting insight, at least for me, that reducing MEV might be a factor for decentralization.
Yeah, absolutely.
And so it is certainly a concern.
I think a world in which, you know, hobbyists couldn't access MEV and you had major institutions pouring hundreds of millions of dollars into optimizing MEV, that's definitely not a good outcome for decentralization. And so there's kind of a huge parallel track of research right now on democratization and MEV minimization that I think is going to be critical to the security of the system in the future.
Yeah, I mean, even if it's completely democratized and everyone would get the same, you would still have the variance issue. You do have those consistent payouts, which maybe keep your machines running, keep your lights on, but you would also see these larger, irregular payouts.
It definitely changes the optimum of having these very clean, consistent revenue streams. It changes the calculation, certainly.
So the other thing that Ethereum 2 is going to bring,
albeit not immediately,
is scalability with sharding.
So for some reason this is lumped together with the proof-of-stake transition, but it won't go live until next year, I'm guessing, at the very earliest.
Is that correct?
Later, for sure.
So, yeah, ETH2, I often say, is to increase security, sustainability, and scalability,
while retaining decentralization.
And this is so that, you know, Ethereum can provide a secure and sustainable
environment for the world's decentralized applications.
At the crux of sharding, at the crux of this consensus mechanism being able to provide more scale, is really sophisticated mechanism design. And it turns out that it's much simpler to design these mechanisms when you have a participant set, when you have a validator set, when you have like the consensus entities at hand to essentially orchestrate and dictate through mechanism design. With proof of work, there's no notion of a validator set at any given time. There's no notion of a consensus set. And so sharding, which relies heavily on randomly sampling validators and consensus entities to validate subsections of the system at any given time, really needs to know this set. You could imagine there are like some proof-of-work
sharding designs, but they ultimately end up trying to mimic a proof of stake sharding design
in that there might be some sort of like election into a set, like you have to mine for a certain
amount of time and do a certain amount of blocks and all of a sudden you're in the set.
And then you can be sampled.
And it's much cleaner and much simpler to just have a validator set known.
And with proof of stake, we get that out of the box.
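The "known validator set" advantage can be sketched concretely: with the set in hand, you can randomly sample committees to validate each shard. This is a simplification with invented names; real committee shuffling uses an in-protocol randomness beacon, not Python's `random` module.

```python
# Sketch of random committee sampling over a known validator set: shuffle the
# set with a seed, then deal it round-robin into one committee per shard.
# Simplified; the protocol uses in-protocol randomness, not random.Random.
import random

def sample_committees(validators: list, num_shards: int, seed: int) -> list:
    rng = random.Random(seed)   # stand-in for protocol randomness
    shuffled = validators[:]
    rng.shuffle(shuffled)
    # Every validator lands in exactly one committee.
    return [shuffled[i::num_shards] for i in range(num_shards)]

committees = sample_committees(list(range(1000)), num_shards=4, seed=42)
print([len(c) for c in committees])  # [250, 250, 250, 250]
```

With proof of work there is no such set to sample from, which is why PoW sharding designs end up reinventing an election into a set first.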
And so really, the foundation is: get a proof-of-stake consensus mechanism out there, and then come to consensus on valuable things. Valuable things are the execution layer of Ethereum today, the application layer. And also more things, like sharded data. So take this consensus mechanism and utilize it for the value of Ethereum. And there's really prioritization here between
these two major upgrades, the merge and sharding. The question was, which should come first.
And the really nice thing is that there are tons of scaling efforts, layer two scaling efforts,
going on in parallel to all this layer one development. And so coming online today, and increasingly
so, are these rollups, which scale with the amount of layer one data and aim to provide
10 to 100x scaling for Ethereum without moving to the sharding designs.
And so what we get to do is do the merge, which gets rid of proof of work, which makes the system
more secure and more sustainable. Simultaneously, we get all these layer two scaling solutions,
which make the system more scalable. And then further down the line, we bring on sharding,
which would complement these layer two scaling solutions by
providing more layer one data to get even more scale.
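As a rough back-of-envelope sketch of "rollups scale with the amount of layer one data": if a rollup posts compressed transactions as L1 data, its throughput is bounded by how many compressed transactions fit in a block. All numbers below are illustrative assumptions, not protocol constants:

```python
def rollup_tps(l1_data_bytes_per_block, block_time_s, bytes_per_rollup_tx):
    """Back-of-envelope throughput bound for a rollup that posts
    compressed transaction data on layer 1. More L1 data per block
    (e.g. via sharding) directly raises this bound."""
    txs_per_block = l1_data_bytes_per_block // bytes_per_rollup_tx
    return txs_per_block / block_time_s

# Illustrative assumptions: ~120 kB of usable data per block, 13 s blocks,
# ~12 bytes per compressed rollup transaction.
print(rollup_tps(120_000, 13, 12))
```

The point of the sketch is the linear relationship: doubling the layer one data per block doubles the rollup throughput bound, which is why sharding complements rollups rather than competing with them.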
So essentially all these rollups are buying us time.
And so we get to work on the other two components through the merge, and then prioritize
sharding.
So essentially, if we did sharding today instead of the merge, we would get
all these layer twos and all this scale from sharding, but we wouldn't get the
increased decentralization and increased sustainability.
So instead we get to tackle them all at once at these different layers,
and then enhance it further down the line.
From your perspective, is it an option on the table that it will stay at just proof of stake?
Basically saying, well, we do the transition to proof of stake, but after that,
L2s will do the job, and that's it?
I don't think so, but I mean, I don't get to decide.
You could imagine some movement within the community being like, this is enough.
Stop messing with it.
And, you know, I think ultimately, with 10 to 100x scaling from rollups, that would give us something.
I think that Ethereum would be a very valuable and powerful tool.
But I don't think that it would give us the scale that much of the community has imagined throughout the years.
And so what we do have going on right now is there are sharding specs up in the spec repo.
There are people working on R&D and working on prototypes.
I think even within the next few weeks, we might see a very, very
small sharding devnet, which builds upon the beacon chain and the merge.
But ultimately, from an engineering perspective, one had to be prioritized
over the other, or they both would just take longer. And so yes, there's a lot of unknowns. There's
always a lot of unknowns, and the roadmap and the technical specifications have often been
in flux. So you could imagine that in 12 months something has radically changed. But I do think
that come the end of the year, with the hardening of the sharding specs and, increasingly,
some testnets from the people that are sprinting on prototyping, we'll have
a good idea as to how this slots into the next 12 to 24 month roadmap. Obviously,
there's this thing we call Eth2, which is all the stuff we've talked about so far. But there's also
tons of other shit people always want to do and prioritize. So it's a constant kind of debate and
discussion. Other things you might have heard of, like state management and state sustainability
through statelessness and other techniques, which help reduce the load of running a full
node, which help with clients, all that kind of stuff. So there's a lot of different parallel
R&D efforts. I think I can say very confidently the next major thing to go to mainnet will be
the merge, but past that, it's a very active discussion and debate.
Yeah, I'd love to do another episode sometime about
sharding and the other efforts, just because, you know, having them on the side like this
doesn't do them justice.
But I do actually have a couple more very tangible questions on proof of stake.
So who will get to determine the block gas limit after the merge?
So the execution layer, at the beginning of the merge, is untouched.
Essentially everything dictating that realm
is ported, and instead of the proof of work miners producing blocks and sending signals on that,
it's the proof of stake validators.
And so in the construction of a block that has this execution layer payload, similarly,
the block gas limit can be dictated by a validator rather than by a proof of work miner.
And so that knob still exists.
The block producer can still turn that knob.
The block producer just happens to be a validator now instead of a miner.
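The "knob" works the same way before and after the merge: each block producer may nudge the gas limit a bounded amount relative to the parent block. A minimal sketch, assuming the mainnet rule that each step must be strictly less than parent_limit // 1024 (the helper name is mine):

```python
def vote_gas_limit(parent_limit, desired_limit):
    """Clamp a block producer's desired gas limit to the protocol rule:
    each block may move the limit by strictly less than parent // 1024.
    Pre-merge a miner turns this knob; post-merge a validator does."""
    max_step = parent_limit // 1024 - 1  # strict inequality in the rule
    low = parent_limit - max_step
    high = parent_limit + max_step
    return max(low, min(high, desired_limit))
```

So no single producer can jump the limit; moving it meaningfully requires many consecutive producers voting in the same direction, which is unchanged by who the producer happens to be.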
Similarly, the 1559 fee mechanics, which are expected to go live in the next couple of months: that mechanic at its core is in relation to the block producer, right?
And the block producer post-merge happens to be a validator rather than a miner.
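At its core, the 1559 mechanic the block producer interacts with is a per-block base fee update: the fee moves toward equilibrium by at most 1/8 per block, proportional to how far gas usage was from target. A simplified sketch of the spec's integer math (edge cases like the minimum one-wei upward increment are elided):

```python
def next_base_fee(base_fee, gas_used, gas_target):
    """EIP-1559 style update: a full block (2x target) raises the base
    fee ~12.5%; an empty block lowers it ~12.5%; at target it is flat.
    Simplified -- the real spec handles rounding edge cases separately."""
    if gas_used == gas_target:
        return base_fee
    delta = base_fee * abs(gas_used - gas_target) // gas_target // 8
    return base_fee + delta if gas_used > gas_target else base_fee - delta
```

Because the update depends only on how full the produced block was, it makes no difference to the mechanism whether the producer was chosen by proof of work or proof of stake.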
On this topic, I really have a very high-level question on how we see, well, how you see Ethereum,
and specifically this question of what's the purpose of Ethereum, and is there potentially even a conflict of interest.
So what I'm talking about: one way to look at Ethereum very critically right now is to look at the high fees and see users and applications currently paying enormous fees.
So one very negative way to look at it would be to say, well, they got lured onto this chain.
They kind of started there assuming, well, everything would be cheap and free.
And currently they're having the maximum rent extracted from them through those fees.
And I mean, the narrative is getting stronger around ultrasound money, or kind of making a lot of money in a way with Ethereum.
So to what extent, or maybe, what's your comment on that?
Or is there a chance to come to lower fees, or maybe even
the question, what is L1?
Yeah.
I mean, I don't claim to be an ultrasound moneyist.
I do think that the foundational asset of these systems must be
valuable, because these are cryptoeconomic systems and they rely on cryptoeconomic consensus
models, which are generally more secure when that foundational asset does have value.
And so I get the push there.
I mean, there are probably two pushes there.
Some people are trying to make a lot of money,
but ultimately a secure ether, a highly valuable ether, does provide good properties to the system.
But that aside, the crux here is ultimately capacity, which is supply.
And demand has, you know, outstripped supply quite a bit.
And people really want to use the system because of the network effects, because all the shit is there.
Like you said, they've been roped into the system. That's where all the action is happening.
And there's also an argument, maybe against kind of the tail risk thing, that
the system is much more secure than other systems by virtue of everything happening there, by
the network effects, and by the value of the foundational asset. Ultimately, we need to figure out how
to leverage this secure network for more capacity. And there are a couple of techniques there
we talked about earlier. But rollups kind of go into this realm of: can we use this as a
settlement layer for a parallel system, and leverage it, and get the same security out of it?
It turns out that you can. And it has been quite a journey to construct systems like
that. I think plasma was this promising thing, and we kept running into the issue that
ultimately there's a data availability problem, and there are all these kind of ransom attacks
and different issues that arise there.
And rollups solve that, and they're extremely promising for using Ethereum more as
a settlement layer, leveraging its security to secure much more activity,
which is great.
And then ultimately there's sharding: these techniques of randomly sampling consensus participants
in a way that does not degrade security, but can come to consensus on more.
Ultimately, what is L1? I mean, L1 is a security model on a certain amount of capacity,
with a spectrum of decentralization. I think the Ethereum
community, the Ethereum ethos, holds that decentralization is a very critical
component of the value of these systems and of their value to the world. And thus all of those
design decisions, the sharding, the how to get more capacity, are ultimately
unwilling to really sacrifice on decentralization.
And thus,
it's taking time and it's difficult.
And I think one of the reasons that a lot of applications and things are here
is because of that ethos,
because of that philosophy,
because of the value that we think decentralization is going to bring the world.
But we see a lot of other competing systems that I think make different design decisions,
especially on that decentralization spectrum,
to provide much more capacity.
And those systems may or may not have their place in the future,
but I think Ethereum is attempting to build a certain type of future.
And that's really the guiding light.
And I think really ultimately the value proposition.
Cool.
Thank you.
That was super fascinating.
And I'm afraid we kind of have to wrap because of time, not because we've run out
of things to talk about.
Danny, it's been an absolute pleasure.
Where can we direct people to learn more about Eth2, or even join the effort?
So there's this fantastic place called the ETH R&D Discord.
It originally was kind of an Eth2 effort.
And then the all core devs, the Ethereum 1 effort, all merged in.
And it's just this fantastic, fantastic place to get involved in anything L1 Ethereum R&D.
So check that out.
You can find links on like the spec repo.
I think you can probably find it pretty easily.
There's Eth Research, which is a more asynchronous forum to discuss
research ideas. There's the Eth2 specs repo, where a lot of the magic happens on getting the
specifications for this major upgrade in place. And then there's the dumpster fire of crypto
Twitter, where you can at least find access points into all these different things.
We'll link to all of these in the show notes. So thank you so much, Danny.
Yeah, absolutely. I really appreciate you having me. It was a fun and fascinating conversation.
Thank you for joining us on this week's episode.
We release new episodes every week.
You can find and subscribe to the show on iTunes, Spotify, YouTube, SoundCloud, or wherever
you listen to podcasts.
And if you have a Google Home or Alexa device, you can tell it to listen to the latest
episode of the Epicenter podcast.
Go to epicenter.tv/subscribe for a full list of places where you can watch and listen.
And while you're there, be sure to sign up for the newsletter, so you get new episodes
in your inbox as they're released.
If you want to interact with us,
guests or other podcast listeners,
you can follow us on Twitter.
And please leave us a review on iTunes.
It helps people find the show,
and we're always happy to read them.
So thanks so much,
and we look forward to being back next week.
