Epicenter - Learn about Crypto, Blockchain, Ethereum, Bitcoin and Distributed Technologies - Alexey Akhunov: Ethereum 1.x – BUIDLing Things One Step at a Time
Episode Date: February 13, 2019

We're joined by Alexey Akhunov, an independent Ethereum researcher. Alexey has been working on an ambitious project called TurboGeth. As the name implies, it is a version of Geth which features a number of speed and performance optimizations. Alexey also leads the state rent working group of the Ethereum 1.x project. Ethereum 1.x came out of Devcon when core developers began to realize that the full migration to Serenity would likely take several years. The team hopes to bring progressive improvements to Ethereum in parallel to the development of Serenity.

Topics covered in this episode:
Alexey's background as a computer scientist
The story behind TurboGeth and how it differs from the original Geth client
The speed and performance optimizations of TurboGeth, as well as its trade-offs
What is Ethereum 1.x in the context of Ethereum 2.0 (Serenity)
Which people and projects are part of Ethereum 1.x
What is state rent and why it may be beneficial to Ethereum
Implementing state rent in Ethereum 1.x and 2.0
eWASM and how it can be introduced in Ethereum 1.x and 2.0
The future of Ethereum and the progress towards Serenity

Episode links:
Alexey Akhunov on Medium
TurboGeth talk at Devcon4
Ethereum State rent for Eth 1.x pre-EIP
Ethereum state rent - rough proposal
State Rent proposal version 2 (rushed)
Ethereum state rent proposal 2
ETH Roadmap AMA
EthCC
EthCC Epicenter Meetup

Thank you to our sponsors for their support: Deploy enterprise-ready consortium blockchain networks that scale in just a few clicks. More at aka.ms/epicenter.

This episode is hosted by Friederike Ernst and Sébastien Couture. Show notes and listening options: epicenter.tv/274
Transcript
Discussion (0)
This is Epicenter, Episode 274, with guest Alexey Akhunov.
This episode of Epicenter is brought to you by Microsoft Azure.
Do you have an idea for a blockchain app but are worried about the time and cost it will take to develop?
The new Azure blockchain dev kit is a free download that brings together the tools you need
to get your first app running in less than 30 minutes.
Learn more at aka.ms/epicenter.
Hi, welcome to Epicenter. My name is Sébastien Couture.
And my name is Friederike Ernst.
So a bit of housekeeping: we are going to be at EthCC in Paris.
At least I'll be there.
Other hosts might be there, not quite sure yet.
So EthCC is happening between March 4th and March 10th.
Actually, that's the blockchain week.
EthCC itself is actually happening between the 5th and the 7th.
With those whole bunch of things happening that week in Paris,
hackathons and side events and meetups and such.
It's apparently going to be a huge event.
The organizers have told me that they're expecting up to 1,500 people,
and there are 300 speakers that are scheduled to speak.
So it should be really amazing.
If you haven't got your tickets yet, I really encourage you to do so.
They're actually quite affordable and should be a great event.
And we will be having a meetup at EthCC.
It'll be on the Wednesday, so that's Wednesday, March 6th.
The venue, we don't have the venue yet, but we are taking signups for the meetup on our Eventbrite page.
So if you go to epicenter.tv slash ethcc, you can sign up there, RSVP for the event, and we'll send the location as soon as we have it.
So looking forward to seeing you in Paris. And so today our guest is Alexey Akhunov.
Alexey is an independent researcher and works on Ethereum and on different
projects with the Ethereum Foundation.
And so we talked about a couple different things.
Primarily, we talked about the roadmap for Ethereum, what is known as
Ethereum 1.x and how it relates to Ethereum 2.0, Serenity.
And we also talked about his work on a project called TurboGeth.
It was a really interesting conversation because it allowed us to see
how the Ethereum roadmap has actually
evolved since Devcon with this new point release called Ethereum 1.x.
With Ethereum 2 looming so prominently on the horizon, it's often easy to forget that
Ethereum 1 is actually going to be here for somewhat longer, and it's necessary and
totally worthy of people's time to actually make improvements upon that.
And we talked with Alexei about what he's doing in that field and how it's going.
All right.
So without further delay, here's an interview with Alexey Akhunov.
Hi, we're here with Alexey Akhunov today.
Alexei is an Ethereum researcher.
Hi, Alexey.
Glad you're with us.
Hello.
Good to be here.
Fantastic.
Alexey, let's jump right in.
Can you tell us what your background is
and what you're currently concerning yourself with?
Yeah, so since I was very young, I was always doing programming, I guess.
That was my profession all my life.
So I wrote my first computer game in Basic when I was 12 years old, and my brother helped me to debug it because I didn't know how to debug things.
And then it led to me going to university to study computer science and programming,
and then I did a PhD in computer science,
and then I worked multiple places
and it was always programming
so I learned a bunch of programming languages
and eventually
I learned about cryptocurrency
in 2012 from my colleague at work,
and we started to have these kind of lunchtime conversations,
and at first I didn't actually understand it
so it took quite a few attempts for this guy to explain it,
to kind of make me understand what it is.
And then I started to research Ethereum when I was thinking about decentralized storage.
And the reason we thought about it is that you've probably heard about the idea of
mesh networks, like a mesh internet.
And so one of my colleagues at work said that, well, we should have this mesh internet
instead of internet providers.
But then I said, well, how are you going to do the search?
because the search requires storage
and so you need data storage
so I started looking at the things
eventually I found the Ethereum white paper
I looked at it and I thought
that it would still not solve the storage problem
but it's interesting in its own right.
so since then I started following
Ethereum and
in June 2017
I went full-time
working on these projects,
mostly related to Ethereum, but
sometimes I did some other things.
And at the moment, I'm mostly working on this Ethereum 1.x project, which is about ensuring the longevity of the existing Ethereum network.
So that's my current thing.
Oh, I forgot that I was doing TurboGeth most of the last year; it's kind of my version of an Ethereum client.
Cool.
So maybe let's just jump right in with TurboGeth.
So I know you've talked about this on many occasions.
So Geth is one of the main Ethereum clients.
And TurboGeth is some sort of improvement on it, right?
Well, yes, in certain ways.
It is not currently functionally superseding it,
but my goal is to be functionally superseding.
Yes.
So what are some of the issues
that you saw in Geth that you felt you could improve upon and ultimately led you to start
building TurboGeth?
So the way it started is that I simply did some profiling.
So in Geth, in the Go language, there's a very good toolkit for profiling.
So it was very easy for me to just run a Geth sync under the profiler and see where it
spends most of its time. So I saw that a lot of time was spent going into the database.
And then I started digging deeper and I realized that, wow, each access to the state
actually needs multiple accesses to the database. So I thought, well, that is really weird.
This was kind of not how I expected it to be. And then, digging deeper, I wanted to
fix that part. I wanted essentially that each
access to the Ethereum state takes one hit on the database and no more.
And that was kind of the defining goal of this.
And then it sort of turned out to be quite a deep rabbit hole, which took me a year to explore.
How much better does TurboGeth fare compared to Geth?
So at the moment it's better in two
areas, I think, and it might be on par or maybe slightly worse in others. So it's better in terms of the
compactness of state history. So if you, let's say, run an archive node, which is the node where
you have the entire history of state expanded, not in the blocks, but expanded, meaning that you can
access it really fast. So I haven't run the Geth archive node for a while,
but the last time I did, in the summer, it was about 1.5 terabytes on disk.
So that is an expanded state.
In TurboGeth, however, currently the expanded state in an archive node is about 360 gigabytes,
which is a big difference.
And that's the one thing.
And the second thing is the actual speed of access.
So when you want to run some data analytics, you want to retrace transactions from the past,
TurboGeth is definitely, I don't know, probably up to 50 times faster. So that's why I can do
data analysis on it that I wouldn't be able to do on Geth. For example, for my recent project,
I can retrace all transactions, to gather, let's say, the number of SSTOREs in a block, in about two days,
retracing all the transactions in all seven million blocks. Imagine if I had to do it
with Geth: it would probably take me, like, 50 times longer. So it would take me like half a year,
which, you see, becomes impractical. What have you actually implemented in order to actually
facilitate these improvements? So essentially, I changed the way that the Ethereum client
represents its state in a database, and that's the main difference. And most of the
clients that exist now, and I think probably all of them except for TurboGeth, they store the state as
what they call a trie, or Merkle Patricia tree, which is essentially a tree where each node has at
most 16 children. And the property of this tree is that if you want to read or write a certain
entry, you have to start from the root and go down the tree. And the deeper your
entry is, the more hops down the tree you're doing. And another important bit about this hopping is that
you cannot do the next hop before you've done the previous one. So there's a data dependency. That is actually
the initial thing that I observed. Because of the data dependencies, you cannot do these
things in parallel. You have to hop from the root down to the leaves. In TurboGeth, I decoupled the
state storage from this Patricia tree. I realized that the only reason why you need the
Patricia tree is to compute the state root. But you can store the data as you like. And I like
to store the data in a flat format, where the key is the hash of the address and
the value is the serialized value. So it means that when you want to access a certain item in
the state, you just need one query to the database, just one. And that makes most
things much faster.
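To make the hop-by-hop point concrete, here is a minimal, illustrative sketch contrasting the two layouts. This is not TurboGeth's or Geth's actual database schema: the node encoding, the hash stand-in, and all names here are made up for illustration. In the trie-style layout every lookup walks root to leaf with one dependent database read per hop; in the flat layout the key is simply the hash of the address.

```python
import hashlib

# Toy key-value "database" that counts reads, standing in for the
# LevelDB-style store underneath an Ethereum client. Illustrative only,
# not any real client's schema.
db, reads = {}, [0]

def db_get(key):
    reads[0] += 1
    return db[key]

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()  # stand-in for Keccak-256

address = b"\x01" * 20
account = b"rlp-encoded-account"  # placeholder for a serialized account

# Trie-style layout: every node lives under its own hash, so fetching a
# leaf means walking root -> branch -> leaf. Each hop is a database read
# that depends on the previous hop's result (the data dependency that
# prevents doing the reads in parallel).
leaf = account
db[h(leaf)] = leaf
branch = b"node:" + h(leaf)
db[h(branch)] = branch
root = b"node:" + h(branch)
db[h(root)] = root

def trie_get(root_hash):
    node = db_get(root_hash)
    while node.startswith(b"node:"):           # follow child references
        node = db_get(node.split(b":", 1)[1])  # next read needs this result
    return node

reads[0] = 0
assert trie_get(h(root)) == account
trie_reads = reads[0]   # one read per level of this toy three-level trie

# Flat layout (the idea described above): the key is the hash of the
# address, the value is the serialized account. One read, always.
db[h(address)] = account
reads[0] = 0
assert db_get(h(address)) == account
flat_reads = reads[0]

print("trie reads:", trie_reads)  # trie reads: 3
print("flat reads:", flat_reads)  # flat reads: 1
```

The point is the read count: the trie walk costs one read per level, and each read must finish before the next can start, while the flat layout always costs exactly one read, which is what makes batch data analysis over the whole chain so much faster.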
So what are some of the tradeoffs in using it?
Because it seems to me like this is, of course, something we would want,
and we would want all Ethereum clients to adopt similar improvements.
There must be some tradeoffs at some point, are there not, in order for someone to still
want to use the regular Geth client?
Yes.
At the moment, there are some things that are not supported in TurboGeth which are
working in Geth, but I think they will all be superseded.
So one of the things that I cannot do in TurboGeth is what they call fast sync.
And fast sync is a way of sort of joining the Ethereum network where you
download the current state starting from the root and then just rebuild the Merkle tree.
And this particular way of syncing requires a certain query, which gives you the hash of a node in the Patricia tree,
and then you're supposed to answer with the serialization of this node.
Because TurboGeth doesn't store this Patricia tree at all, it has no way of responding to this query.
It doesn't know where this particular hash lives, on which part of the tree, in which part of the history.
Up until recently I thought this was going to be a big challenge, but after the workshop that we've done recently, I've discovered that it might actually not be a problem. So we're going to develop a new sync mechanism, which will actually be more efficient with TurboGeth than with Geth, because it has the flat structure. So I think in the near future TurboGeth will fully functionally supersede Geth,
if I have enough time to do this.
Just to sort of understand the context in which you're building this,
you're building this by yourself,
or are you working in collaboration with the Ethereum Foundation in any way to sort of incorporate TurboGeth into Geth?
Well, I didn't think about incorporation yet.
It started as my own project, and then
I received a grant from the Ethereum
Foundation in 2018.
And then I had some support from Infura, because for them it would be beneficial to reduce
the storage requirements and potentially run the nodes on a cheaper hardware.
But because it's not functionally superseding yet, it hasn't happened.
So I don't know whether this is going to replace Geth or not.
I think I will leave this decision up to the Go Ethereum team.
I never tried to force it through.
And I also didn't have time and resources to actually try to port my changes into the Go Ethereum.
And I was very explicit about it with the Go Ethereum team.
And they're fine with that; they understand that I'm also under constraints.
Just to clarify, can you just quickly talk about what the current state of TurboGeth is?
So is it live?
Can people download it?
Yes, it is live.
The current version does not really
have the Constantinople/St. Petersburg fork in it;
the rebase hasn't been done yet.
So it is currently rebased up to the state of Go Ethereum
which existed somewhere in January, I think, end of January.
And you could currently download it.
You can do the full sync.
So this is the only sync it supports.
You have to start from Genesis and you apply the blocks.
If you have a decent machine, it will take you probably two weeks
to do that, and you will end up with a file which is like 360 gigabytes, and then you can do your
RPC queries, you can process the blocks. Some of these RPC queries are much faster, some of them
slightly slower, but yeah, it is working. I haven't tested all the RPC queries, but generally it works.
The light client is not supported and the snapshot sync doesn't work, but this could be fixed.
And Infura also managed to sync it some time ago,
so it sort of had some independent verification that it's actually not just a figment of my imagination.
So are you going to drive it forward or is this going to become a thing for the Ethereum Foundation to also, because maintaining this is a lot of work?
Yeah, so I'm going to drive it forward.
Because there is another
application for
TurboGeth. I'm also
working with the Interchain
Foundation, because they are very interested
in using TurboGeth as the
kind of engine for the
Ethermint project.
And at the moment I'm figuring out
how to
merge this flat
structure that I have in TurboGeth
with their
IAVL balanced trees,
and I think I've found a way to do it.
And my idea is to abstract, to modularize rather, the code
so that I can use it for both the Ethereum client and the Ethermint client.
So there is definitely appetite for it.
And as I said earlier, Infura is very interested in actually using TurboGeth.
And I also now discovered that the biggest use case for me,
and hopefully for other people, is to use it as a data analytics
source, because, as I said, it's much more viable to do data analytics with TurboGeth
than with any other client at the moment. And that potentially could be useful for companies like
Google, because they're already looking into this analytics in the cloud using Ethereum data.
And not just for Google, but for anybody else.
This episode of Epicenter is brought to you by Microsoft and the Azure Blockchain Workbench.
Getting your blockchain from the whiteboard to production can be a big undertaking, and something
as simple as connecting your blockchain to IoT devices or existing ERP systems is a project
in itself.
Well, the folks at Microsoft have you covered.
You already know about the Azure Blockchain Workbench and how easy it makes bootstrapping
your blockchain network pre-configured with all the cloud services you need for your enterprise app.
Their new development kit is the IFTTT for blockchains.
Suppose you want to collect data from someone in a remote location via SMS and have that
data packaged in a transaction for your Hyperledger Fabric blockchain.
The development kit allows you to build this integration in just a few steps in a simple
drag-and-drop interface.
Here's another great example.
Perhaps you're an institution working with Ethereum and rely on CSV files sent by email.
One click in the Devkit and you can parse these files and have the data embedded in transactions.
Whatever you're working with, the Dev kit can read, transform, and act on the data.
To learn more and to build your first application in less than 30 minutes, visit
aka.ms/epicenter. And be sure to follow them on Twitter at MSFTBlockchain. We'd like to thank Microsoft
and Azure for their support of Epicenter. So I was just going to mention Google and the work that
they're doing. So we had Allen Day on a few months ago, who's an engineer at Google and who's
sort of leading the project to bring different blockchains like Bitcoin and Ethereum into Google Cloud
so that you can effectively query the blockchain and do data analytics in a very simple query
language.
So what are the applications there and how could TurboGeth be beneficial?
I mean, because I presume Google's got these really incredible machines and they don't
really have, like computing power is not really an issue for them.
How is TurboGeth beneficial in this case?
Well, I mean, obviously if they don't care how much it costs to run these analytics,
then there's nothing for me to offer. But if you do care about the cost, I think
TurboGeth can make the cost of running this analytics much smaller.
Because of the efficiency, you can run it on, like, a tenth of the hardware that you
usually would. So the improvements that are made in TurboGeth over Geth are
mostly the database?
Yes, it's mostly database.
So could this be used for, I mean,
there are six or seven Ethereum clients,
but only two that are really used,
so Geth and Parity.
Could this also be used for Parity?
Yes.
Though I did actually think and talk with Parity about it.
So they need enough motivation
and enough sort of justification to do this,
for it will require a big overhaul of the architecture of their client to do something like this.
Essentially, this is the overhaul that I have done for Go Ethereum,
and a similar overhaul would be required because you essentially have to change many, many things.
So they had a similar project, not that ambitious, called ParityDB,
which is essentially a flattened representation of the current state.
This hasn't been integrated into Parity yet because they haven't found enough motivation for it,
but it might be, if we go ahead with some of our plans, like the advanced sync client. I sort of see that,
if this project is going, we will see convergence, and then other clients will implement it,
especially if it becomes a kind of all-around benefit. You know, there will be no reason not to
do this.
Yeah, I see.
So if there are no drawbacks, there's no reason to not also implement the superior
database.
Cool.
You earlier alluded to the fact that you also do other things.
You mentioned Ethereum 1.X.
Yes.
Can you quickly tell us and the listeners what you mean by Ethereum 1x and how it differs
from Ethereum 1 and Ethereum 2, and how Serenity and Constantinople actually fit into the picture?
Yeah.
So I'm going to walk you through what I call the short prehistory,
as I usually introduce the 1.x.
So it starts with Devcon in Cancun in November 2017,
where Vitalik gave his closing speech called, what was it called, a modest proposal for Ethereum 2.0.
And in that speech, he said that the plan, or
his suggestion, is to keep Ethereum 1.0 as the conservative and safe chain,
and most of the innovations will go into the shards,
shards on Ethereum 2.0.
So people thought, well, that makes sense, kind of.
And then in May 2018 at EDCON in Toronto,
Vitalik gave another presentation about what I think it was called
So You Want to Become a Casper validator.
And so that was about running
the Casper validators on your laptop, which was basically signaling that Casper FFG,
as it's called, the friendly finality gadget, was near, was close.
But then, somewhat surprisingly for people, in June 2018, there comes what I call a pivot
in Ethereum 2.0, meaning that Casper FFG on Ethereum 1.0 would not
happen. Instead, there will be a separate chain, which is called the beacon chain, that will be
launched in parallel to Ethereum 1.0. And then the Casper researchers and the sharding
researchers will be merged into one research team, because they turned out to be
doing lots of similar things. At first
people thought, well, maybe that pivot means that we're going to get
sharding faster or Casper faster. But then by October and November 2018, when again
Vitalik laid out the potential timeline for Serenity, it became clear it's actually not going to
be that fast, right? It might take three years optimistically to functionally supersede
Ethereum 1.0 and maybe five years not so optimistically.
By functionally superseding, I mean you need to go through phase zero, phase one, and phase
two of Ethereum 2.0 to actually get to the same functionality as we have in Ethereum 1.0.
So just launching the beacon chain is not enough.
So then people realize, oh, we have to live with the Ethereum 1.0 for another three years
at least and probably for another five years.
and look what is happening.
And so this kind of "look what is happening"
was sort of the initial chatter
at Devcon4 among the kind of core developers.
Like, whoa, what do you think is going to happen?
We're really struggling
with the growing state, with the synchronization,
things taking forever.
Is it going to work?
So this is how the Ethereum 1.x actually was born.
And the reason why it's called 1.X is because we don't know if it's going to be 1.3, 1.5, or whatever, 1.7.
So we just put the X in there.
Cool.
So as I understand it, there's a couple of things that are actually part of Ethereum 1.X and different people are working on them.
So maybe you can give us a short overview of who is part of Ethereum 1.X and what kind of projects they're working on.
Okay, so there were initially four working groups that were kind of initiated for
Ethereum 1.X.
So one of the working groups is what we called state rent.
And now we call it state fees, because it might not be just rent.
And so I took on sort of leading this group, and people agreed.
The people who were there, anyway.
I hope it's still okay.
And then there was another group about chain pruning.
So this is the group which will be looking at something which is not related to the state directly.
Current Ethereum clients also have some issues with storing the blocks,
which I think are at about 70 or 80 gigabytes now, the block bodies.
And also the growing event logs storage.
So we have to start pruning them at some point.
And so Péter Szilágyi has stepped in to lead this group.
So everything is kind of fluid at the moment.
So the groups are not really sort of restricted to these people.
So we always welcome new people to contribute.
And so the third group was the eWASM group.
So some people might be surprised what
eWASM is doing in there.
It's like how is it related to this?
So I agree that
it might sound a bit artificial,
but when we initiated this,
the belief was that something like state rent
or state fees will be a kind of restriction
on the resources that we give to the app developers.
And it's good to bring something
in return.
So you take something
and you give something else.
So you're not just taking, and you're not just giving.
It's a give and take.
So eWASM has the potential of doing this.
And also, eWASM could help
to reduce the number of new features
that we have to introduce.
There was a lot of talk about
introducing new precompiles
for lots of different
cryptographic primitives.
And if you look at the history of the Byzantium release,
there was a lot of time spent on just implementing two or three precompiles,
because of the proper gas calculation, lots of testing.
And so instead of doing that, instead of spending the core developers' time on that,
why don't we just do what they call the last precompile, which is eWASM,
and then you can implement all the precompiles there?
It has a lot of nuances, but these are the two reasons why eWASM is there.
And currently, eWASM is basically led by the eWASM team.
Yeah, I'm not going to list all the people because I probably will miss somebody out.
But so then the fourth group that has been formed is the emulation simulation group.
Essentially, this is the group that tries to find out what tools we can use to support the other groups, like the state rent and chain pruning groups, to do some
sort of tests, to run some test scenarios, and to try to predict what problems we're going to face
in the future. What are going to be the first things that will break? That's roughly the
description. As I said, I don't have a list of people who work in each working group because
at the moment it's all very fluid and open. I see anybody who's contributing to this as
part of the working group. So I just want to
maybe come back on this and maybe clarify a few things.
So this is all very confusing to me.
Okay.
The version numbers and then also, you know, Serenity and Constantinople.
And as someone who sort of vaguely follows this, I thought it was confusing.
So I can imagine for someone who's coming into the ecosystem, just how confusing it could be.
So we had Ethereum, you know, there's Ethereum like up until now.
and at some point, you know, the idea of Ethereum 2.0 was put out.
But as it stands, it looks like all the features that are on the roadmap
won't be ready for production deployment until maybe three to five years from now,
or at least until they arrive at stability; it might be some time.
as it stands has a number of problems and a number of issues.
So during DefCon, these people came together and said,
okay, let's form these working groups so that we can come up with a dot release
in the version one of Ethereum, which would include some of these features
so that we can continue to build the ecosystem and build dapps on top of Ethereum.
Yes, that's an exactly correct representation.
Also, people realized that the fates of Ethereum 1.0 and 2.0 are linked.
One cannot live without the other.
And this is the important bit.
So you cannot just simply forget about what's happening on Ethereum 1.0 and hope that
we will get there with Ethereum 2.0.
So one supports the other.
Okay. And just so, in the 1.x version of Ethereum, which is this version that we aspire to build at some point and to release, there are sort of, right now, four things in that roadmap. One is state rent, which we'll come to in a few minutes. The other is chain pruning, so optimizations on the size of blocks and logs; then eWASM; and emulation and simulation tools. And so.
What's left then in Ethereum 2.0 in the roadmap?
Okay, so Ethereum 2.0 is a very ambitious project.
And the parts of the work that we are aspiring to do in Ethereum 1.X will be very useful for Ethereum 2.0.
For example, it seems to me that there is some consensus that the state rent will be required in Ethereum 2.0.
But
the difference between
the state rent
in the second Ethereum,
I would call it,
and the first Ethereum is that
in the second Ethereum it could be
pure. It doesn't have the
legacy of the current ecosystem,
the current contracts. You don't
have to deal with the transitional
issues, with the things that you have
to look at. So in
the second Ethereum we can just
introduce it in pure form, without
all the mitigations for
certain vulnerabilities. But the lessons that you learn with the first Ethereum will be
invaluable for properly introducing it into the second Ethereum. The same, I would say, for
eWASM. You can learn a lot of lessons on the way and apply eWASM in a much better way to
the shards when it comes in. And, I mean, the chain pruning obviously will also be useful.
Basically, my conclusion is that everything that we do will be useful in the second Ethereum,
because it will make it a better system.
So it will inform a lot of design decisions.
In addition to proof of stake and beacon chains and all these other things.
Yes.
Okay.
It seems, it feels like this was a natural thing to do anyway,
like to do things in an iterative sort of fashion?
Yes.
It kind of seems logical to me.
Yes, I see it as a sort of gap which was temporarily overlooked.
And we simply just recognized this as the gap.
And then we still have to put resources into this gap rather than just shifting them all to the second Ethereum.
Cool.
I think, I think this is a great preface.
for actually talking about these proposed things in detail.
So can you briefly recap what you mean by state rent and why we need this?
Okay, so first of all, the idea of state rent, or, as some people used to call it, storage rent,
is not a new idea, and a lot of people entertained it back in 2014 and '15, before Ethereum started.
But previously, people were concerned about the cost of storage.
Essentially, they were looking at it like: when you increase, what I call expand, the state of Ethereum by, let's say, creating a new contract or by introducing a new item into contract storage, then you pay for it once in gas, and then it stays there forever, unless you decide to free it, which, you know, you might never do.
And so people were presenting this problem as: okay, so you pay for it once, but then other people will have to pay for it till the end of time.
So at some point I realized that this probably was the failure of the previous state rent researchers: they concentrated on this particular cost, on the cost of storage,
because if you start this argument,
you very quickly find yourself arguing about things that you don't know.
We don't know how much it costs to store this thing.
We don't know how much, depending on different kind of storage,
how the cost function tails off and things like this.
So instead, we sort of pivoted from this approach
and realized we're going to only talk about the performance implications,
not of state rent, sorry,
but of the large state.
The problem that we've seen is that, as Ethereum gets more use, or even with constant use, the size of the state grows, and that brings some performance problems, which we can observe.
It's not something that we speculate about.
It's something that we can measure.
And this is one of the reasons why we have this emulation simulation group to help us.
Okay.
But the other part that I didn't answer, I think, is what happens if we don't bother clearing the state that people no longer use.
And the concern is that state is probably something that you use for a while, but then, you know, the apps come and go.
A lot of them come and go.
But the state that they've been using stays in the system.
So everybody has to keep shuffling it
back and forth. So what we can talk about is the total state, which is essentially, let's say,
10 gigabytes at the moment. Everybody has to download 10 gigabytes when they join a network.
Then let's say that if there's a 6 gigabytes or 8 gigabytes of this, which nobody cares about,
like people rarely use it. It's just there because they were there first. And then there's a useful
state, which is everybody who knew new people who come to the system, they have to be
intent with using just 2 gigabytes.
And so the problem is that this kind of the useful state kind of keeps shrinking.
So you essentially end up with a lot of garbage in your state, which everybody has to shuffle around.
But the actual useful space is really constrained.
You can compare it with, let's say, the state of the property market in central London, or something like this, where a lot of houses stand empty. They are actually owned by somebody, but nobody really lives there. So all the people who want to live there have to be content with a very small number of houses, and obviously they will have to pay a lot of money to rent them, and stuff like this. Whereas if you could get rid of the empty houses, you could build new ones and redistribute them, in a sense, through the rent. So I think our premise is that this is important for longevity, so that the system does not become this kind of dead, what do you call it, ghost town, where there are lots of empty houses and nobody can use them.
Okay, so basically the idea is not to charge people once when they start using storage, and then pay them back a certain amount if they free it up again, but to actually charge people by the day, or by the block, for the storage they're actually using.
Exactly. So if they decide that they don't need these things anymore, they can just withdraw the ether, or they can just leave it, and then it will be garbage collected by the rent. So I see the rent mostly as a garbage collection mechanism.
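The rent-as-garbage-collection idea can be sketched roughly like this. This is a minimal Python sketch; the class, the `poke()` name, and the per-item price are illustrative assumptions, not constants or code from any actual proposal or client:

```python
# Hypothetical sketch of state rent as garbage collection: each account
# accrues rent per block proportional to the storage it occupies; once
# its rent balance is exhausted and somebody "pokes" it, it is evicted.

RENT_PER_ITEM_PER_BLOCK = 1  # illustrative price, not a real constant

class Account:
    def __init__(self, rent_balance, storage_items):
        self.rent_balance = rent_balance
        self.storage_items = storage_items
        self.evicted = False

    def charge_rent(self, blocks_elapsed):
        # Rent owed grows with both time and the number of storage items.
        due = blocks_elapsed * self.storage_items * RENT_PER_ITEM_PER_BLOCK
        self.rent_balance -= due

    def poke(self):
        # Eviction is not automatic: it only happens when a transaction
        # touches an account whose rent balance has run out.
        if self.rent_balance <= 0:
            self.evicted = True
        return self.evicted

acct = Account(rent_balance=100, storage_items=10)
acct.charge_rent(blocks_elapsed=5)   # owes 50, still solvent
assert not acct.poke()
acct.charge_rent(blocks_elapsed=10)  # now overdrawn
assert acct.poke()
```

The point of the `poke()` step is exactly what is described above: nothing happens by itself; some transaction has to touch the account before the eviction is recorded.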
And a lot of programming languages actually have that built in as well, right? So basically, you free up space that you no longer need.
Yeah, the difference between the programming languages and this is that we have a very difficult problem of determining what is not used anymore. So that's why we need things like recovery: if we made a mistake and removed something that people actually need, there has to be a way to bring it back.
Okay, so walk us through the process.
So basically, say you have a smart contract now and it uses storage. So it has to pay rent, so it has to have funds, or someone needs to pay funds on its behalf.
So what happens if no one pays?
So under all three of the current proposals, when the smart contract exhausts its balance and its rent balance, which are two separate things, so when there's nothing in either of these balances, then eviction happens. Under the current proposals eviction doesn't just happen automatically; somebody actually has to poke the contract. By poke I mean that somebody has to create a transaction which touches it, accesses it in some way; for example, somebody queries the balance. And then, at the end of the block, this contract gets evicted. Eviction happens differently for non-contracts and contracts. For non-contract accounts, which basically just hold some ether, eviction means simply removing them from the state, because there's nothing else really useful in them, no useful information. For contracts, of course, there's storage. So under the current proposal, eviction does not completely remove the contract from the state; it leaves a so-called stub, which is essentially a hash, a commitment to the entire state of the contract before the eviction. And this stub, unfortunately, has the effect that the contract is not completely removed from the state; it still has to dangle there. But this stub is what allows us to restore the contract later on.
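A rough sketch of the eviction-to-stub step. All names here are hypothetical, and `sha256` is just a stand-in for the real commitment, which would be the contract's Merkle-Patricia storage root:

```python
# Illustrative sketch (not actual client code): evicting a contract
# replaces its full storage with a 32-byte stub, a hash commitment to
# the pre-eviction state, so full nodes keep one hash instead of
# thousands of storage items.
import hashlib

def state_commitment(storage: dict) -> bytes:
    # Stand-in for the real Merkle-Patricia storage root.
    preimage = b"".join(
        k.to_bytes(32, "big") + v.to_bytes(32, "big")
        for k, v in sorted(storage.items())
    )
    return hashlib.sha256(preimage).digest()

def evict(state: dict, address: str) -> None:
    # The storage is dropped; only the 32-byte commitment dangles on.
    storage = state[address]["storage"]
    state[address] = {"stub": state_commitment(storage)}

state = {"0xabc": {"storage": {i: i * 7 for i in range(100_000)}}}
evict(state, "0xabc")
assert set(state["0xabc"]) == {"stub"}
assert len(state["0xabc"]["stub"]) == 32
```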
If it was evicted by mistake and somebody realizes it, so the biggest example is, say you had a multisig wallet with lots of tokens in it, and you made the mistake of not paying the rent. And suddenly you realize: oh, my multisig is gone, and there was a million dollars in it, I want it back. You would be able to use the recovery mechanism to rebuild the storage of this contract in another contract, and simply use a special opcode to restore it from the stub. And then you get your contract back. You can top up the rent, and then you keep using it, or you can move your things elsewhere.
And the stub, where is the stub stored?
The stub would still be stored in the state. So this is the price we pay for the recovery. The stub is expected to be a 32-byte hash, which is a commitment to what the contract looked like before it was evicted.
Okay.
And so I'm not sure I understand how that solves the problem
of freeing up state if the stub is stored in the state?
Yeah, so this is basically a non-perfect solution. We have a more perfect solution down the line, but we want to see if this non-perfect solution is actually going to be enough. Obviously, for contracts which have no storage at all, or very little storage, the stub is probably going to be close enough in size that there's not much benefit in clearing. But for contracts which have a lot of storage, say 100,000 items, the benefit of clearing will be quite big. Instead of 100,000 things, you can have one hash in the state, and everybody has to download only that one hash rather than all 100,000 items. Yeah, it's a non-perfect solution, but we hope that it might be enough for our purposes.
Okay, I'm still not sure I understand. The stub itself contains just the hash of the state, but, coming back to this idea of recovering a multisig wallet, where is the actual data if that data gets deleted from the blockchain?
Okay, so this data will have to be recovered from the history, obviously. If you want to recover your multisig, you have to go to an archive node, or some node which still has the history, recover what the state was, and then reconstruct that state on chain and instruct the EVM to restore it.
Okay, so it doesn't alleviate archive nodes from having to store this data. It only alleviates the regular nodes from having to store the state data.
Exactly, because the problem we're seeing is not actually the disk space that users have. As I said, we are trying not to care about that too much; we are actually now looking only at the active current state that everybody has to download when they join the network, which is the much more acute issue to solve. And your multisig will be deleted from that state, leaving this stub that you can use to then prove: okay, this is what the state of the multisig was, please recover it. And it will be recovered.
Can you use this as a feature? So basically saying: this is a contract I want to have on the public ledger, but I don't need to access it often, maybe only once a year or so.
Oh, sure, yes.
I will let it run out of rent, and then basically only the stub has to be saved by everyone, and then I will restore it when I need it again.
Oh, yeah, of course. You could probably use it to save some money on rent, yes.
So it's just kind of hibernating your contract. That sounds very similar to the stateless contract design that you find in some other chains, for instance Polkadot. Can you compare those two?
Yeah. So the difference is that with stateless contracts, we assume that when the contract is represented as a stub, it's still accessible by the normal operations. In our proposal, when the contract is in a hibernation state, when it's a stub, it's not accessible by anything. It's basically invisible to the EVM, with the exception of this special opcode, which is called Restore2. Only that opcode can see the stub; nothing else can. So to other contracts, or to other observers, it looks like it's not there. With stateless contracts that's not true. The stateless contract paradigm is that the contract is supposed to remain usable: you're supposed to be able to mutate that state, to access the bits in the off-chain storage. With our mechanism, you have to first restore it, bring it back on chain, and then you can use it. And then when you're finished using it, you can let it expire and have it cleaned up if you want to.
So earlier you mentioned a perfect solution and a non-perfect solution.
Yeah. So if we find that this is not enough, for example if there end up being tons of little contracts holding these hash stubs and the state is still not small enough, then the perfect solution, which is not actually perfect either, is basically to completely remove the contract from the state. And there are three alternatives for how to deal with that. The first is no recovery at all. That is the nuclear option: basically saying that once it's gone from the state, there's no way to bring it back. The second option is what Vitalik suggested in his paper on resource pricing. Essentially, when you want to revive a contract which is not in the state, not even as a stub, you need to prove two things. You obviously need to reconstruct the state that it had, and then you need to prove that at some point in the past this was the state of the contract. The way you prove it is through the header chain and the state root. So the first thing is to prove that it existed at some point in the past. The second thing is to prove that it did not exist at any point after that. This is what they call an exclusion proof. So first you do an inclusion proof, and then you do an exclusion proof.
And the exclusion proof is tricky, because you basically have to prove, for every block since the eviction, that the contract wasn't there. There are ways to optimize it. For example, you can mandate that every contract has to live for at least 1,024 blocks. Then you don't have to do the exclusion proof for every single block, but only for every 1,024th block. And the way you can mandate it is to say: whenever you create a contract, you prepay the rent for 1,024 blocks in advance, to make sure it will not get evicted before then.
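The inclusion-plus-exclusion idea under a 1,024-block minimum lifetime can be sketched like this. Everything is simplified: real proofs would be Merkle proofs against historical state roots, not set lookups:

```python
# Rough sketch of inclusion + exclusion proofs with a 1,024-block
# minimum contract lifetime (illustrative structures only).
CHECKPOINT = 1024

def can_revive(history, address, claimed_block, head):
    # Inclusion proof: the contract existed at claimed_block ...
    if address not in history.get(claimed_block, set()):
        return False
    # Exclusion proof: ... and it did not exist at any point after.
    # The minimum lifetime means a contract cannot appear and vanish
    # between checkpoints, so only every 1,024th block is checked.
    for b in range(claimed_block + CHECKPOINT, head + 1, CHECKPOINT):
        if address in history.get(b, set()):
            return False
    return True

# history maps block number -> set of live contract addresses
history = {0: {"0xa"}, 1024: set(), 2048: set()}
assert can_revive(history, "0xa", 0, 2048)

history[1024] = {"0xa"}  # it existed again later: revival must fail
assert not can_revive(history, "0xa", 0, 2048)
```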
And the third way to do it is what I call the graveyard tree. Evicting a contract would require access to some kind of graveyard Merkle tree where all the evicted contracts live. So if you want to evict something, you say: okay, this is the state of the graveyard tree, here is the Merkle proof to the place where I'm going to put this contract, I'm now putting this contract into the graveyard, and here is the modified Merkle tree proof. And then later on, if somebody wants to revive it, they give a proof: this is the contract inside the graveyard tree, and this is the update of the graveyard tree without it. So you basically take it out of the graveyard and put it back into the chain. That way you don't have to do exclusion proofs, but it requires everybody who ever wants to evict or restore contracts to have a full copy of the graveyard tree.
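A toy version of the graveyard-tree bookkeeping. The `root` function is a crude stand-in for a real Merkle root, and the requirement that participants hold a full copy shows up as both operations needing the same up-to-date graveyard mapping:

```python
# Sketch of the "graveyard tree" alternative (illustrative only):
# evicted contracts move into a separate committed set, so revival is
# a membership proof there instead of per-block exclusion proofs.
import hashlib

def root(graveyard):
    # Crude stand-in for a Merkle root over the graveyard contents.
    return hashlib.sha256(repr(sorted(graveyard.items())).encode()).digest()

def evict_to_graveyard(state, graveyard, address):
    graveyard[address] = state.pop(address)
    return root(graveyard)  # the on-chain commitment gets updated

def revive_from_graveyard(state, graveyard, address, expected_root):
    # A prover with a stale copy of the graveyard cannot produce a
    # proof matching the current commitment.
    if root(graveyard) != expected_root:
        raise ValueError("stale graveyard copy")
    state[address] = graveyard.pop(address)
    return root(graveyard)

state, graveyard = {"0xa": {"k": 1}}, {}
r = evict_to_graveyard(state, graveyard, "0xa")
assert "0xa" not in state
revive_from_graveyard(state, graveyard, "0xa", r)
assert state["0xa"] == {"k": 1} and graveyard == {}
```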
But we hope it will not get to those measures. I described them a little bit in my first proposal, I excluded them from the second for simplicity, and I might bring them back in the third, but basically we hope we will not need these measures because they're slightly more advanced.
Looking at this: if this were a completely new system, obviously this makes a lot of sense, that you don't just pay for storage once and then get to use it essentially forever. That's the way such a system should be designed. But obviously that's not the case here. So basically you have to move from a system where people actually deployed smart contracts under different assumptions to this new state rent system. And to me it seems there would be a lot of complications.
Correct, you're absolutely right. So this is what makes this, I would say, both very challenging and very rewarding at the same time, because we're not designing a pure system; we're designing the migration from the legacy system to this non-perfect system that we're introducing. So when we started analyzing the implications of rent on existing contracts, a few things sprang to mind. One of them is what we call the dust griefing vulnerability. And the conclusion was that most of the contracts that exist today will be vulnerable to this griefing attack. I can explain it to you if you want.
Yes, please.
Imagine, to take an example, our beloved ERC-20 token contract, which has functions like transfer and approve. Approve is a good example, because approve essentially allows somebody to pull tokens out of your account. And another feature of approve is that anybody can call it, even without being a token holder. So I can call approve on any token contract without ever having to acquire tokens. All I need is a tiny bit of ether in my account, and I can approve lots of things.
So imagine this token contract, which holds the information about all the token holdings in its storage. Under the state rent regime, this token contract will start paying rent proportional to the number of storage items. That means that if I am a villain, if I want to hurt this contract or make sure people abandon it, or maybe I'm a competitor or something, what I will do is start calling approve on lots and lots of random things, so that I inflate the storage of this contract while using only a little bit of gas. And so I will condemn them to pay lots of fees forever with just a tiny investment of gas. That's what I call the dust griefing attack.
So I create a lot of dust. I can do it with transfer as well: I can purchase some tokens, or acquire them in some way, and then just distribute them over lots of dust accounts, which will also inflate the storage. But approve is much better because you don't even need to acquire the tokens. And the same applies to lots of other contracts, for example the DEXes, EtherDelta, IDEX: every trade settlement creates another storage item, so as you trade you keep inflating this contract. So this is one of the first things to solve. And so far the intuition is that most of the contracts that exist and are popular today will be vulnerable, and they will essentially have to be rewritten, which is bad news.
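The approve-based dust griefing can be illustrated with a toy token. This is not real ERC-20 code, and the rent price is an arbitrary assumption; the point is only the asymmetry between the attacker's one-off gas cost and the contract's ongoing rent:

```python
# Sketch of the "dust griefing" attack on an ERC-20-style contract
# under per-item rent (simplified): approve() adds one storage slot
# per (owner, spender) pair and costs the caller only gas, while the
# contract then pays rent on every slot, forever.

class Token:
    def __init__(self):
        self.allowances = {}  # each entry is one rent-bearing slot

    def approve(self, owner, spender, amount):
        # Anyone may call approve, even with a zero token balance.
        self.allowances[(owner, spender)] = amount

    def rent_per_block(self, price_per_item=1):
        # Rent scales with the number of storage items.
        return len(self.allowances) * price_per_item

token = Token()
attacker = "0xbad"
for i in range(10_000):            # cheap, one-off cost for the attacker
    token.approve(attacker, f"0x{i:040x}", 1)
assert token.rent_per_block() == 10_000  # permanent cost for the token
```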
And then another realization we had: if we look at the contracts which depend on each other, let's say you have some sort of decentralized exchange and you have a thing like MakerDAO, which now have links to each other, so you can move things from one to another, or there are other interrelations. If we say, okay, now we're going to introduce rent, all of you are going to be vulnerable, you all have to upgrade at the same time, that is not plausible, right? So you have to say: you've got this much time to upgrade, maybe one year or something, and this is how you can do it, one by one. First the contract which is everybody's dependency upgrades, then the dependent contracts upgrade next, and so forth. I know this is really challenging, but we will see. We still don't know when this problem will become crucial to solve, but the current intuition is that it will have to happen within the next two years.
Will you help people determine what kinds of contingencies and dependencies there are? Because this seems enormously complex. It's a little bit like fixing an engine while it's running: you take out little parts and you need to make sure that the engine actually keeps running while you're switching out parts.
Yes.
This is enormous. I mean, this is one of the biggest parts of the project. In the project plan I've been creating, I call this part ecosystem research. It would consist of essentially enumerating all the different sorts of contracts and dapps that we have, and then, for each of them, having somebody look into those contracts and determine what their vulnerabilities are, what will happen to them, and in what ways they can be rewritten and modified. And of course then taking this information to the developers of these contracts and having conversations with them: this is how you're likely to be affected, this is how we think you should try to rewrite, and getting their feedback. Maybe they will give us ideas about missing features in the proposals, maybe something we haven't thought about. So yeah, this is going to be a larger piece of work, and at the moment we're trying to make it more community driven. By this I mean we're planning to create a lot of Gitcoin grants and bounties, so that multiple people can work on these different pieces at once, because it will require massive parallelization of effort. I'm not trying to do this myself, because there simply aren't enough hours in a day. There has to be a lot of people working on this, so this is probably going to be the most intensive part of the whole thing.
So this is going to be a massive undertaking. And just to give people an idea of the unforeseen consequences this could have: you recently tweeted about the Parity contract that was suicided last year. Can you talk about that?
Yeah.
So first of all, I don't want to give people the impression that this was intentionally designed this way. It's a realization that came to me when I was reading some tweets by Maurelian. They were discussing the consequences of CREATE2, and Maurelian was asking whether it would enable recovery of the Parity multisig. And I said, well, obviously not, but something else would. So essentially, in proposal number two there was a part called replay protection. When your non-contract account gets evicted from the state and then gets reinstated, and you can reinstate it by sending some ether to it, it would normally come back with nonce zero, which means that you can repeat the nonces that you had before.
So let's say you pretend that you are the person who has the private key of the account that deployed the Parity multisig library. Pretend that you're Gavin Wood, if it was Gavin Wood, right? And you still kept that private key. Then we deploy this replay protection, and then we deploy the rent and eviction. What Gavin would do is take that private key, take that account, and remove all the ether from it, so it goes to zero. Then he gets it evicted by poking it. So his account gets evicted. Then he puts some more ether into it, and it comes back with nonce zero. Then he says: okay, when I created the Parity multisig library, my nonce was, let's say, 35. So he does 35 transactions to something else, and then he arrives at the same nonce he had when he deployed the multisig library. Now he does a transaction which deploys a completely different contract, one which doesn't have the vulnerability and has the problem fixed. So the library comes back at the same address, and everybody gets access to the funds again, right?
It wasn't designed this way, but the reason it became possible is that we did not think about the nonces. Nonces are not just for replay protection; the nonce is also used in determining contract addresses. And now that has to be a consideration. So in the third proposal I will replace this particular replay protection mechanism with another one which does not repeat the nonces. So essentially the conclusion is that nonces cannot be repeated.
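The nonce-reuse point can be sketched like this. In Ethereum the address of a created contract is keccak256(rlp([sender, nonce])); the sketch below uses sha256 as a stand-in, which is enough to show that replaying the same nonce from the same account reproduces the same contract address:

```python
# Sketch of why nonce reuse matters for contract addresses. Real
# Ethereum derives the address as keccak256(rlp([sender, nonce]));
# sha256 here is only a stand-in for illustration. If eviction resets
# an account's nonce to zero, the creator can burn transactions up to
# the old nonce and deploy different code at the original address.
import hashlib

def contract_address(sender: str, nonce: int) -> str:
    preimage = sender.encode() + nonce.to_bytes(8, "big")
    return "0x" + hashlib.sha256(preimage).hexdigest()[:40]

deployer, library_nonce = "0xgavin", 35  # hypothetical values
original = contract_address(deployer, library_nonce)

# After eviction and reinstatement the nonce is back at zero:
nonce = 0
for _ in range(library_nonce):
    nonce += 1  # each dummy transaction bumps the nonce by one

# ... and the next creation lands at the very same address.
assert contract_address(deployer, nonce) == original
```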
Interesting.
So I mean, this is an enormously ambitious undertaking. What's the timeline on this? And do you intend to have some sort of proof of concept?
Yeah, well, the timeline for the whole project is probably anything between 18 months and 30 months. The state rent, for example, which is the most complicated bit of it, has many pieces in it. At the moment, as I'm writing proposal 3, every change has a letter from A to S; how many letters that is, you can figure out yourself. And they are organized in a sort of dependency diagram, which shows you which change is a necessary prerequisite for another change. Such a diagram already exists for the second version, so you can have a look; the third version's is more complicated. Then, using this diagram, we split it into pieces: this could happen in the first hard fork, this could happen in the second hard fork, and this happens in the third hard fork. And actually, an interesting bit about it is that at some point, let's say after the first hard fork, we also get some side benefits. We would be able to increase the block gas limit, which currently we do not recommend doing because it would accelerate the state size growth. So, as I said, if it's three hard forks and if we assume that each hard fork takes us nine months to execute, then it will be 27 months, right? But we will already start getting some benefits after the second hard fork, if we start evicting the non-contract accounts.
As for the proof of concept: I think the way we're going to operate with this thing is that the proof of concept has to be done as an iterative process, before you even get to the EIPs. That's my opinion specifically for this project, because it's so complex. We already had one proof of concept of the first version of the proposal, done by Adrian Sutton from PegaSys. We're going to do more of these proofs of concept, potentially again with Adrian, but also engaging some other people. So essentially, after every one or two proposal versions, we will do a proof of concept to figure out what is underspecified and what is ambiguous. And obviously, this proof of concept will also allow us to generate the test cases, so that it's pretty easy for the other core developers to then implement these things. So we do a lot of work up front, so that when we put out the EIPs, we already have a proof of concept and tests generated. That's the ambition.
Earlier, we talked about eWASM and mentioned that Ethereum 1.x would have eWASM as part of its roadmap, and that in Ethereum 2.0 there would be improvements on eWASM. Now, I don't think we'll go into the details of what eWASM is; our listeners can go back to episode 245 with Martin Becze or 263 with Justin Drake for a more in-depth discussion of what exactly eWASM is. But with regards to the roadmap, can you talk a bit more about the steps we would see here with Ethereum 1.x and Ethereum 2.0 with regards to eWASM?
So, yeah, one of the reasons eWASM has been brought under this umbrella of Ethereum 1.x is that it enables us to not concentrate on what we call point features. If you look at, let's say, the Byzantium release of Ethereum, which happened in October 2017, it included four new precompiles, I think, which are the sort of optimized subroutines for certain cryptographic operations. And although they were very useful, it actually took a long time and a lot of work from the Ethereum core developers to prepare them; it's very tricky. And since then there have been more and more requests for more precompiles, because a lot of things that people find useful are simply too costly to implement in EVM bytecode. So there are more and more requests, and at the moment we find ourselves thinking that if we try to implement all these requests, there's going to be no time for other things.
So eWASM is seen as a solution to this, as what some people call the last precompile. Essentially, you roll out an engine which will be more efficient at executing those operations, more tuned to the hardware, as eWASM is. And so it will enable us to introduce these features much more easily. That's why I call eWASM a meta-feature, as opposed to these point features, which are the specific precompiles people ask for. So we don't have to spend our time coding up each specific precompile that people want. We just give ourselves an execution engine, and maybe in the beginning it will be used by core developers to quickly introduce these requested precompiles, and in the future it will just be opened up to everybody. So if you want your own precompiles, or whatever, you just deploy them as an eWASM contract. That's the vision.
And eWASM would then in effect run in parallel to the EVM?
Yes. There is no plan, at least in Ethereum 1.x, to replace the EVM with eWASM, because I don't think it's practical, and other people also think it's not practical. So there will be some way to call eWASM subroutines from EVM code. It might be done via a special opcode or some precompile. Essentially, there will be some kind of boundaries where you enter eWASM, and at these boundaries the eWASM execution engine, which will be in all the Ethereum implementations, will take over from the EVM. And when the eWASM subroutine finishes execution, it will give control back to the EVM. And at some points during eWASM execution, the eWASM code might require access to some of the Ethereum state. So it's not just going to run some pure math computation; sometimes it will have to go and fetch something from the state, or maybe update the state.
Can you talk about some of the ways, or some of the scenarios, where it would be useful for the EVM to call an eWASM subroutine?
Yes.
So one of the things that was discovered through the work of, let's say, Greg Colvin, when he was working on what they call the EVM 1.5 project: he did a lot of experimentation, and he discovered that because the EVM has such a long word, which is 32 bytes, all operations have to be done on long words, and that's much, much less efficient than operations on 64 bits, which are implemented directly in hardware. eWASM, for example, is much more attuned to hardware execution because it has 32-bit and 64-bit registers. So when you execute this code, you don't have to do a lot of emulation of the long words, because math on the long words is much, much slower. So the idea is that, for a lot of useful things, we can execute things faster on eWASM, simply because it has different arithmetic and because it might have more optimized compilers and things like this. It also, for example, doesn't have dynamic jumps like the EVM has, I think, so it allows more static analysis and other optimizations. That's my view of this, at least.
Cool.
So I think we need to wrap up soon, and I have some questions about how you see the future of Ethereum. It seems that for Ethereum 1 there is still a lot of potential to make Ethereum 1, as is, better, without actually touching either sharding or proof of stake. So given that Ethereum 2 is going to take longer than expected to actually be put into place, what would you hope to see in the coming months and years with Ethereum 1.x? And should, for some reason or other, Ethereum 2 fall through entirely, and it could fall apart, because each of these two big topics, sharding and proof of stake, is enormously complex on its own, and meshing them together only adds to that. So do you think there is a danger of that happening, putting the pressure on Ethereum 1.x to actually step up and take over for the next couple of years?
Well, I think that obviously there are a lot of uncertainties about the future of Ethereum 2.0, because at the moment they have phase zero pretty well specified. But as we saw in some of the reviews, and I think the one I read was from James Prestwich, who did a very interesting review, as you go through the phases there's more and more uncertainty. For example, phase three is very, very vague about how it's going to work. So I do sometimes have, not worries, I don't worry that much, but I do have some doubt: it might take even longer than, let's say, five years for this to happen, and whether there will be clear benefits, and how exactly sharding is going to be done. So, me personally, I like fixing things that currently work rather than designing new things, because designing completely new things is probably not my strength. I like fixing things that already work, and this is why the Ethereum 1.x project is kind of perfect for me. What we see in Ethereum 1.x, I think, is that we will be able to solve some of the biggest problems without any controversy and without any hard forks; I could go into this if you would like. And if we see that Ethereum 2.0 has some more delays, then we will redouble our efforts, and I think we can do certain things to keep Ethereum working. We might have to take some extraordinary measures. For example, we might need to make state access remote and do what I call poor man's sharding of state, which is a sharding that is not enforced by the protocol, as in the second Ethereum, but a sharding which emerges simply from the fact that people are not storing the entire state anymore. So these are the things I might see happening in the future.
But what I do believe is that we have to keep focusing on this, and not simply hope that things will keep working. Because I think if we simply hope that things are going to keep working, they will stop working. If we keep focusing on making sure they do work, and just keep fixing them, we have a much better chance of basically keeping it alive for as long as we want, I think. And there are some people, like Greg Colvin, who believe that Ethereum 1.0 is probably going to be alive, or used, for, you know, maybe forever. Maybe it will coexist with the second Ethereum; maybe the entire transition will never happen. Maybe some people will prefer to stay on this system for a very long time, and you can't really force them to go away, can you? Well, maybe you can, but we'll see how this happens.
So let's look into the future now; I just want to get your thoughts on this. Presuming that proof of stake chains are the future of blockchain, and that proof of work chains become less and less used because proof of stake has clear benefits: what we're seeing right now is that Cosmos is about to launch, Polkadot is also making some headway, and these chains are proof of stake native. If it takes three to five years for Ethereum 2.0 to be fully realized with proof of stake and sharding and side chains and everything, do you think there's a risk that Ethereum would lose some of its network effect, and some of its authority as the primary and authoritative smart contract dapp platform, to other chains that are natively proof of stake and already have the ability to build apps on them?
Yeah, so this is a very interesting question. First, to address the point of proof of stake: I was really looking forward to the Cosmos launch because, to me, it was the first kind of non-trivial proof of stake system that will come to production, and I'm really excited for it to launch. I think there was a bit of a competition between all three of these projects, and now obviously we will probably see Cosmos launch first, Polkadot after that, and Ethereum only third. We will see how it goes, and hopefully it is going to work. But I would still say that proof of work is not dead yet, so we will be stuck with it for a while.
And the second part of your question is whether Ethereum might lose its appeal. It might actually. And one way to not let this happen is to actually bring new experiments and innovations to it. As an example, a lot of people look at state rent as some sort of negative thing. But I would say that, if you look around, a lot of blockchains that reach a certain scale run into the same issue. In the beginning, when you launch a new blockchain, it's always "yes, we're going to be a super-duper blockchain, it will scale enormously." But when they do reach a certain scale, they start seeing the problems of the growing state, and these problems repeat again and again everywhere. That's why lots of projects have started to think about introducing state rent, but nobody has actually done it so far. So what I see is that if Ethereum does it first, and actually shows how it needs to be done, this is going to be a really big step forward, not a step backwards. It will be the first real-life introduction of this concept, which everybody was just talking about and theorizing, and now we would have it in practice. I can see it as a competitive advantage, pretty much.
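The core intuition behind state rent, as discussed above, can be illustrated with a deliberately simplified toy model: every account pays a small fee per block for occupying state, and is evicted once it can no longer pay, so the state stops growing without bound. To be clear, the constant, class, and method names here (`RENT_PER_BLOCK`, `RentLedger`, `charge_rent`) are invented for this sketch and do not reflect the actual mechanics of the Eth 1.x state rent pre-EIPs.

```python
RENT_PER_BLOCK = 1  # hypothetical flat rent per account per block

class RentLedger:
    """Toy illustration of the state-rent idea: accounts pay for the
    state they occupy, and are evicted when their balance runs dry.
    A gross simplification of any real proposal."""

    def __init__(self):
        self.accounts = {}  # address -> balance

    def create(self, address: str, balance: int) -> None:
        self.accounts[address] = balance

    def charge_rent(self, blocks: int) -> list:
        """Charge rent for `blocks` blocks; return evicted addresses."""
        evicted = []
        for address in list(self.accounts):
            self.accounts[address] -= RENT_PER_BLOCK * blocks
            if self.accounts[address] <= 0:
                del self.accounts[address]  # state shrinks again
                evicted.append(address)
        return evicted
```

Even in this crude form, the design choice is visible: rent turns state occupancy from a one-time cost at creation into an ongoing cost, which is what gives the system a mechanism for reclaiming space from abandoned accounts.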
And the other thing as well is doing things like eWASM. I know that Polkadot has already gone with WASM, but again, we will see who is going to do it first.
Okay, well, Alexei, thank you so much for joining us today. It was fascinating to get a glimpse into this, and also to really get to understand where things are at with Ethereum right now. It's true that since Devcon a lot of things have been percolating, and it really helps to have someone lay out the current state of things and where things are going.
Okay. Thank you very much for having me.
It was a real pleasure to have this chat.
Thank you.
Thank you.
Thank you for joining us on this week's episode. We release new episodes every week. You can find and subscribe to the show on iTunes, Spotify, YouTube, SoundCloud, or wherever you listen to podcasts. And if you have a Google Home or Alexa device, you can tell it to listen to the latest episode of the Epicenter podcast. Go to epicenter.tv/subscribe for a full list of places where you can watch and listen. And while you're there, be sure to sign up for the newsletter so you get new episodes in your inbox as they're released. If you want to interact with us, a guest, or other podcast listeners, you can follow us on Twitter. And please leave us a review on iTunes. It helps people find the show, and we're always happy to read them. So thanks so much, and we look forward to being back next week.
