Epicenter - Learn about Crypto, Blockchain, Ethereum, Bitcoin and Distributed Technologies - Vitalik Buterin: DAO Lessons, Casper and Blockchain Interoperability
Episode Date: February 21, 2017. Ethereum founder Vitalik Buterin joined us once again to discuss the state of Ethereum and the efforts to innovate the protocol. We covered the takeaways from the DAO fork, switching to the proof-of-stake system Casper, and how to think about blockchain interoperability. Topics covered in this episode: – Lessons from the DAO hack – The security flaws of Proof-of-Work – Why Proof-of-Stake will provide more security and scalability – The state of Casper and transition timeline – Blockchain interoperability Episode links: Ethereum Foundation Ethereum R&D Roundup Valentine's Day December Development Roundup R3 Chain Interoperability Paper Epicenter 58: Ethereum, Proof-of-Stake and Future of Bitcoin Epicenter 91: Frontier Launch and Scalability This episode is hosted by Brian Fabian Crain and Meher Roy. Show notes and listening options: epicenter.tv/171
Transcript
This is Epicenter Episode 171 with guest Vitalik Buterin.
This episode of Epicenter is brought to you by the Merkle Week,
a blockchain conference, training seminar, and hackathon taking place in Paris from March 9th to 12th.
Learn from leading experts and get certified on building blockchain applications designed to enhance organizational governance.
Get your tickets at merkleweek.com and use the promo code EPICENTER to get 30% off early bird tickets.
And by Jaxx.
Jaxx is the user-friendly wallet that works across all your devices and handles both Bitcoin and Ether.
Go to jaxx.io and embrace the future of cryptocurrency wallets.
Hello and welcome to Epicenter, the show which talks about the technologies, projects, and startups driving decentralization and the global blockchain revolution.
My name is Brian Fabian Crain.
And I'm Meher Roy.
Today we are glad to welcome Vitalik Buterin back to the show.
We'll talk about the DAO fork, Casper, sharding, interoperability, zk-SNARKs, and the application space of blockchain technology with him.
Generally, we have our guests, you know, introduce themselves, but I think Vitalik doesn't need an introduction in our space, so I'll probably just dispense with that formality.
And perhaps we could just jump in and talk about the DAO fork. So, the DAO attack and the resulting fork of the Ethereum system.
So from the outside, like, we have all seen what happened.
And I thought it would be a good opportunity to know what it was like, what that month was like to be in your shoes, Vitalik.
So maybe if you could tell us your version of the story.
So 2016, June 17th at 3 p.m. local time, I was in Shanghai.
And it was just another normal day I was visiting China talking to a few of our local partners.
And at one point, I got a message on, I believe, Skype, in one of our kind of semi-public Skype or Gitter channels, that just said, hey, you know, someone should check this out.
It looks like the balance of the DAO is decreasing, so someone should check this out.
And I immediately went and I checked it out.
And, you know, the balance said it was 9.5 million ether, and I immediately got concerned.
I started looking, and it definitely seemed like ether was getting drained fairly quickly.
So I sent off a message to some other people from our team,
and over the course of about 15 minutes,
Christian, Christoph, and a couple other people went in and tried to see what was happening,
and it fairly quickly became clear that, like, yes, this was very, very bad.
and at that point, a fairly large number of people in the Ethereum Foundation, Ethcore, and Slock.it came together fairly rapidly and started talking to each other and trying to figure out what was going on. Was there any way to try to stop the situation? Like, is there anything that could be done? What was happening? What would the consequences be? And just, like, get an idea of everything in general.
So we had about two or three hours worth of fairly frantic calls and Skype discussions.
At the end of that, that first blog post came out where we basically said what had happened.
We suggested the soft fork and hard fork strategy.
Then after that, the Go developers started going off to actually implement the soft fork, and I was kind of trying to be online and be as helpful as I could. But even still, there were limits to what I could do personally because, first of all, I'm not actually one of the Go developers, and I don't have too much experience with either the language or the client, so, like, I've made a few online patches, but I'm not the sort of person who would actually be the right one to practically implement an actual soft fork patch. So I know Jeff and his team were working fairly hard on that for several days.
I mean, I think the basic kind of scaffolding was done in about one or two days, but, you know, as usual, they spent quite a bit of extra time going over it and testing it and making sure everything works well.
And in parallel, we were trying to kind of get an idea of what the community thought that we should do.
And I remember there were at least initially some informal kind of polls that were happening on Reddit.
There was also a poll that was happening in the Chinese community.
And, you know, being in China myself, I ended up passing quite a few messages back and forth between the various channels on WeChat and Reddit and so forth.
And, like, on both sides, it seemed fairly clear that there was at least initially something like 80 to 85% support for the soft fork and about 60% for the hard fork.
So, you know, we basically took that as, you know, we have a mandate from the community to definitely try any non-hard-fork routes to resolving the situation if at all possible.
So we wrote up the code for the soft fork and after about a week pushed it out.
And at the same time, there were a lot of other efforts happening, with developers trying to go in, inspect the DAO code, see if there's some way of counter-attacking, trying to figure out all these other strategies. Can you slow the attacker down? Can you drain the child DAO? Can the attacker drain the child DAO that you drained, and so forth? And what we figured out, basically, is that you could potentially have this game, and if you had the soft fork, then you could prevent the attacker from retaliating until you could win. But without the soft fork, it was fairly unclear, especially initially. And it seemed like it might be possible to just keep the money frozen forever, or, you know, there's always the risk that someone will discover even more bugs inside of it.
And Gün Sirer and I basically started strongly recommending against trying to play those kinds of complex games, and started pushing for just doing a hard fork and getting it over with.
Then the post came out that basically said, you know, the soft fork was kind of deemed to be unsafe.
And at that point, the soft fork effort was abandoned.
In parallel, I know there was that one Chinese team that was working on building Carbonvote and pushing it out.
And Carbonvote, I believe, started running roughly around the same time as the soft fork attempt failed. And that was about two weeks in.
And after the soft fork failed, there were still about three weeks left for us to do the hard fork. And within that time, people were scrambling to try to figure out the hard fork specification, plan the hard fork, figure out what the consensus tests are, figure out what else would need to be tested, and do some extra network tests, just because of the possibility that this fork would go less smoothly than something like the transition from Frontier to Homestead.
Then, you know, the votes started coming in on carbon vote.
And I remember on Reddit, it seemed kind of very chaotic.
At least for myself personally, I was surprised that I actually didn't receive any, like, serious death threats. I mean, I received a lot of messages from trolls, and I still do, but nothing on the order of, you know, actually threatening to kill me, which is, I guess, kind of a nice silver lining.
Then eventually the Carbonvote showed a result of about 85% in favor of the fork, and that's a result that I kind of accepted, because it also seemed to roughly line up with a lot of the other polls that were happening at the time. So, like, I saw mining pools were around 75% in favor, and I saw support for the hard fork increase after the soft fork failed.
And so I knew that a strong supermajority of the community was definitely in favor of it going ahead.
And then, three days before the deadline, so three days before the attacker would be able to make their next move to get the money out, we released the code and people installed the code.
And I remember, when the hard fork was just about to take place, we were all at Cornell at a workshop that we were organizing together with IC3, and a few Zcash people were present.
And at, like, 9:20 a.m., 40 minutes before the first day of the workshop, that block 1.92 million hit, and everything seemed to go through smoothly.
So that was the first part.
And then obviously three days later, the whole kind of Classic side of the situation started to take hold, and things kind of continued going from there for a while.
So looking back on that, the original promise, one of the original things, was this unstoppable world computer, you know, no censorship, immutability. Those were some of the core promises of Ethereum. And looking back on that today, we have Ethereum Classic, we have Ethereum. I have heard many people say this has irreparably damaged Ethereum, or substantially damaged Ethereum. I'm not sure if I agree with that personally, but I'm curious, what is your point of view? Do you feel like this has done lasting damage to Ethereum? Do you feel like it was mostly just a valuable learning experience and Ethereum is fine? Or how do you look back on that event?
I mean, first of all, I think Ethereum is definitely fine. And I think, like, outside of a fairly small group of people that are really strongly into the sort of purity morality of, you know, if it's stained once, then it's gone forever, I think most people, even people who disagreed with the decision, or many of them, are kind of fine with it. And I think over time they're starting to see that, you know, Ethereum's kind of governance is stabilizing more and more and that the project's continuing to move forward.
But in terms of kind of less extreme consequences, I think there's quite a bit of good
and there's quite a bit of bad. So I would say, yeah, like on the good side, I think the
hack ended up doing wonders for the progress of safe smart contract programming.
After that happened, I noticed that, you know, within the next two months, there were at least five teams that showed up that were all trying out various different approaches at improving smart contract programming safety, whether it's better languages, whether it's better development environments, or whether it's formal verification.
It really was this kind of big, huge sign to the academic community that basically said,
look, this problem is real, and, you know, there's money at stake.
And this is a place where you can contribute, you know, the knowledge that you've been figuring out over the last 30 years right here, right now.
And I think that's something that's gotten quite a few people excited.
And on the negative side, there was obviously a bunch of PR fallout.
I mean, I think it was obviously net negative, but I think the great majority of the negativity is attributable to the hack itself and not necessarily any of the decisions that followed it.
So I was thinking at the time,
even when it was quite controversial, you know,
was this a good thing, was this a bad thing?
You know, some people were saying,
oh, it sort of damages this immutability idea and stuff.
But I think if one takes a step back, if one isn't so deep in this whole crypto community and looks at this, then it was very clear to me that the fork was the choice that would be looked at more positively than just letting the theft proceed, right? Because otherwise it would have been a huge thing, like another Mt. Gox, 150 million stolen. This is much more like, okay, the community rallies together and undoes the theft. And that kind of sounds like a good story, right? If you look at it from the outside, it's like, okay, maybe it's still not so secure, but, you know, at least even in this decentralized network, they can come together and do something about something like that.
If you believe in certain kinds of applied social chaos theory, as, you know, at least some kind of modern philosophers trying to explain things like the financial crisis do, then you would say that a major crisis in any ecosystem is inevitable. And, you know, you would also say that the fact that our major crisis happened at a time when the community was well coordinated enough to basically undo about 85% of the theft is actually a really lucky and amazing thing. And realistically, that's not an opportunity that we're likely to have quite so easily in the future.
Let's take a short break to talk about the Merkle Week, a blockchain training seminar, conference, and hackathon taking place here in Paris from March 9th to 12th.
It's organized by Eureka Certification, and it's an event that is designed to help entrepreneurs, developers, and decision makers gain practical experience using blockchain technologies to build distributed governance in their organizations. So it's a four-day event, and it's broken up into two parts. First, on March 9th, there's a full-day training seminar featuring an impressive list of speakers, including Gavin Wood, William Mougayar, and Peter Todd. You can get the full list of speakers over at merkleweek.com. And as an attendee, you'll get to participate in training courses and demonstrations for Bitcoin and Ethereum. And these are designed to help you build and test blockchain applications meant to enhance operational efficiency in your businesses and organizations. Then, over the weekend, from March 10th to 12th, you can put all that knowledge to practical use by participating in the hackathon.
And here you're going to get to work with other developers, designers, and entrepreneurs, and you're going to come together and work on real live Bitcoin and Ethereum applications under the close mentorship of those leading experts.
And by the way, there's a 10,000 euro prize for the top three teams in the hackathon.
So come join us.
Come spend the weekend here in Paris for the Merkle Week from March 9th to 12th.
Remember, all you listeners in the UK, that's only a two-hour trip on the Eurostar,
so don't miss out.
So get your tickets over at merkleweek.com and be sure to use the promo code EPICENTER
at the top of the checkout page for 30% off your early bird tickets.
And that offer is valid until March 3rd.
So we'd like to thank the Merkle Week and Eureka Certification for their support of Epicenter.
We're going to park the topic of DAOs for the time being, only to pick it up later towards the end of the show.
The DAO fork was perhaps not intended, but what you are intending now is a fork to move Ethereum from proof of work today to proof of stake.
So let's kind of move into a discussion on why that's the plan.
So recently you published this article which outlined your proof-of-stake design philosophy.
And in that article, you basically laid out the grounds for at least attempting the move to proof of stake.
So can you explain why you want to attempt such a transition in a network that is live and has over a billion dollars in value?
Sure.
So I would say proof of stake has a couple of major advantages.
So the one that people bring up a lot is that it really reduces one of the biggest
weaknesses of proof of work, which is the very large and inefficient hardware cost and
electricity consumption.
So if you look at something like the Bitcoin network, it burns hundreds of millions of dollars a year in capital depreciation costs, electricity costs, maintenance costs, all to maintain this network. And the
computations that these miners are doing, like, they're basically just kind of pointless busy work,
right? They're just problems that are created for the sake of being hard. And, you know,
it's not really providing any kind of extra value to society; it's basically just doing this sort of busy work for the purpose of proving to the Bitcoin network that the miner exists and isn't, like, some kind of Sybil attacker. And I mean, personally, I've never really been
comfortable with like that aspect of either Ethereum or Bitcoin. And I've been always kind
of interested in seeing, you know, are there solutions, are there ways to kind of reduce
this inefficient energy consumption.
And back in 2013, I was really interested in various things like proof of storage,
useful proof of work, which is the idea of coming up with a proof of work algorithm
that simultaneously does various forms of scientific computation.
Like you could imagine a proof of work that like simultaneously does some kind of machine learning
or, you know, protein folding or whatever.
There are ideas around, like, proof of excellence, which involves coming up with proofs of humans trying to solve mathematical problems, or other things that are difficult for humans to solve, and there are various other ideas.
And eventually, I kind of came to realize that proof of stake is just, like, the simplest and
most feasible one.
So that's one argument.
And, I mean, it's also important to note that this kind of argument of avoiding waste actually has two sides to it, right?
one of them is just the kind of social arguments that wasting electricity is bad, wrecking the environment is bad and so forth.
And, I mean, on the environmental side, I'd probably say that the external environmental costs of hardware manufacturing are something that's underappreciated, and maybe even worse than the external costs of the electricity consumption.
But the second side of the coin is that if you're not expending as many resources on your consensus algorithm,
then that means that the protocol does not have to issue as many coins.
And that means that the kind of cryptocurrency and a blockchain protocol can be more deflationary.
And like in general, people kind of like that.
Yeah, like, there's definitely a tradeoff, because, you know, if you don't have any block rewards, at least in the context of proof of work, then you don't have enough security to run the blockchain. But at the same time, if you can come up with a way to have higher levels of security and not increase issuance, then that's something that most people are willing to take.
So that's one side.
The other side, which is also interesting, is that my opinion is that proof of stake blockchains actually are more secure against very large and serious attackers.
And the argument that I raise here is that with proof of work, like, okay, you know, there is some cost to producing more ASICs than the rest of the network combined and using those ASICs to pull off a 51% attack, right? And that cost is, like, somewhere around $200 million.
Now, the problem is that if you can do that, then for a fairly small additional increment in cost, you can do what I call a spawn camping attack, which is basically an attack where you 51% attack the blockchain, then as soon as it starts recovering, you 51% attack it again, and then you 51% attack it again, and so forth. And the end result is that you basically destroy all the trust in the system.
Now, generally when you bring this up to, you know, people like Bitcoin Core developers, they say, oh, well, if that starts happening, then, you know, we can just hard fork to a new kind of proof of work, and we can basically make all those ASICs useless. But the problem is then, okay, let's say I'm an attacker who has, like, $250 million or whatever, enough resources to spawn camp Bitcoin once. Well, once you move away from ASICs and onto general purpose hardware, then I can probably spend another, like, $100 million. Like, it's going to be less than 250 because the hardware accumulation is going to start from scratch. But let's say I'll probably be willing to spend another hundred million dollars to 51% attack and spawn camp Bitcoin again.
Now, the problem, though, is that the second time around, you can't hard fork to a different proof of work algorithm anymore, because the second time around, everyone is mining with general purpose hardware. And so if you do more hard forks, then the spawn camping attack is going to be able to continue. So the conclusion of this, basically, is that realistically there actually is a finite cost that a well-resourced attacker can pay to essentially kill off a proof of work blockchain for good.
And, in my opinion, this is actually quite unsettling, right? And my opinion is that one of the really nice things about the kind of cypherpunk spirit in general is that it focuses on this idea of attack-defense asymmetry in cryptography.
So, like, if you look at kind of systems, you know, the world in general right now,
the cost of attack is generally much lower than the cost of defense, right?
Building a building costs $5 million.
Making an IED to blow it up might cost, you know, less than $50,000.
And, like, most kind of adversarial environments in the world actually operate in this way.
But there are a few exceptions, and one of the major exceptions is actually cryptography.
You know, like, one of the really nice things about cryptography is that I personally can sign messages with my private key, and I can do this at a very low cost, you know, the signature costs, like, 0.0001 cents worth of electricity to produce. But the cost of actually cracking that signature is so large that, you know, not even a major national government stands even a chance of doing it. And, like, that's something that's extremely powerful.
But, you know, that kind of cypherpunk spirit, if you look at proof of work consensus systems, doesn't carry over at all, right?
So the cost of attacking a proof of work blockchain is always necessarily going to be less than the cost of defending it.
And, like, it can't be more.
And the reason basically is that, you know, if you want to 51% attack a blockchain, then that means you have to spend more on hardware plus electricity than everyone else combined. But then, oh wait, that already means that if you can spend more money attacking than the network has spent defending, then you can win. And realistically, you can spend much less, because, like, a large portion of those electricity costs have already been spent and you're never going to see them again.
So the nice thing about proof of stake is that I feel like it actually does come close to replicating this kind of cypherpunk spirit, because instead of having this kind of spawn camping vulnerability, you know, sure, someone can 51% attack a proof of stake blockchain. Okay, fine. Now, one of the key properties that we're trying to design into Casper is this idea of what I call auditable Byzantine fault tolerance, which actually does go a bit beyond Byzantine fault tolerance, because auditable Byzantine fault tolerance doesn't just say, you know, if the network broke, that means that more than one third of the nodes are Byzantine. It actually means: if the network breaks, then more than one third of the nodes are Byzantine, and you know who to blame, right? So you have cryptographic proof that you can use in order to show that, oh, you know, these validators are the ones that were responsible for the 51% attack. And what you can do is you can just, like, coordinate a hard fork on the community level, and you can just continue the chain, and those validators can get their deposits destroyed, and you just keep going from there, right?
The cost to the attacker would be something like, you know, $100 million worth of ether, all these deposits that got burned. But the cost to the network would basically just be, oh, hey, it's just an unexpected hard fork. Like, it would maybe be two or three times as bad as what happened back in November when we had that consensus failure between Geth and Parity. But, like, it's not that much worse, right? Like, okay, you know, people would know what happened, people would know what to expect, the blockchain would continue, these validators would get slashed, and life goes on. And sure, the attacker can keep on attacking again and again, but, you know, the attacker would have to buy another 10 million ether and keep on doing this each and every time, right? So the equation is really tilted in favor of the defender here.
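To make the blame-and-slash idea concrete, here is a minimal Python sketch of the kind of evidence-based slashing being described. It is purely illustrative: the validator names, deposit sizes, and message format are hypothetical, and a real protocol would verify actual cryptographic signatures rather than plain dictionaries.

```python
# Illustrative sketch only: two conflicting signed votes for the same
# height are themselves the evidence of misbehavior, so anyone can
# submit the pair and have the offender's deposit destroyed.
# Signature verification is elided; a real protocol would check it.

deposits = {"validator_1": 1000.0}  # hypothetical staked ether

def is_equivocation(vote_a, vote_b):
    """Votes conflict if one validator signed two different blocks
    at the same height."""
    return (vote_a["validator"] == vote_b["validator"]
            and vote_a["height"] == vote_b["height"]
            and vote_a["block_hash"] != vote_b["block_hash"])

def slash(vote_a, vote_b):
    """Burn the entire deposit of a provably faulty validator."""
    if is_equivocation(vote_a, vote_b):
        offender = vote_a["validator"]
        burned = deposits.pop(offender, 0.0)
        return f"{offender} slashed, {burned} ETH burned"
    return "no offence proven"

v1 = {"validator": "validator_1", "height": 100, "block_hash": "0xaaa"}
v2 = {"validator": "validator_1", "height": 100, "block_hash": "0xbbb"}
print(slash(v1, v2))  # validator_1 slashed, 1000.0 ETH burned
```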
And, like, I would even say one of the other nice properties of this kind of approach is that, because such a system would be able to just, like, honey-badger its way through and recover from a 51% attack so well, I would argue that a 51% attack would actually increase the value of the underlying cryptocurrency, because people would realize, oh, wait, you know, there's an attacker, and suddenly a bunch of ether got burned, and so the rest of it is going to be worth more, right? So because of that, I think, like, the process of even trying to buy up enough ether to pull the attack off would end up, ironically enough, increasing the price.
So my conjecture is that people would realize this, and that basically no one would even try doing at least that kind of attack vector at all. And people would focus their energies on relatively cheaper attack vectors, like finding software bugs in operating systems that let them hack into people's computers, or whatever else people can do now.
So that's the general thing that we're trying to go towards.
I mean, there's also obviously a lot of kind of specific things that we wanted to do.
So one of the things that we've been doing a lot in the last probably four months is we've been making a really serious effort at trying to understand, kind of abstractly, incentive compatibility in the context of cryptoeconomic protocols.
So just think in like a very abstract sense, you know, how, given any protocol,
how would you think about figuring out how to incentivize the participants?
And the thing that we realized is that we came up with a method, or, like, a combination of a couple of methodologies, right? Where, like, one of them is this notion of auditable fault tolerance, where you would try to create systems where, if the system breaks and you absolutely know who is at fault and you know that they unambiguously did something bad, then you can just destroy their entire deposit, right? Because you know that they did something bad, and you can penalize them, and if you're going to penalize them, you might as well penalize them all the way to the max.
Now, that's one side of it, right?
And designing consensus algorithms that have this auditable BFT property
is something that's fairly important.
Now, another interesting situation that we came up with is: what if you have a situation where some validators or some participants in a consensus protocol caused the consensus protocol to either fail or have reduced performance, but you don't know exactly which one? Let's say that you can nail it down to one of two. Then the approach that we ended up converging on is, in that case, you penalize both. So if you can nail it down to one of n, you penalize all n of them.
Now, the reason why this is nice is because, like, first of all, it achieves this nice incentive compatibility property. And it also has the really good side benefit of being a very effective fix or mitigation against things like selfish mining, right? Because, basically, you can show that if you follow this methodology, then any deviation from optimal protocol behavior will not be profitable to anyone. And by optimal protocol behavior, imagine something like the Bitcoin or Ethereum blockchain where miners just always create blocks one right on top of the previous, no stales, no uncles, just a straight chain. Like, you can show that if you follow this methodology when you design your incentives, then any deviation from optimal behavior will lead to anyone who might be at fault losing money.
Right.
And so, you know, if you follow this approach,
then all these kind of large classes of attacks
become something that you don't really have to worry about anymore.
And this is a methodology that you could apply to proof of work, you could apply to proof of stake, you could apply in a lot of contexts.
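As a rough illustration of the "penalize all n suspects" rule just described, here is a tiny Python sketch. The validator names and penalty amounts are hypothetical; it only shows the shape of the rule, not Casper's actual incentive formulas.

```python
# Sketch of the rule described above: when a fault can only be
# narrowed down to a set of n validators, all n are penalized.
# Names and amounts are illustrative, not the Casper spec.

balances = {"a": 100.0, "b": 100.0, "c": 100.0}

def penalize_suspects(suspects, total_penalty):
    """Split the penalty evenly across everyone who might be at fault."""
    share = total_penalty / len(suspects)
    for validator in suspects:
        balances[validator] -= share

# A stalled round must have been caused by "a" or "b"; we cannot
# tell which, so both pay.
penalize_suspects({"a", "b"}, total_penalty=10.0)
print(balances)  # {'a': 95.0, 'b': 95.0, 'c': 100.0}
```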
Is this the methodology of incentive compatibility you're referring to?
Well, I mean, incentive compatibility is, like, a generic game-theoretic term that just basically means, you know, the mechanism encourages validators or participants to act in ways that promote the goal that you want to have. But this is our methodology for trying to achieve incentive compatibility.
So this is obviously super exciting, and I very much agree that the proof of work security model is kind of flawed, and if you think in the longer term, it's really unclear how Bitcoin is going to be secure. Yesterday we recorded an episode about, you know, the Bitcoin fee market and Bitcoin Unlimited and how that's going to work. But I think what's clear there, right, is that it's unclear how that's going to work. But what's the timeline here? When do we actually expect Casper to be implemented? And what do you see as some of the risks?
And I'm guessing right now, like, it's hard to tell, but, like, start of next year seems possible.
Like, in general, the kind of pipeline that we have to go through, right, is: step one, finalize the algorithm. Step two, make a test network, and simultaneously do a bunch of, like, academic verification and auditing of the algorithm. Step three, once we're happy with it, implement it across all seven of the clients, then probably run another testnet with it for, like, three or four months, and then finally release.
So each of those stages is something that takes time, could potentially have delays, has its own issues, just as, you know, we went through when we were launching Frontier back in 2015.
And each one of those stages has like some risks to it.
I feel like right now we're getting close to the point where the research and like algorithm specification stage is kind of coming close to resolution.
Now, I mean, I know you had Rick on your show, and, like, I know that was definitely a great episode, and he talked a lot about some other fancy Casper features like subjective consensus and various other things that he and Vlad were thinking about. So one of the challenges is going to be that we're going to have to come up with a red line where we basically say: this is the subset of features that we're happy with for now. And, you know, some of us are going to focus on getting this into Ethereum and making sure that the real live network can benefit from it as soon as possible, obviously subject to safety constraints and so forth. And at the same time, we'll continue research on whether we can improve Casper and make it have more and more of these nice properties over time. And, like, those are two tracks that have already started to happen in parallel, and, you know, I expect they're probably going to continue.
So, like, in general, I think our research, especially in the longer term, is fairly kind of multi-threaded, where, you know, you have, like, myself researching some aspects of Casper on one side, Vlad researching some aspects of Casper on his side, then some research on sharding happening, then some research on protocol economics, then, like, you know, things like making correct incentives for managing contract storage size and general kind of state size, account creation, account deletion, privacy and zero-knowledge proofs, and, like, all those other issues. So all of those are things that we're kind of thinking about at the same time. I mean, any of them, I think, has the risk that it ends up being a much harder problem than we thought.
Then once we get to testing, I mean, I think it's definitely going to be a challenge to actually develop the test network, like, run it, make sure that it does everything to our satisfaction. That's, in general, more of a kind of software development and engineering challenge. And then implementing it across all seven clients, running the tests and so forth, that's another set of challenges.
I mean, look, once we're confident about the algorithm itself, none of the rest has that much fundamental uncertainty in it. It's basically just this kind of fairly long and incremental slog that might take less time and might take more time.
Today's magic word is stake. That's S-T-A-K-E. Head over to letstalkbitcoin.com to sign in, enter the magic word, and claim your part of the listener reward.
One of the terms that one hears related to Casper is this idea of consensus by bet.
So generally like the way I tend to think is once you have a system where you can define
a set of public keys as validators of some form, a lot of systems end up taking the approach
that they go to traditional Byzantine fault tolerance literature, right?
You have consensus algorithms like practical Byzantine fault tolerance and the families that are derived from there.
And so once you have validators defined, you can use all of these traditional algorithms to implement consensus.
But with Casper one hears of this new idea, which is consensus by bet.
And what we'd like to know is what is consensus by bet?
And is this a point of focus for you right now?
So the general idea behind consensus by bet is basically that you can think of validator signatures as being commitments that say, I am willing to get some reward in a history that has property X, where property X might say, you know, it contains some particular block or it contains some particular state route, in exchange for undergoing some penalty in all chains that do not contain X.
And the theory basically is that you can kind of both mimic proof of work, including resolving proof of work's nothing-at-stake issues, and potentially go even further, by basically having a consensus algorithm that consists of validators having the opportunity to make these kinds of bets.
And, like, in the original formulation, you can think of a bet as basically saying: plus X in chains that contain, let's say, some state root S, and minus Y in chains that do not contain that state root. And you can think of validators as having the ability to make these bets at different odds, where you can think of the odds as being, like, the ratio between the X and the Y. So, for example, if X is one and Y is a penalty of minus one, then it would make sense to make that bet if you think that S has at least a 50% chance of being in the history that ends up winning. But if you get a bet that has, you know, X being plus four and Y being minus 16, then that would only make sense if you think there's an 80% chance.
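The arithmetic behind those odds is easy to check: a bet paying +X in histories containing S and losing Y elsewhere has non-negative expected value exactly when the probability p of S satisfies p >= Y / (X + Y). A few lines of Python reproduce the two examples from the conversation.

```python
# Break-even probability for a consensus-by-bet wager: gain X if the
# history contains state root S, lose Y if it does not. Expected
# value p*X - (1-p)*Y is non-negative once p >= Y / (X + Y).

def break_even_probability(reward_x, penalty_y):
    return penalty_y / (reward_x + penalty_y)

print(break_even_probability(1, 1))   # 0.5 -> bet if S is >=50% likely
print(break_even_probability(4, 16))  # 0.8 -> bet if S is >=80% likely
```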
And so the idea is that you give the validators the opportunity to make these bets, and they would start making those bets. Now, initially, you know, there might be a fork, there might be, like, a choice: do you choose state root S or do you choose state root T? Initially, validators would be fairly confused, and they might only make 50-50 bets in one direction or the other, but eventually, once it becomes clear which one's winning, validators would be able to make bets with progressively higher and higher odds on one of them, and eventually they'd be willing to make bets at maximum odds that basically say: in exchange for a medium reward in histories containing S, I am willing to lose all my money in all histories that do not contain S. So they just, like, fully commit their money to this particular chain. And that's when you know that that particular chain, or up to that particular checkpoint, is kind of, quote, finalized. So that was the original idea. Now, we have recently been de-emphasizing that, for a couple of reasons. I mean, one of them is that people in general are
not comfortable with, or at least not fully comfortable with taking on this kind of risk that,
oh, you know, what if something really, really unexpected happens, and what if I made some bets that were 99.99% confident, but it turns out I was wrong, and now I suddenly lose a bunch of money. So, you know, it does impose extra risks on validators, and validators would have to be compensated for those risks.
So that was one concern. The other concern is that one of the properties that we're trying to keep in our algorithms is this notion of bounding the griefing factor.
So what I mean by that is that the griefing factor is basically like a coefficient that says, you know, how easy is it to maliciously attack other validators in the system?
So if the griefing factor is five, then what that means is that there exist ways for malicious actors to spend $1 in order to make some target lose $5. If the griefing factor is, let's say, one half, then that means that the malicious actors would have to spend $2 to make the honest actors lose $1.
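In code, the definition is just a ratio, which makes the two examples above easy to verify. A minimal sketch, with made-up dollar figures:

```python
# Griefing factor: dollars of loss an attacker imposes on honest
# participants per dollar the attack costs the attacker.

def griefing_factor(attacker_cost, victim_loss):
    return victim_loss / attacker_cost

print(griefing_factor(1, 5))  # 5.0 -> $1 spent inflicts $5 of damage
print(griefing_factor(2, 1))  # 0.5 -> $2 spent inflicts only $1
```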
And what I realized is that, if you assume an attacker that controls the majority of the stake, then the griefing factor in this kind of system could, in some models, potentially end up being very high. Basically because, like, the validators would start pushing out their bets toward kind of infinite odds on one side, and then the attacker would suddenly come in with overwhelming odds and flip the bet on the winning state over to some other answer. And then all of a sudden it looks like there's consensus happening around the other answer, but there are people that made all their bets in the original direction, and they all end up losing a bunch of money.
So, like, for several reasons, we ended up de-emphasizing that approach. And the approach that we're thinking of right now, actually, is one that is much closer to traditional Byzantine fault tolerance, except with a few different kinds of changes to the algorithms and a few changes to the security model. So one of the changes to the security model, for example, is that we don't just care about fault tolerance, we care about auditable fault tolerance. I mean, there's also a slightly different definition of liveness. There's also a few other small changes. But that's roughly the approach that we're looking at.
So that's very interesting that you're now looking more towards the traditional Byzantine fault tolerance literature in order to finalize a consensus algorithm. One other difference that struck me as unique in Casper is that in much of the traditional Byzantine fault tolerance literature, like PBFT, those systems prioritize consistency over availability. So what that means is, in case there's a network partition, for example, let's say the communication between China and the West is broken off, then in Bitcoin, blocks will keep being produced on the Western side and on the Chinese side. So that is a system that prioritizes availability over consistency. The system is still available, but you have two different blockchains now. And in the traditional Byzantine fault tolerance literature, many consensus algorithms prioritize consistency over availability, which means no new blocks will be produced. So the system grinds to a halt, but the blockchain doesn't fork. Now, with Casper, what I keep hearing is that you want to prioritize availability over consistency. Can you walk us through why you are making that choice and what part of the traditional literature fits that kind of description?
Sure. So the main reason I'd say why we
care about availability is because, I mean, first of all, in a public blockchain context, like, one third of validators just dropping offline at the same time is a very real possibility. Like, partitions could happen, nodes could just get lazy, lots of things could happen. And saying that if that happens, then the network just, like, halts, is unacceptable. So people really want to have this property of maintaining what proof of work has, where, you know, as long as there are at least some nodes that want to keep the chain going, the chain keeps going.
Now, then of course there's a question of, like, how does that kind of mesh together with traditional BFT algorithms, which are, as you say, consistency favoring. And there are kind of two general approaches to combining the two. So in general, I would describe Casper as being an availability favoring algorithm that also tells you how much consistency you have. Right. So the kind of nice thing about that definition is that, you know, in some sense, you do have kind of as much of both as you can get, or as much of both as, you know, things like the CAP theorem allow you to have.
And the way that this ends up working is, well, once again, one of two approaches, where one of them is that you have some base algorithm that is availability favoring. So, for example, if you look at, like, a lot of the older proof of stake algorithms that rely on this notion of, like, proof-of-work-style validators making blocks on top of each other, that is availability favoring, right? Like, that keeps going even if only 1% of the nodes are online, but it doesn't have any notion of finality. And then basically we would take this availability favoring backbone and you would kind of layer a consistency favoring finality layer on top of it.
And the idea would be that if you have more than two thirds of nodes that are online, then both things would work, and, you know, you would have your availability and you would have your consistency. Now, if more than a third of nodes drop offline, then the consistency favoring finality layer would just stop finalizing. You know, it will repeatedly try and try and try again, and it would fail every time, but the availability favoring chain would keep on going. And what this means is that the chain keeps going, but clients, like, users that use the chain, and even applications, even smart contracts that are sitting on the chain, would all be aware of the fact that they were sitting on a chain which suddenly has lower guarantees of security, at least past some particular point. And they would be able to make their own judgments about how they interpret that. So basically, like, individual applications would almost be able to choose what their own tradeoffs between consistency and availability are.
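A minimal sketch of that two-layer design, with made-up numbers (100 validators, a 2/3 finality rule), might look like this in Python. It is not the Casper specification, just an illustration of a head that keeps advancing while finality stalls:

```python
# Availability favoring chain plus a consistency favoring finality
# layer: the head always advances, but the finalized checkpoint only
# moves when more than 2/3 of validators vote. Illustrative only.

TOTAL_VALIDATORS = 100

class ClientView:
    def __init__(self):
        self.head = 0        # tip of the always-growing chain
        self.finalized = 0   # last checkpoint with a 2/3 supermajority

    def on_new_block(self, votes_for_checkpoint):
        self.head += 1  # the chain keeps going no matter what
        if votes_for_checkpoint * 3 > TOTAL_VALIDATORS * 2:
            self.finalized = self.head  # finality layer advances

view = ClientView()
view.on_new_block(votes_for_checkpoint=80)  # healthy: finalizes
view.on_new_block(votes_for_checkpoint=55)  # >1/3 offline: head only
view.on_new_block(votes_for_checkpoint=55)
print(view.head, view.finalized)  # 3 1 -> chain alive, finality stalled
```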
Now, the second approach has similar properties, but instead of having two separate mechanisms, it has one mechanism. And in that one mechanism, you would have what's called a subjective finality threshold. So a subjective finality threshold basically means that instead of having a fixed, hard, in-protocol threshold, like, for example, you need to have two thirds of all nodes sign a prepare in order for anyone to start signing a commit, you would try to make all of those things endogenous, or you'd make all those things just, like, be choices that get made by validators or get made by users. And so individual users would pick how many prepares they're satisfied by, how many commits they're satisfied by. And the idea here would be that if, let's say, all of a sudden, 40% of nodes drop offline, and if there's common knowledge of this, or if there's approximate common knowledge of this, then the chain can actually keep finalizing things, in the sense that you have this guarantee that says that, as long as the 40% that are offline actually are offline, then, you know, people can lower their finality thresholds, and within that context, you can finalize things. So you have a guarantee that says, you know, either way the chain keeps on going consistently as before.
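As a toy illustration of a subjective finality threshold (the function name and numbers are invented for this sketch): each user measures the threshold against the validator set they believe is actually online, rather than against a fixed in-protocol constant.

```python
# Subjective finality: each user decides how many validator signatures
# they require, relative to the set they believe is participating.

def is_final_for_user(votes, online_validators, user_fraction):
    """Treat a checkpoint as final once votes reach the user's chosen
    fraction of the validators they believe are online."""
    return votes >= user_fraction * online_validators

# 40 of 100 validators dropped offline; 55 of the remaining 60 vote.
print(is_final_for_user(55, online_validators=60, user_fraction=2/3))   # True
print(is_final_for_user(55, online_validators=100, user_fraction=2/3))  # False
```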
That sounds really, really cool, you know, that you have kind of both of those advantages, right? On the one hand, applications know what's going on, and there can be risk assessments made, and, you know, exchanges know, okay, we have to be careful, we have to wait for extra confirmations, etc. At the same time, the chain keeps going, even when there are partitions, even when there are all kinds of issues. I think that really kind of combines the best of both worlds. So I'm really excited to hear that that's possible and that that's the direction you guys are taking here.
Let's take a short break to talk about Jaxx. Jaxx is a multi-coin wallet created by the people at Decentral. Now, in the past, if you had a whole bunch of cryptocurrencies, it was a pain to handle them. You either had to leave them on an exchange, which was insecure, or you had to have all these different wallets, which was a hassle. Fortunately, now with Jaxx, those medieval days of darkness, misery, and suffering are over. Jaxx supports multiple cryptocurrencies, and new ones are being added. But it's not just storing cryptocurrencies you can do with Jaxx; you can also exchange them directly from within the wallet, thanks to their ShapeShift integration. And since there's only one seed, Jaxx makes it super easy to back up and sync to other devices. Jaxx works with Windows, macOS, Linux, Android, iOS, and has browser extensions for Firefox and Chrome. So go to jaxx.io, that's J-A-X-X dot I-O, to download the wallet and get started today. We'd like to thank Jaxx for their support of Epicenter.
Let's move on to another topic that we wanted to cover. So you wrote a really nice short paper, a 30-page paper or something, for R3 about chain interoperability. Now, chain interoperability, I think, is a topic that's become much more present; there's much more attention on it. We have a whole bunch of projects in this space. I mean, there's, of course, Ripple with Interledger, which has been working in this area for a long time. There are also some more novel proposals, like the Polkadot proposal by the Ethcore team, and then also the one I am partially involved in, which is the Cosmos proposal. So there's a whole bunch of different ones. So would you mind running us through just what are the main challenges and approaches to making blockchains really interoperable, so that one can, you know, move value seamlessly around and build applications that may involve components that live on different blockchains?
Sure. So in general, as I described in the paper, there are several kind of major ways that you can achieve interoperability, and there are several major categories of things that you can use interoperability for. And there are also different ways of making the categorization. So there's the kind of more computer-science-theoretic characterization, which is about what kinds of relationships between events on chain A and events on chain B you want to create. And then there is the more kind of application-layer one of, you know, what exactly are you even using this for? So then there are the various different technologies. Right. So first, I talk about notary schemes, which are basically kind of multi-sig federations. And that's a trust model that's fairly simple to understand. You know, if you trust the majority of the people in the federation, then you can kind of trust that federation to say what happened on each chain: if something happens here, do something there, and if something happens there, do something here.
Then the second model I describe is this concept of hash-locking, which is a generalization of the Tier Nolan protocol from the Bitcoin forums back in 2012 and 2013. And the idea behind hash-locking is basically that you use this kind of scheme where you make an event on chain A and an event on chain B both be dependent on someone revealing a secret number that has some particular hash. And the idea is that if the number gets revealed, then, you know, either party would be able to paste the number in and make things happen on both chains. And if that number doesn't get revealed, then neither of those things can happen. And if you try to make the event happen on one side, then the process of doing that reveals the secret number, and so the other party can take the secret number and kind of transplant it into a transaction on the other chain. So it's a fairly simple technique, and it can do quite a lot; it can do cross-chain exchanges, for example. But it does have one major limitation, and the way that I describe the limitation is that it can do what I call cross-dependency, but it can't do what I call causation. Right? So cross-dependency basically says you make an event on chain A and an event on chain B both be dependent on some other event C. And in this case, C is revealing some secret number. But what it can't do is make an event on chain A generally cause an event on chain B. So, for example, in particular, if there's an event on chain A that's not caused by an individual but by a smart contract, then, you know, smart contracts can't keep secrets, and so this protocol can't really work at all, right? So in those cases, you have to move beyond hash-locking into other constructions.
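Here is a minimal Python sketch of the hash-locking idea in the Tier Nolan style. Timeouts, signatures, and refund paths, which any real implementation needs, are omitted; the point is just that claiming on one chain necessarily publishes the secret that unlocks the other.

```python
# Hash-locking sketch: events on two chains both depend on revealing a
# preimage with a known hash. Claiming on chain A publishes the secret,
# which lets the counterparty claim on chain B. Timeouts and refunds
# are omitted for brevity.
import hashlib

def h(preimage: bytes) -> str:
    return hashlib.sha256(preimage).hexdigest()

class HashLockedPayment:
    def __init__(self, hashlock: str, amount: int):
        self.hashlock = hashlock
        self.amount = amount
        self.claimed = False

    def claim(self, preimage: bytes) -> bool:
        if not self.claimed and h(preimage) == self.hashlock:
            self.claimed = True
            return True
        return False

secret = b"my secret number"                     # known only to Alice at first
lock = h(secret)
on_chain_a = HashLockedPayment(lock, amount=10)  # Bob pays Alice on A
on_chain_b = HashLockedPayment(lock, amount=20)  # Alice pays Bob on B

assert on_chain_a.claim(secret)  # Alice claims on A, revealing the secret
assert on_chain_b.claim(secret)  # Bob reuses the now-public secret on B
```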
And the third major category of technology that I talk about is relays. And, you know, we've already had BTC Relay for about a year, which is basically a Bitcoin light client that lives inside of the Ethereum blockchain. So Ethereum contracts can verify Bitcoin transactions, and they can do things conditional on Bitcoin transactions taking place. And this allows for this other kind of causal interoperability between the Bitcoin blockchain and the Ethereum blockchain, where events on the Bitcoin blockchain can directly trigger events on the Ethereum blockchain.
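The heart of a relay like BTC Relay is light-client verification: given a trusted Bitcoin block header, a contract can check a Merkle branch proving that a transaction was included in that block. A hedged Python sketch of that check (Bitcoin's byte-ordering conventions are simplified away here):

```python
# Merkle branch verification, the core check a relay performs: walk
# from a transaction hash up to the block header's Merkle root.
# Bitcoin uses double SHA-256; endianness details are simplified.
import hashlib

def dsha(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def verify_merkle_branch(tx_hash, branch, merkle_root):
    """branch is a list of (sibling_hash, sibling_is_right) pairs,
    ordered from the leaf up to the root."""
    node = tx_hash
    for sibling, sibling_is_right in branch:
        pair = node + sibling if sibling_is_right else sibling + node
        node = dsha(pair)
    return node == merkle_root

# Toy block with two transactions: root = dsha(tx1 || tx2)
tx1, tx2 = dsha(b"tx one"), dsha(b"tx two")
root = dsha(tx1 + tx2)
print(verify_merkle_branch(tx1, [(tx2, True)], root))   # True
print(verify_merkle_branch(tx2, [(tx1, False)], root))  # True
```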
So I talk about all three of those technologies in depth.
I talk about like what you can do with causality and what you can do with cross-dependency.
So for example, with cross-dependency, you can do cross-chain exchange, but you can't move assets across chains.
So, like, you can't do the equivalent of a sidechain. But with causality, you can basically do everything. And, you know, I talk about sidechains, I talk about Fedcoin-like things in private chain contexts, and I talk about making smart contracts in one chain that are connected to assets in another chain, and various other use cases.
One of the things you were writing about in there, an observation you made in your paper that I thought was very interesting, was your point that, you know, as soon as you start moving assets between chains, there's always a little bit of risk there, right? So there might be some attack vectors: something can be suppressed, or one chain gets DDoSed, or, you know, maybe one chain gets 51% attacked. So there are all kinds of things. And so, in a way, the security of those assets can get, you know, a little bit weakened, and maybe it's not as strong on the chain they're moved to as opposed to the chain where they originated. So what I'm curious about is, if we look really far into the future, if you think of thousands and thousands of assets issued on blockchains, and there being kind of seamless exchange between all kinds of assets: do you think those will be issued on lots of different chains that tend to be controlled by maybe the parties responsible for the asset? Or do you think that this issue of moving assets around chains is big enough that there will be a strong incentive to issue many of them on some central chains that maybe have a lot of security, and then move others onto those chains or onto third chains? So that there's a sort of effect where assets and asset issuance and management concentrate on a few chains. Do you think we will see that?
I think, first of all, you have to distinguish between different types of assets. So, for example, you have issuer-backed assets, and then you have kind of pure cryptographic assets like bitcoin and ether. And one of the things with issuer-backed assets is that there is an issuer, and if there's an incentive to do it, then the issuer can just issue many different versions of the asset on just about every chain that they care to support, right? Like, you can issue, you know, gold-backed tokens on Ethereum, on Counterparty, on Ripple, on NXT, on BitShares, and on any system that supports people being able to issue tokens, at the same time. And realistically, you might as well do that for any system where, you know, the potential revenues for the issuer are greater than the costs of basically doing the integration and teaching their customer support staff how to handle that particular chain. So that's something that I think issuers definitely are going to do.
Now, in the case of cryptographic assets, obviously you can't do that. And I think there are going to be some situations where, like, you don't want every cryptographic asset issuer to actually be in the business of, you know, keeping track of and supporting all of these chains necessarily. And I think, especially for smaller chains, the approach of using sidechain-like techniques, where you actually do have portable assets and you can kind of move them from chain A to chain B and then back to chain A, I expect that to be a paradigm that does end up having some merit. Although, in general, I think there are going to be large categories of assets that just have one home chain, get used on that one home chain, and no one really tries transplanting them anywhere.
Okay.
And so we just did an episode about Cosmos as well. And, you know, the architecture of Cosmos is that there's kind of this hub, which is connected through essentially sidechains to all kinds of other chains. So do you think that's a model that will see traction, that makes sense? Or do you think it will be more about having maybe bilateral connections between all kinds of different chains, and not having this almost central hub, or sort of a decentralized hub, I guess, in the middle?
So I think one very specific area where a kind of decentralized cross-chain solution is really needed, and is really going to have a lot of value, is specifically exchange between cryptographic assets. Right. So I don't even mean proper portability; I specifically mean trading A for B, like ether for dogecoin or whatever. The reason basically is that, you know, right now we have centralized exchanges for that. Centralized exchanges get hacked easily, they have fairly high fees, they have all sorts of annoyances to them. And it would be really nice if we could just have a decentralized solution. Right. So I think that's something that could be potentially very promising if done well. And I mean, I don't expect there to be one solution to rule them all there. I expect people to try coming up with various different solutions, and some of them taking off in some contexts. I mean, to some extent there are network effects here, because, you know, if you have one system, then it's much easier to connect more blockchains to that one system than it is to make an entirely new system from scratch. But at the same time, you know, you have things like BTC Relay that can just focus on one link and do it well. So I think we'll see some of both.
Cool.
So, yeah, the internet of blockchains is going to be, I think, one of the big themes in the future. And it remains to be seen how exactly it plays out, whether it's a hub-and-spoke model or chains interacting directly via things like BTC Relay.
Moving on from that topic, I'd like to jump to the theme of applications. Now, I think a few months back you wrote a blog article in which you laid out your view on what applications blockchain technology is going to be good for, right? And you had this idea that blockchain technology won't have very large killer apps but will enable a long tail of small applications. So I would like to revisit that idea and perhaps have you explain it in your own words first.
Sure, I'll answer. So, first, it's important to note that the reason why Ethereum exists in the first place, and why I started working on it, is because I realized that there is such a large number of different blockchain applications that you can't just create a blockchain protocol for every single one of them. And, like, you can't try to explicitly target applications one by one and target a feature for each one. The only way that you could really target the generality is by taking the Ethereum approach and just creating a programming language. Right? So I would say the idea for Ethereum, even by itself, started with this kind of vision of a very diverse array of blockchain applications that are each individually not significant enough to be worth their own blockchain, but collectively very important. So I started realizing that, you know, there are all these different kinds of applications. And after the Ethereum project became public, and as people started discovering more and more things that you could do with it, that opinion that I had just kept on growing. And at some point I realized that, you know, if people asked me, what is the killer app of Ethereum, there aren't really any good candidates. And then I started asking myself the other question: wait, but if there aren't any killer apps, does that mean that Ethereum is worthless? And then I realized: oh, wait, you know, the ideas that Ethereum brings in, and the implementation of those ideas, are valuable, but the value doesn't come from any one single application. The value comes from all the applications put together and the interactions between them.
And so, you know, it's about the fact that you can have digital assets on the blockchain, and you can have your company shares be on the blockchain. And once you have a digital asset on the blockchain and you have company shares on the blockchain, then all of a sudden it becomes trivial to do, let's say, an equity crowdfunding, by just doing a smart contract that automatically issues shares in exchange for, let's say, Digix gold tokens, or some blockchain-based U.S. dollars or whatever. And, you know, if you look at identity management on the blockchain, if you look at certificate revocation on Ethereum, and all of these different use cases, you start realizing that all of them really do seriously complement each other. And, like, the killer app, in some sense, is this kind of combined vision of all these things working together that some of us call Web 3.0.
Yeah, I would agree with that, maybe with the one addition that money, and the sort of digital gold and electronic cash, may be a kind of killer app on its own, even without those other things. Although, of course, those other things also enhance the utility and power of that. Would you see that the same way?
I'd agree. I mean, I think the addition of the economic layer to the set of things that you could do in a decentralized way is very important and fundamental.
Well, Vitalik, we are kind of at the time limit, and we had a lot of other stuff we wanted to talk about. Actually, we wanted to talk about sharding, and we wanted to talk about zk-SNARKs and how those are coming to Ethereum. So we won't have the time this time, but hopefully we can have you on again soon to do that. I also hope that the recording worked out well, because there were a few connectivity issues, but hopefully with the local recording it should be fine. So thanks so much, Vitalik, for coming on. And thanks so much to the listeners for tuning in once again. Epicenter is part of the Let's Talk Bitcoin network. You can find this show and other shows at letstalkbitcoin.com, and if you want to support the show, you can do that by leaving us a review on iTunes. That helps new people find the show, and it's very much appreciated. So thanks so much, and we look forward to being back next week.
