Epicenter - Learn about Crypto, Blockchain, Ethereum, Bitcoin and Distributed Technologies - Vlad Zamfir: Bringing Ethereum Towards Proof-of-Stake with Casper
Episode Date: November 16, 2015

With Proof-of-Stake (PoS) a blockchain is secured not by spending an external resource such as electricity but by using value internal to the chain itself. The promise of higher security at a lower cost is what also drives Ethereum to plan a move away from Proof-of-Work to PoS in the future. Ethereum researcher Vlad Zamfir, who leads their effort in the complex search for the optimal PoS consensus system, joined us for an in-depth discussion of the challenges of PoS, the approach Casper is taking, and what consensus in Ethereum land could look like in the future.

Topics covered in this episode:
- How he became interested in consensus and Proof-of-Stake (PoS)
- Why Ethereum plans to switch to the PoS system Casper in the future
- What the Long-Range Attack and Nothing-at-Stake problems are and how Casper addresses them
- How a betting mechanism is used to get validators to reach consensus
- What centralization pressures exist for validators and what operating a validator will look like
- The role of security deposits in Casper
- The difference between Tendermint and Casper
- Whether forks create big UI/UX issues for Casper
- Why Casper is very light client friendly

Episode links:
- Introducing Casper - Ethereum Blog
- Review of Casper, Ethereum's proposed Proof of Stake Algorithm
- Casper Protocol Specification
- Slasher, Ghost and Other Developments in Proof-of-Stake
- Epicenter Bitcoin Episode 58 with Vitalik Buterin on Proof-of-Stake

This episode is hosted by Brian Fabian Crain. Show notes and listening options: epicenter.tv/105
Transcript
This is Epicenter Bitcoin, episode 105, with guest Vlad Zamfir.
This episode of Epicenter Bitcoin is brought to you by hide.me.
Protect yourself against hackers and safeguard your identity online with a first-class VPN.
Go to hide.me/epicenter and sign up for a free account today.
A show which talks about the technology, startups, and projects driving decentralization and the global cryptocurrency revolution.
I'm here today with Vlad Zamfir.
Vlad is probably known to some of you because he's the researcher at Ethereum who's been behind the sort of next-level protocol work of bringing proof of stake to Ethereum, because it's been a project of theirs for a while to switch away from proof of work to proof of stake.
And that's exactly the thing Vlad is working on.
He's also just got back from London, where DevCon was taking place.
I'm sure many of you have heard about it as well. It was the very first sort of pure Ethereum conference, a full week. So yeah, thanks so much for joining us today, Vlad.
Thanks for having me.
Yeah, so how was DevCon?
It was awesome. I was actually kind of shocked at how much more sophisticated and larger this was than DevCon 0, which was just over a year ago. We've come a long way in one long year.
Yeah, I mean I was at DevCon 0, at least for part of it, and that wasn't exactly a huge thing.
And now, how many people were there?
I think around 300.
Yeah, that's definitely impressive, and it's great how far the project has come since then.
Yeah, it's kind of shocking to look back.
So when did you get originally involved in Ethereum?
So I got involved at the Toronto Bitcoin Expo, back in April of 2014.
That's when I really... I had learned about Ethereum a couple of months prior to that, but that's when I really got involved.
That's when I met most of the Ethereum team and started volunteering for them on a full-time basis, or basically as an obsessive hobby.
Yeah.
And so was your interest, from the very start, especially in consensus and how the Ethereum network reaches agreement?
No, no.
Actually, at first I started working on kind of higher levels of the stack. I was working on distributed applications and the infrastructure that distributed applications would need, for example file sharing and reputation systems.
And only after I started learning a lot more was I even able to get involved with the consensus protocol discussions. And then I started going down that rabbit hole. Then, around September last year, I realized that proof of stake is possible, that we could actually secure a blockchain with digital signatures.
And then I started learning a lot more and going down that rabbit hole, which turns out to be much more complex and involved than I ever would have realized.
So what was it about consensus and the possibility of switching to proof of stake that fascinated you so much?
Well, so I mean, consensus protocols are key to this kind of decentralized technology, right?
And proof of stake specifically is interesting because it is much more economically efficient: you can have the same level of security at a much lower cost, which is appealing to me.
And also, there are some aspects of blockchain scaling that are much easier with proof of stake.
And blockchain scaling has been important to me since before proof of stake became important to me.
Right. So did you study computer science before?
No, my background is in mathematical statistics.
Okay, so I guess that's actually quite related, no? Especially the way Casper works, and we'll get to that, with the probabilistic models you're using and that sort of economic approach to it.
Sure, yeah, it's a little bit tangentially related.
Okay, so starting with Casper. We've had some people on the show, for example Adam Back and Greg Maxwell, and when I asked them about proof of stake they were like, oh, this is impossible, it doesn't work, it's been proven that it doesn't work.
And there are some problems, right, that people have talked about with proof of stake for a long time.
The two best known of those are what's called the long-range attack and what's called the nothing-at-stake problem.
And so obviously with Casper you have to solve those in some way. Maybe we can start by talking about these problems briefly and how you address them.
Sure. So actually we solve both problems with the same mechanism, which is to use security deposits to secure the consensus.
And just for the people that don't know, can you also run us through what those problems are and what those attacks actually look like?
Yeah, sure. So the nothing-at-stake problem is basically the problem that you have no
disincentive from being Byzantine in a traditional proof-of-stake protocol. Because signatures are easy to produce, and because you aren't punished for signing multiple blocks on multiple chains, your incentive is to sign off on forks when they come up.
So the nothing at stake problem is the problem that you don't really lose anything from behaving badly.
Right.
Right. So if you compare with Bitcoin: if there's a fork in Bitcoin, you can't mine on both chains. Well, I mean, you can, but you have to split your hashing power.
But if it's just a signature, then what stops you from signing on two chains at the same time when you have a fork?
Then the problem, I guess, would be how do you know the forks are going to come back together again if you can have them run at the same time?
Yeah, that's right.
And so, to be a little bit more explicit: if you split your mining power due to uncertainty about which fork will win in Bitcoin, you'll surely lose one of those portions of mining power.
And so you have a really strong incentive to mine mostly, if not entirely, on the chain that you believe will be the one that's successful.
Whereas in traditional proof of stake, your incentive is to sign everywhere, because it doesn't cost you anything and you're going to get returns on either chain, if there are returns on any.
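To make that incentive gap concrete, here is a toy Python sketch of the expected rewards just described. The reward figure and win probabilities are illustrative assumptions, not protocol values.

```python
# Toy illustration of the nothing-at-stake incentive (illustrative numbers).
# In proof of work, hash power split across two forks is partly wasted;
# in naive proof of stake, signatures are free, so signing both forks
# strictly dominates picking one.

REWARD = 25.0  # block reward on whichever fork wins (assumed figure)

def pow_expected_reward(split_to_a, p_a_wins):
    """Expected reward when hash power is split between forks A and B.
    Only the work spent on the winning fork pays off."""
    p_b_wins = 1.0 - p_a_wins
    return REWARD * (split_to_a * p_a_wins + (1.0 - split_to_a) * p_b_wins)

def pos_expected_reward(sign_a, sign_b, p_a_wins):
    """Expected reward for a naive proof-of-stake signer: signing a fork
    costs nothing, so signing both collects the reward either way."""
    p_b_wins = 1.0 - p_a_wins
    return REWARD * (int(sign_a) * p_a_wins + int(sign_b) * p_b_wins)

# A 50/50 fork: splitting hash power wastes half of it in expectation...
pow_split = pow_expected_reward(0.5, 0.5)         # 12.5
# ...but a naive PoS validator loses nothing by signing both chains.
pos_both = pos_expected_reward(True, True, 0.5)   # 25.0
pos_one = pos_expected_reward(True, False, 0.5)   # 12.5
```

Under these toy numbers, signing both forks doubles the staker's expected reward, which is exactly the incentive the security deposits below are designed to remove.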
Okay, great.
And what about the... oh, so how is this problem solved?
So you mentioned security deposit.
How exactly does that work?
Yeah, so the idea is that to produce blocks,
you have to place a security deposit.
And if you behave in a demonstrably Byzantine manner,
then some or all of your security deposit
will be removed from you.
And so there will be something at stake.
I mean, the most kind of naive basic thing to say
is if you sign blocks on both chains,
then someone will observe that, produce a little thing
we call an evidence transaction, which includes
the proof that you did this,
and then that would lead to the forfeiture
of your security deposit.
So the idea is that there is something at stake
as this large security deposit.
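A minimal Python sketch of the evidence-transaction idea just described. The data model here (a deposits table and (height, block hash) signature pairs) is a hypothetical simplification for illustration, not Casper's actual format.

```python
# Minimal sketch of slashing via an "evidence transaction" (hypothetical
# data model). If a bonded validator signs two different blocks at the
# same height, anyone who observes both signatures can submit them as
# evidence, and the validator's security deposit is forfeited.

deposits = {"alice": 1000, "bob": 1000}  # bonded security deposits

def evidence_transaction(validator, sig1, sig2):
    """sig1 / sig2 are (height, block_hash) pairs the validator signed.
    Returns the amount slashed (0 if the evidence proves nothing)."""
    height1, block1 = sig1
    height2, block2 = sig2
    # Demonstrably Byzantine: two distinct blocks signed at one height.
    if height1 == height2 and block1 != block2:
        return deposits.pop(validator)  # forfeit the whole deposit
    return 0

lost = evidence_transaction("alice", (42, "0xaaa"), (42, "0xbbb"))
# alice loses her 1000 deposit; bob, who signed only one block, keeps his.
```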
And how does that translate into the long-range attack problem?
Or what exactly is the long-range attack problem?
So the long-range attack problem is a problem
that if an adversary somehow gets control of, say,
a key that held most of the coins at some time in the past,
they could use that to create blocks from the past,
creating a big chain that ends up having a higher score
than the chain that is currently the consensus chain.
So the long-range attack problem has to do with the fact
that digital signatures are only secure, economically
secure, so long as they have coins behind them.
And then over time, keys that used to have coins on them,
no longer have coins on them.
And so they're not really economically secure anymore.
There's no cost you really need to pay to compensate someone for having them.
And so the long-range attack problem is that people will use old keys to produce chains with higher scores, in order to kind of revert history and affect the consensus.
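As a rough sketch of the attack, assume a naive longest-chain scoring rule and represent chains as simple lists of block labels (both assumptions are mine, for illustration only):

```python
# Sketch of the long-range attack against a naive "longest fork" rule:
# old keys that held a majority in the past can re-sign history from the
# genesis block with near-perfect participation, producing more blocks
# than the real chain ever had.

def fork_choice(forks):
    """Pick the fork with the most blocks (naive longest-chain rule)."""
    return max(forks, key=len)

honest_chain = ["genesis", "b1", "b2", "b3", "b4"]            # real history
# The attacker's rebuilt fork starts from the same genesis block but has
# more blocks, because every old key "participates" perfectly.
attack_chain = ["genesis", "a1", "a2", "a3", "a4", "a5"]

winner = fork_choice([honest_chain, attack_chain])
# The naive rule prefers the attacker's longer fork, reverting history.
```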
So basically it could be like at the beginning of the chain, right, there's some keys that hold the deposits of the validators and the bonds.
And then, you know, people move away from that.
And, you know, I go to you and say, hey, you had these keys in the beginning.
Why don't you give them to me? They're worthless to you anyway.
I'll pay you something. And I'll do that with a lot of people, and all of a sudden I have a majority at that stage, and then I can start creating a chain from there and sort of overtake the real chain. And all of a sudden I have a parallel chain that's actually fake, but it looks as if it's real.
Yeah, and I mean, it's not fake. It is real, and it's just better than the one that exists, that everyone is on.
It just wasn't the consensus until you showed it to everyone, and then everyone would switch over to it, because it's a better history.
How would it be a better history?
In the sense that the fork choice rule gives it a higher score.
So, you know, in Bitcoin we have this fork choice rule that says the heavier chain, i.e. the one with the most proof of work, is the one that you should choose if you're a client trying to figure out what the consensus is.
And then similarly in proof of stake, traditional proof of stake protocols,
you would have like a longest fork rule.
Can you explain that?
What would that look like, the longest fork rule?
Yeah, I mean, basically, everyone would adopt the fork that is just longer, that has more blocks in it.
And you'd be able to make a longer fork with these keys from the past because you have most of the keys, right? And you can make your fork have more participation than there actually was in the main fork.
Okay, so that's because if you have all the keys... whereas otherwise the problem is latency, because the keys are spread across the globe, or maybe the participation rate, because people don't really always stake, and that slows it down?
Yeah.
Okay, interesting. And so how do the security deposits... I mean, it makes sense how they address the nothing-at-stake problem, right, by punishing people who abuse that and sign on several chains, but how do they solve the long-range attack problem?
Sure. So basically what happens is that you will only trust a signature from someone who
you know currently has a security deposit. And so actually the authentication model is somewhat different, because instead of authenticating the current state of history based on the genesis block, we're going to use the people who currently have security deposits in order to authenticate the consensus, and download whatever changes or whatever state is required in order to synchronize with the consensus. So instead of authentication ending in the genesis block, it ends in a much more timely piece of information, which is the people who have security deposits now. So you have a list of currently bonded validators, and those are the ones who can sign to help you synchronize to the consensus.
So that's important, because it prevents long-range attacks: we never use keys that no longer have deposits on them for anything, ever.
And then the way you stay synced up is that the next set of validators is signed in by the current set of validators. And so you can stay synchronized once you've synchronized once.
Now, before you have the list of bonded validators, you're going to need to get that information out of band before you can synchronize to the consensus.
It kind of serves as one public key for the entire consensus, and so everyone in the world who wants to synchronize will need to have this list.
More realistically, it'll be a hash, against which you can provide Merkle proofs. If you have this hash or this list, then you can synchronize.
And everyone needs to make sure that they have the canonical consensus hash or list.
And I don't think that this is actually too terrible, because public key cryptography is hard and people don't like authenticating public keys, but it's going to be a lot easier when there's only one key that everyone needs to authenticate, because then we can use publishing. And we have lots of reliable means of publishing.
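A sketch of treating the bonded-validator list as "one public key for the entire consensus". Real clients would verify Merkle proofs against the hash; the flat hash commitment below is a simplifying assumption:

```python
import hashlib

# Sketch of authenticating the current validator set against a hash that
# the client obtained out of band (the "one public key" of the consensus).
# A flat SHA-256 over the sorted list stands in for the Merkle root a
# real client would use.

def validator_set_hash(validators):
    """Deterministic hash commitment to a validator list (order-free)."""
    data = ",".join(sorted(validators)).encode()
    return hashlib.sha256(data).hexdigest()

def authenticate(claimed_validators, canonical_hash):
    """Accept a peer-supplied validator list only if it matches the
    canonical published hash."""
    return validator_set_hash(claimed_validators) == canonical_hash

# The client learned this hash out of band (published, widely replicated).
canonical = validator_set_hash(["alice", "bob", "carol"])

ok = authenticate(["carol", "bob", "alice"], canonical)     # same set: True
bad = authenticate(["alice", "bob", "mallory"], canonical)  # wrong set: False
```

A long-range fork would come with a different validator list, so it fails this check even if its chain scores higher.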
And there are other things. For example, if someone wants to take a payment from you, they don't have an incentive to give you a false hash, because they want to accept payment on the authentic chain.
Right.
So because I mean, let's say someone did make a long range
attack here.
And now I have a list.
And I have a chain too, and on that chain, I have all the validators.
I mean, and then I could go to you and say, hey, you know, here's the chain.
I have the validators.
So presumably the security you would have is: if you know beforehand who the true list of validators is, then when I come to you with my long-range chain, you'd say, no, these are the wrong validators.
You know, I don't trust this chain.
I only trust a chain that has the same set of validators.
Yeah, that's right. I mean, that's exactly the idea.
And so you, yeah, you just have to stay synced up, and trust some source to get that correct list of validators.
Yeah, definitely. But if you're staying synced up, then there is no trust there, in the sense that you can authenticate all the changes to the set of validators using these kind of economic proofs, right? You can basically rely on them, because if they weren't true, then these people would lose large amounts of security deposits.
Yeah, I mean, to be honest, this seems like a very reasonable thing to me. The question, I guess, will be how you technically implement that, right?
How does a client actually get that list? Because of course people presumably won't be doing that manually; there will be some mechanism in the client that automatically fetches this list from somewhere. And a lot will depend on that.
So that's a good question, right?
And this is something that we kind of talk about.
One idea for most users is that when you get your client (firstly, because you don't actually have any ability to audit the code anyway), it would just come with the most recent list of validators.
But for a user, who's like a power user,
you'd want to authenticate that yourself out of band.
And you'd want to go and look through various places
where people are talking about and publishing these things
and make sure that you have the same one that everyone else does.
And basically, the user experience for that
is still to be seen.
I'm sure there are solutions that are super secure
where the client kind of just asks you to find
the most recent set of validators.
But from a protocol point of view, it's totally necessary, because the set of validators changes over time; the set of people with security deposits changes over time.
And there's no way for you to authenticate the set at time T100 if you only know the set of validators at, like, T50, because it could be an entirely different set, and you can't rely on the signatures from the first set to figure out the second one.
So can validators change, like, all the time? Because, I guess, if you were at T50, and you say all validators have to bond for at least six months or something, right?
Then you could tell, to some accuracy, whether a list is valid at a later stage, to the extent that the same validators must be included, because they just can't withdraw immediately.
Is there a mechanism like that as well?
Yeah, totally.
I mean, that's totally right.
So it's totally cool for new validators to come in, because your validators
are still in there
and you still have all the security you
did before the new validators came in.
It's when
the validators that you know about unbond
that you need to start to worry.
If we have some
fixed rule about how long you
must be bonded for, then for any
period longer than that,
there's no guarantee, there's no like ironclad
guarantee that the validators will still be
bonded. So they might be
and they probably in practice likely will be
because in practice, if they're profitable,
they'll stick around.
And if they're not, they eventually won't.
And so some of the validators will be around
from one bonding period to the next.
But if, say, 10% of the validators that you know about are gone, maybe that's not the worst thing in the world.
But if 90% of them are gone, then that's quite a disaster
for your security and authentication.
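The "how many of the validators I know are still bonded" check described here can be sketched as follows. The thresholds and the three-way verdict are illustrative assumptions of mine, not part of Casper:

```python
# Sketch of a client judging sync safety by how much of its last known
# validator set is still bonded (thresholds are illustrative, not spec).

def still_bonded_fraction(known, current):
    """Fraction of the client's known validator set still bonded now."""
    known, current = set(known), set(current)
    return len(known & current) / len(known)

def sync_safety(known, current):
    frac = still_bonded_fraction(known, current)
    if frac >= 0.9:
        return "safe"                # e.g. 10% unbonded: not the worst thing
    if frac > 0.1:
        return "degraded"
    return "resync out of band"      # e.g. 90% gone: authentication is lost

old_set = ["v1", "v2", "v3", "v4", "v5", "v6", "v7", "v8", "v9", "v10"]
mostly_same = old_set[:-1] + ["v11"]                # one validator replaced
mostly_new = ["v1"] + [f"n{i}" for i in range(9)]   # nine validators replaced

a = sync_safety(old_set, mostly_same)  # "safe"
b = sync_safety(old_set, mostly_new)   # "resync out of band"
```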
But basically, to keep it simple, we should require that clients know 100% of the validators all the time.
There are more complicated protocols that don't have that property, where you may know less than 100% of the validators, but there are perks, like more people being able to bond at any time, which could be used defensively.
But this gives you an idea of how just this one basic design decision has details that need to be thought through carefully: exactly how people are allowed to bond and unbond, how clients react to that, how they authenticate based on how many of the validators they know about are still online, how they find out about changes to the validator set. These are all questions that we have solutions for, but multiple solutions for.
Yeah, I mean, to be honest, if you just look at it from a high level, it seems like a reasonable and sound solution. So I'm curious: a lot of people, sort of old-school Bitcoiners, people who are very deeply in the space, and also people who are super smart from a research perspective and aren't so much invested in Bitcoin as a currency, are very skeptical about proof of stake. Why do you think that is?
So it's pretty easy to understand why you can use resources that are external to the consensus protocol as an anti-Sybil mechanism, right? The assumption is that the Sybil doesn't have as many resources as the honest network, and so if you size up people's resources outside the network, then the majority is probably not Sybil. That's kind of the assumption.
It's a little bit harder to wrap your mind around how you can use something inside the consensus as an anti-Sybil mechanism to secure the consensus. But I'm actually quite comfortable with this.
And one intuition for why you should be comfortable with it is that Bitcoin is actually only economically secure so long as Bitcoin has a price in the first place, because the hash power won't be high unless people are compensated for their expense with something that they can sell to pay for their costs.
And so you really have this phenomenon that if Bitcoin has a price, then Bitcoin is secure, and if Bitcoin is secure, then it can have a price. So there's a bootstrapping that's required, and that same mechanism could be used for an asset that's inside the consensus being used to secure the consensus.
Because there's an asset in there, and we still assume the Sybil won't have a majority of the wealth. We might be just as comfortable assuming that inside the protocol as outside the protocol.
And because it has a price, they're going to be disincentivized from behaving badly if they place a security deposit, which has a price, and which can be removed by the protocol.
Yeah, that's an interesting analogy.
That's the first time I heard of that, but actually it makes a lot of sense to me.
And I guess the challenge here, too, is that if you try to explain proof of work to sort of normal people, quote unquote, who are trying to understand how Bitcoin works, I think it's actually possible to do that pretty well. You can say it's sort of a lottery and a betting game, and I think that works fairly well.
But how do you do that with Casper, or with proof of stake? Do you try to explain, to people who ask you what you're working on but aren't deeply from this industry and ecosystem, how Ethereum is going to be secured when the security is just internal to the system?
Yeah.
So I mean, surely it's a lot more complicated.
And it is harder to explain.
But people usually find it pretty intuitive that the system uses security deposits to punish bad actors.
That's actually the main thing that distinguishes proof-of-stake security from proof-of-work security: you can make the consensus expensive for adversaries only when they're attacking, which is very different from the Bitcoin system, which makes it expensive for everyone, all the time, just in case at some point an adversary would want to spend some 50% or more of that.
So I think that people on a high level find it easy to understand.
On a low level, when you start to get into all the details, it becomes much more complicated.
And that's when people who have a strong preference for simplicity can get turned away.
But the advantages of this kind of protocol, I think, are very promising and very worth kind of going through all that and talking about all of the reasons why we have all these design decisions and, you know, kind of showing the benefit from the complexity.
Yeah, and to be quite honest, I think proof of work is also only reasonably easy to explain from a high level. Once you start getting into the real nitty-gritty, the game theory and mining pools and collusion, it gets extremely complicated as well. So I think you sort of have the same problem there. Maybe it's even more complicated with something like Casper, but it's not as if Bitcoin is trivially simple either.
Yeah, that's actually true. Bitcoin is very easy to specify, very hard to analyze.
I'd say that Casper is kind of different.
It's easy to specify, but the analysis is actually quite a lot easier when it comes to analysis of different types of adversarial strategies between validators.
And that's something that I'm sure we'll get into through the course of this podcast.
Yeah, so let's get started with that.
First of all, I mean, Casper, the name comes from Ghost, right?
What was Ghost?
And why, how is Casper related to that?
So GHOST is an algorithm for proof-of-work consensus, called Greedy Heaviest Observed Subtree, that kind of stretched the definition of blockchain into this block-tree thing, in order to provide much lower-latency transaction confirmation with the same eventual-consistency guarantees.
And basically, intuitively, the way it works is that instead of orphan blocks being lost, never contributing to the security of the network and never rewarding the miner who mined them with bitcoins, orphan blocks can be included as uncles. They then contribute to the score of the fork, and the miners are also rewarded for them.
So that allows you to do much lower block times
without compromising on security,
which is great for user experience.
And for Ethereum as an application platform,
having low latency is especially important.
So Casper is the friendly ghost. It's an adaptation of GHOST to proof of stake, and that's why it's Casper.
It's basically designed to provide low-latency blocks and to use orphan information as part of the reward and incentive mechanism.
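The uncle-scoring idea can be sketched in a few lines. The block representation is a hypothetical simplification; the point is only that orphaned work still counts toward a fork's score:

```python
# Sketch of GHOST-style scoring: orphaned ("uncle") blocks referenced by
# a chain still add to that fork's score, instead of being wasted work.

def chain_score(chain):
    """Score = blocks in the chain + uncles each block references."""
    return sum(1 + len(block["uncles"]) for block in chain)

# Fork A is shorter, but its second block references two uncles...
fork_a = [{"uncles": []}, {"uncles": ["u1", "u2"]}]
# ...fork B is longer, but references none.
fork_b = [{"uncles": []}, {"uncles": []}, {"uncles": []}]

best = max([fork_a, fork_b], key=chain_score)
# Under GHOST scoring, fork A (score 4) beats fork B (score 3), whereas a
# plain longest-chain rule would have picked fork B.
```

Counting uncles is what lets block times shrink safely: stale blocks caused by latency no longer weaken the honest fork's score.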
Let's take a short break to talk about hide.me.
Look, when you're choosing a VPN provider, you want to make sure that your privacy is protected. You know, if a government agency tries to force the VPN provider to hand over some of your traffic or browsing information, will they be able to do that? And is your payment information attached to the account? These are all things that you want to consider when choosing a VPN provider.
With hide.me, all that's taken care of. For starters, they're based in Malaysia, and Malaysian laws don't require them to keep any logs. In fact, hide.me has no logs of your traffic or browsing history. So even if a government agency was trying to force them to hand over some information, they would be straight out of luck, because hide.me has nothing to give them.
In addition to that, they use a third-party payment provider, which doesn't give them any of your payment information, so they have no way to link an account to, like, a credit card or a PayPal account. So even if you pay with PayPal or a credit card, there's no way for hide.me to know which account paid for what. And of course, if you're paying with Bitcoin, then you're completely anonymous.
So what we suggest is, if you're creating an account with hide.me and you want that extra level of privacy, just make a fake Gmail address and use that to sign in. That way, you're completely anonymous.
You can give hide.me a try with their free plan. Their free plan includes two gigabytes of data at unthrottled bandwidth. You can use any of their free exit nodes, which are in Amsterdam, Singapore, and Montreal. And you can sign up for that at hide.me/epicenter.
Now, if you use our URL and you decide to go premium down the line, it's going to get you 35% off. And the premium plan gives you a lot. It gives you unlimited data; you can use as much as you want. You can connect up to five devices, so your whole household fits on the plan. And you can use any of their exit nodes all over the world, and they've got like 30 of them. And of course, you can pay with Bitcoin.
So give it a try. We would like to thank hide.me for their support of Epicenter Bitcoin.
One of the things that I understand least well about Casper is that there's a strange thing going on where validators are betting on different chains, and then they lose money or gain money depending on whether their bet is correct, and then somehow this is supposed to lead everybody to bet on the same thing and provide that sort of confirmation. You know, with Bitcoin the majority mines that chain... well, I guess the analogy doesn't quite carry over. But can you explain: how does that betting work, and why did you choose it?
Sure. So basically the idea with the betting strategy is that what we're doing
is we're incentivizing them to come to consensus. And the way that we're doing that is by having
them place some of their security deposit at stake in a process, an iterated process through which
they come to agree on which block at a particular height will have its transactions executed.
So they bet on blocks, and they're betting on which blocks other validators will be betting on.
So the way they make the most return is if they bet very quickly with very high probability
on the blocks that everyone else eventually bets with very high probability.
So the incentive is to bet in the way that other people will bet in the future, and if you don't bet correctly, then you're going to make less returns.
And so everyone's incentive is to quickly converge on the same block at every height. And then once you have consensus on the same blocks at every height, we can have consensus on the state of every application.
So how does that work? Can each validator propose blocks? Or are blocks proposed to some place, and then the validators choose which one to bet on, with what amounts? Or can anybody propose blocks?
Yeah, so strictly speaking, anyone can propose blocks at any height,
and then the validators will just bet on the different blocks that are available at heights
that they haven't already bet and converged on.
But it is worth noting that Casper suggests an order in which they should propose their blocks, and that's the order in which they make the most return, if they actually end up producing their blocks.
If they deviate from that order, they end up making less returns.
And the reason for this is that we don't want them to be able to stop certain blocks from winning because, you know, maybe they don't like a transaction that's in one of those blocks, right? They could engage in censorship by never letting any block produced by some coalition, one which isn't censoring, ever win.
And so while anyone can produce a block at any height, and people can bet at any height, their incentive is to do it in an orderly manner in order to increase their profitability.
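The convergence dynamic, betting toward what you expect others to bet, can be shown with a toy model. The averaging update rule below is an illustrative assumption of mine, not Casper's actual betting strategy:

```python
# Toy model of consensus-by-betting: each validator moves its probability
# toward the average of everyone's current bets, so bets converge on the
# block the group already leans toward (the update rule is illustrative).

def update_bet(my_bets, peer_bets_list):
    """Move each block's probability halfway toward the peer average."""
    new = {}
    for block in my_bets:
        peer_avg = sum(p[block] for p in peer_bets_list) / len(peer_bets_list)
        new[block] = 0.5 * my_bets[block] + 0.5 * peer_avg
    return new

# Three validators, two candidate blocks at one height; B starts favoured.
bets = [{"A": 0.4, "B": 0.6}, {"A": 0.3, "B": 0.7}, {"A": 0.2, "B": 0.8}]

for _ in range(20):  # rounds of observing others' bets and re-betting
    bets = [update_bet(b, bets) for b in bets]

# All validators end up betting (nearly) identically on block B.
spread = max(b["B"] for b in bets) - min(b["B"] for b in bets)
```

Each round halves every validator's deviation from the group average, so disagreement decays geometrically; in the real protocol the pull comes from payoffs rather than an averaging rule, but the fixed point is the same: everyone betting alike.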
Okay. And then you also have a punishment, right? So if you do bad bets, you get punished. Is that correct?
Yeah. So, I mean, there's a couple of things, right? One of them is, if you bet incorrectly, you lose, or at least don't gain as much, and that's a punishment.
Another thing is that we have this concept of finality, right? Where if a threshold of validators bets with extremely high probability on the same block, say, like, 99.99% on the same block, then that block will be finalized.
And if any of those validators bets for another block at that height, or bets with a non-negligible probability after that for any block at that height, or even proposes another block at that height, then they'll lose their entire security deposit.
Okay. So first of all, it kind of makes sense to me why they would converge, but do they have to? Could it happen also that they don't converge?
why and mine would converge but do they have to could it happen also that they don't converge
yeah i mean they don't they don't like have to converge they're just incentivized to and if they don't
then they are going to be operating at very likely a loss.
It always depends on how many transaction fees they're losing.
Sorry, they're making.
But if this is like their security deposit and because they don't converse,
they only get like this much back,
then the transaction fees need to add up to like a very large amount.
And the transaction fees really play the role of paying interest on the security deposits.
And so very likely if you lose like, you know,
a very large percentage of security deposit due to non-convergence,
you won't be able to make up the rest of it in transaction fees.
That said, an adversary could potentially bribe validators to incentivize them not to converge, in which case it might be rational for them not to converge.
But kind of the goal of the protocol is to make it expensive for an adversary
to undermine the properties of the consensus.
So the other thing I read, that you wrote somewhere, was that with proof of work the level of security sort of increases linearly. So let's say a transaction that is two blocks deep, two confirmations, is roughly half as secure as four confirmations. Is that roughly correct?
So it's important when talking about
security to discuss whether you mean economic security or Byzantine fault tolerance security. So
the security of confirmations increases exponentially if you're talking about fault tolerance. I.e., if you assume that more than 50% of the nodes are correct, then the probability of the Byzantine nodes creating a longer fork declines exponentially as you get more confirmations.
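The exponential fault-tolerance claim can be illustrated with the attacker-catch-up calculation from the Bitcoin whitepaper; this is a sketch using the simple gambler's-ruin bound, not anything from Casper itself:

```python
# Probability that an attacker with hashpower fraction q ever overtakes a
# chain that is z confirmations ahead (Nakamoto's gambler's-ruin bound):
# P = (q / p) ** z for q < p, where p = 1 - q is the honest fraction.

def catch_up_probability(q, z):
    p = 1.0 - q
    if q >= p:
        return 1.0  # a majority attacker eventually wins
    return (q / p) ** z

# probability of a successful revert falls exponentially in z
for z in (1, 2, 6):
    print(z, catch_up_probability(0.3, z))
```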
But the economic security is only to do with the cost
that it took to create those blocks.
And so the cost of any block is, like, the same, right? I mean, at most 25 bitcoins. And then, like, you know, you get 25 bitcoin for the first block, 50 once you have two, 75 once you have three, you know, 100 once you have four. So it's in the economic sense of security that it increases linearly.
Now, in this kind of betting approach, validators have the ability to expose only a little bit to loss at first, if they're incorrect, by not betting with very high probabilities, but then once they see that everyone is betting on the same block, to quickly bet all of their deposit.
So you can have, for a good amount of time, super-linear growth in the amount of economic clout behind a block being finalized.
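The contrast can be sketched numerically: in proof of work the cumulative cost behind z blocks grows linearly, one block reward per block, while gradually escalating bets let staked exposure ramp toward full deposits. The ramp-up schedule below is purely illustrative:

```python
# Purely illustrative comparison of "economic weight" behind a block.
# PoW: each block costs roughly one block reward, so weight grows linearly.
# Betting: validators start with small exposure and quickly ramp up to
# their full deposits once convergence is visible (schedule is made up).

block_reward = 25.0
pow_weight = [block_reward * z for z in range(1, 6)]    # 25, 50, 75, 100, 125

total_deposits = 1000.0
bet_fractions = [0.01, 0.05, 0.25, 0.80, 1.00]          # hypothetical ramp-up
betting_weight = [total_deposits * f for f in bet_fractions]

print(pow_weight)      # linear growth
print(betting_weight)  # super-linear ramp toward full deposits
```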
Okay.
And that would also be the incentive, for a validator to bet in the first place, because if
they don't, then they lose some or they don't make back as much of their security deposit.
Yeah, that's right.
Okay.
And I suppose that will sort of depend on maybe the risk tolerance, or how much knowledge a validator has: whether they say, we're going to do a high-probability bet on this one block, or, we're going to have sort of a wide distribution of bets across a big range of blocks because we're not as sure. How would the strategies vary there?
Yeah, so that's a great question. I mean, surely risk preference is one thing that might vary between them. Another one is that, you know, Byzantine validators might behave in a way where they're just trying to mislead other people's bets. But generally, you have an incentive to go earlier, but you're taking more risk if you go earlier. And so basically, based on their risk preferences, they'll have more or less aggressive strategies. But in any case, their incentive is to converge. At least their in-protocol incentive is to converge.
So what's that actually going to look like? I mean, let's
say now I'm going to be a validator on Casper. Where do I get this software from that actually
makes these bets? I mean, you know, just like a miner has like software that kind of chooses where
to mine the next block, namely on the head of the fork, your validator software, wherever you get it from, is going to have a betting strategy.
But this is going to be way more complicated.
Yeah, sure. Because one of the things that came up for me when I was reading about that and thinking about it was: how is that going to actually turn out? Right, so presumably you're going to be able to do a lot of analysis if you're sophisticated. You know, basically, sort of like a hedge fund today tries to understand the stock market, and gets physicists and data analysts that do, you know, models and high-frequency trading and stuff, and that way often they can reliably do better. And it seems here you could end up with something similar, right? So I could hire 10 people, become a big validator, and really try to understand, you know, when can I place the optimal bets. And presumably I'd be able to do way better than someone who just uses some primitive betting strategy from a little open-source GitHub repo.
Yeah, I don't think that that's at all clear at this point.
So firstly, like, this kind of betting game is relatively simple, right?
As a function of what bets you've seen, it's pretty easy to see what block has the highest probability bets from the most validators.
And you can just kind of...
So there's not that much that you can really learn about the state of the consensus by having more sophisticated algorithms. What may help is having a more reliable, faster internet connection to the rest of the validators. But what we're hoping to do is to show that if people pursue more profitable strategies, then we don't lose any of the consensus properties. In fact, the consensus properties just get stronger; you know, finality will happen faster in that case.
But you also have a centralization problem, right?
Not necessarily.
So let's get on to the centralization issue actually.
So I've mentioned earlier that validators don't have an incentive to exclude blocks from other validators.
Now what that means is that even if some validator is kind of slow and lagging, everyone still has an incentive to include all of their bets and include all of their blocks and have their blocks win, because everyone is punished if they aren't included, or if their blocks don't win.
And so actually even if some nodes have performance advantages,
everyone has to respect every other node if they want to increase their profitability.
And so, you know, centralization is about power dynamics much more than it is about whether some node has lower costs or higher returns than another node. The important thing is that the people who have an advantage can't use it at the expense of everyone else, and that they have to get along with everyone else, even if those people don't have an advantage.
But, you know, I actually think that this betting strategy is relatively simple and that improvements will not be that dramatic. And if they are, then everyone will be able to see how these people have bet, and they'll be able to notice that, hey, there's a way we can change our programs to do that more efficiently. So I'm not as concerned about this, but I mean, I definitely hear you. And it is something that could potentially lead to some validators having higher returns than others. But in terms of the impact of that on the consensus properties and on the decentralization of the thing, I think that, you know, it'll actually only improve consensus, and that decentralization won't be affected.
There was an interesting post by Daniel Larimer, who's the BitShares guy, about Casper. And one of the criticisms he made of Casper there, I thought, was very interesting as well. It was basically the point that, because you mentioned, essentially transaction fees work a little bit like an interest rate. So let's say I put up $500,000 in bond and I run a node now, and
that node costs me $50,000 a year. You know that $50,000 is independent of the amount of my bond.
So if somebody else also runs a node, and it also costs $50,000, but they only have a $70,000 security deposit, you know, our profitability will be hugely different. And there's a strong incentive to have, you know, basically centralization, right? To have as much of a security deposit as possible relative to the number of nodes that you run. Do you see that as a problem, that you have sort of an economic incentive for centralization there?
Yeah. So it's again,
not centralization in the sense that, like, the majority gets to have special rights that they get to, like, disrespect minorities with.
But certainly one of the things that needs to be the case for this consensus protocol
to be like kind of fair between people who have different bonds is that the cost of
operating a node should be much less than $50,000, really very small. I mean, you know, even to get, like, a web server for a year costs much less than $50,000.
It depends obviously on the web server.
I think that we'll be looking at much, much lower cost of operation than that.
So, I mean, like, today you can run an Ethereum node on your laptop, and if we're running at the same transactions per second, I mean, you could run it on a Raspberry Pi. And if we're running at the same rate as we are now, with Casper, I mean, you'll still be able to do it on your laptop. And then we're talking about a much more negligible kind of cost. The thing is, though, that you would want to monitor it and be kind of diligent about whether or not you're compromised.
That's one of the main differences between proof of work and proof of stake: if your node is compromised, you could potentially suffer losses in a proof-of-stake-type protocol.
Right, because you have to have basically the private keys that secure your bond,
they have to be online, right?
So you have to also invest in security there.
Yeah, although much more than not being compromised, you need to be sure that your machine cannot publish Byzantine behavior.
So, you know, if an adversary gets at your private key, well, that's pretty bad.
They can try to unbond you.
But as long as you still have access to that private key, you can potentially revoke and change that decision.
But what you really don't want is for an adversary to get into your machine and cause you to double sign.
Or to try to un-finalize a finalized block, in order to have you lose your security deposit.
Right.
So would that be optimal: let's say, as an adversary, I take over your machine, and then I double-spend on the other chain, and then use that transaction immediately to prove that you double-spent, or double-signed, and then I can basically steal your bond.
I mean, no, you don't steal it. The bond gets destroyed, and you make, like, a small royalty for discovering the malfeasance.
Okay, but then couldn't I just transfer
your bond to my account?
Not exactly.
I mean, it's not necessarily that simple.
So there's a large unbonding time, basically because you can't just withdraw your bond at any time. You need to kind of retire from validating and keep your bond there for a time, just in case you were malicious, in case you double-signed right before you withdrew it and the evidence just hasn't come to bear yet.
So there's an unbonding time.
And during that unbonding time, if you still have control of your private key, you can revise and say, no, you know, it's not going to go to that attacker. Or maybe you could just burn it. There's lots of things you could do to prevent the attacker from actually getting that money.
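A sketch of the unbonding-delay logic being described; the structure, method names, and window length here are all hypothetical, not the Casper spec:

```python
# Hypothetical sketch: a withdrawal only completes after an unbonding
# period, during which evidence of double-signing can still destroy the
# bond, so a thief can't cash out before evidence comes to bear.

UNBONDING_BLOCKS = 10_000  # evidence window (made-up length)

class Validator:
    def __init__(self, bond):
        self.bond = bond
        self.retired_at = None
        self.slashed = False

    def retire(self, height):
        self.retired_at = height  # starts the unbonding clock

    def submit_evidence_of_double_sign(self, height):
        # evidence arriving before the withdrawal completes burns the bond
        if not self.withdrawable(height):
            self.bond = 0.0
            self.slashed = True

    def withdrawable(self, height):
        return (self.retired_at is not None
                and height >= self.retired_at + UNBONDING_BLOCKS)

v = Validator(bond=500_000.0)
v.retire(height=100)
v.submit_evidence_of_double_sign(height=5_000)  # inside the window
print(v.bond, v.slashed)  # 0.0 True: the bond is destroyed, not stolen
```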
Okay, interesting.
And what about, because you mentioned it would be cheap to run a node. And I guess that's sort of where the question turns, right: how much extra infrastructure do you need for security? How important is it to get, you know, a super low-latency setup?
Do you need DDoS protection?
Because potentially then there's a lot of additional infrastructure
maybe you need to put in in order to run at least a validator well and securely.
So I guess we don't know yet where that's going to go.
Yeah, we certainly don't know all the details.
But one thing to note, while you bring up DDoS, is that the other validators don't have any incentive at all to DDoS you. This is very much unlike proof of work, where miners DDoS each other on a regular basis. This is something that's known to happen, because it's in their immediate interest if some other mining pool has their block orphaned. Whereas here, the people who would be DDoSing you would presumably be outside adversaries, or maybe people who are being bribed by outside adversaries.
So what's the kind of environment you see when you think ahead, you know: Casper is implemented, Ethereum is running on Casper. Do you see a thousand Casper validators, or a hundred, or, I presume it's not 10 that you'd like to see, or 50,000?
And do you think those would be run by hobbyists at home
or will those be organizations running them?
I guess like in Bitcoin mining, right?
You have professional companies who invest a lot of money and research.
Where do you think that's going to go?
So that's a good question.
I mean, to give you an exact number,
we're going to need to see the precise overhead numbers.
We need to know exactly how much overhead it would cost
for a given number of nodes at a given level of latency.
So if we knew what latency we wanted
and how much overhead it would be for a given number of nodes,
then we could say kind of, okay, we can have like 500 nodes,
or we can have 1,000 nodes, we can have 10,000 nodes.
So basically because all the validators are engaged in betting
on all the blocks,
the overhead, the network overhead increases with the number of validators.
And so certainly it won't be 50,000.
Probably won't be 1,000, but I'm sure we could do at least 100.
So I'd say, you know, somewhere between 100 and 1,000.
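Since every validator bets on every block, per-block message overhead grows roughly with the square of the validator count, which is why the ceiling is in the hundreds rather than the tens of thousands. A toy estimate, with all constants made up:

```python
# Toy overhead model: every validator broadcasts a bet on every block to
# every other validator, so per-block message count grows ~ n^2.
# The bet size is an assumed constant for illustration.

BET_SIZE_BYTES = 200  # assumed size of one signed bet

def per_block_traffic_mb(n_validators):
    messages = n_validators * (n_validators - 1)  # each bet sent to each peer
    return messages * BET_SIZE_BYTES / 1e6

# traffic per block grows quadratically with the validator count
for n in (100, 1_000, 50_000):
    print(n, per_block_traffic_mb(n))
```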
And as to the level of sophistication of the validators, that's also an interesting question.
I think that you will certainly need to be more sophisticated than a hobbyist Bitcoin miner, which basically just requires you to get a box, plug it into the wall, and connect it to the internet. And so I don't think it'll be like that. You know, if you want to support the Ethereum network, you should probably, like, build a dapp, or serve files on Swarm, or just sync up and audit and make sure the validators aren't producing invalid blocks or betting maliciously. So I think, you know, it certainly requires a little bit more skill than it requires to mine Bitcoins.
But we have to always remember that mining or validation is a service that you're providing to clients.
It's not meant to be that we're advertising mining or validating to people as something that they should do as clients or as users.
The validators are actually, like, the largest source of risk to the protocol.
And the protocol kind of treats them as such.
It won't necessarily be a walk in the park for the validators all the time.
But that's kind of the nature of the thing.
If we want economic security, then these validators really need to be exposed to loss.
And there will probably be a time when they experience loss.
You know, not that miners haven't experienced loss before.
Yeah, yeah.
And what percentage of validators have to be honest for Casper to work? And are there different thresholds where, you know, with 30% dishonest validators this can happen, with 50% that can happen?
Yeah, sure.
So that's a great question.
Basically, we kind of need to talk about it on a kind of case-by-case basis for different types of protocol guarantees.
So first one that we classically talk about in these contexts is reverting history.
So you can never revert a finalized block no matter how many validators you have.
Because clients will just not choose a fork that doesn't include that finalized block.
But you can revert non-finalized blocks if you just have more validators than were used to create those non-finalized blocks. So for example, if 48% of the validators are creating unfinalized blocks, then 49, 50, 51% of the validators could revert those blocks.
And it's worth noting that clients will be made aware of whether or not the blocks that they're receiving as confirmations are being finalized. And so people won't actually be caught off guard with respect to their transaction being reverted, because, you know, it'll be very clear that their block isn't finalized. So, you know, in Bitcoin land we have this rule: okay, don't ship your goods until you get six confirmations. In Casper it will be, you know, don't ship until the block is finalized. If you're worried about double-spends, then you need to be careful that the blocks are finalized before you consider yourself safe.
And once it's finalized,
there's no amount of Byzantine validators
that can revert that.
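The client-side rule being described can be sketched as a fork-choice filter; the structure is hypothetical, and the length-based tiebreak stands in for whatever scoring a real client would use:

```python
# Hypothetical client-side fork choice: a fork is only admissible if it
# contains every block the client has seen finalized; reverting a
# finalized block is impossible no matter how many validators defect.

def choose_fork(forks, finalized):
    """forks: list of block-hash lists; finalized: set of finalized hashes."""
    admissible = [f for f in forks if finalized.issubset(set(f))]
    # among admissible forks, a real client would pick the one with the
    # most validator weight behind it; length is a stand-in here
    return max(admissible, key=len) if admissible else None

forks = [["g", "a", "b", "c"], ["g", "a", "x", "y", "z"]]
print(choose_fork(forks, finalized={"b"}))  # the fork containing "b" wins
```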
And then another important protocol guarantee is censorship resistance. And as I mentioned a couple of times, the protocol punishes any indication of censorship by directly disincentivizing the validators who are participating.
So if like 20% of the validators are being censored,
then the 80% will be losing money very likely,
although it always depends on the volume of transaction fees.
And then also notably, if those 20% just go offline,
then the protocol will have to assume that they're being censored,
and those 80% would be losing money.
So in that scenario, what about those 20%?
Are they also losing money?
Oh, yeah, they're losing money too, of course.
Do they lose more money than the 80%?
This is a great question.
It seems like the answer will be yes for a small number of nodes,
but once a larger and larger number of nodes are offline or potentially censored,
then they will lose less than the majority will.
Right, because otherwise, that could be like: if I, you know, have a majority, I might say, okay, I'm censoring a minority. I mean, I'm giving up some money, but they're giving up even more money, and, you know, I'm going to get them to leave the network, and then I get a bigger share of the transaction fees and make more money later.
Would something like that work?
Or are there other ways that a majority can basically cut out a minority to sort of increase control and profitability of the network?
So the only way they can do it is to censor, and the amount of time that they need to wait before those nodes will be pushed offline is something that the protocol defines. And basically, the way that we'll define that is to make sure that the present value of the cost of censorship is always greater than the present value of the increased returns from censorship that you get once those nodes are taken offline. So basically, you can look at the cost over time of censoring, and then you can look at the return over time from having those people gone after, you know, they were dropped out, and you can do this kind of present-value calculation to make sure that this is greater than that. So we do think about, you know, whether it's going to be worth it for them to do the censorship.
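The present-value comparison might be sketched like this; the discount rate, cash flows, and horizon are all hypothetical placeholders for whatever the protocol parameters would actually imply:

```python
# Illustrative present-value check: censorship is deterred when the
# discounted cost of censoring (protocol penalties until the censored
# nodes are forced offline) exceeds the discounted extra fees afterwards.

def present_value(cashflows, rate):
    return sum(c / (1 + rate) ** t for t, c in enumerate(cashflows, start=1))

rate = 0.01                       # per-period discount rate (assumed)
penalty_per_period = 120.0        # loss while censoring (assumed)
censor_periods = 20               # protocol-defined wait before nodes drop
extra_fees_per_period = 10.0      # fee raise after the minority is gone (assumed)
horizon = 200

cost = present_value([penalty_per_period] * censor_periods, rate)
gain = present_value([0.0] * censor_periods +
                     [extra_fees_per_period] * horizon, rate)
print(cost > gain)  # parameters would be chosen so this holds
```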
But also, very notably, there is no other known public consensus protocol that has this property, that it is unprofitable to censor. For example, if 80% of the Bitcoin miners decide to ignore 20%, they get a 25% raise immediately in terms of transaction fees, and within two weeks in terms of increased block rewards.
So, like, if you think about, you know, maybe two weeks of slightly slower blocks, and that's only if you choose not to lie about the timestamps, that's not very long, and it's, like, totally worth it. A 25% raise for a miner is a very significant raise. You know, the same thing is true for all known proof-of-stake systems. And so Casper is really unique in the regard that it is costly at all for them to censor.
And, you know, censorship resistance is, like, one of the most important claimed properties of blockchains. But so far, it's in any majority validator coalition's interest to censor in, like, every known public consensus protocol.
Okay.
Well, let's talk a little bit about Tendermint, because we're actually going to do an episode about Tendermint very soon. And I've been thinking a little bit about Tendermint and learning a bit more about it, also because at Eris we work with Tendermint. Tendermint is basically a different proof-of-stake protocol, and the way Tendermint works is that a block is proposed, and basically the validators sort of vote on it. And once a majority of validators have approved it, you know, that's the block. Which seems like a very clean and simple mechanism. Why was it necessary here to have this betting, instead of something like just people voting on blocks and then that being the result?
Sure.
So, I mean, the kind of intuitive, simple way to think about it is: in Tendermint, you kind of pass around votes before you create the block. In Casper, you create the block and then pass around votes. The reason we call them bets is because you're, like, literally putting economic stake on whether or not the block that you're supporting really does become finalized.
And basically the reason why we kind of do the betting after rather than before the block is created
is because we favor availability instead of consistency in the event of network partitions.
We don't want to require permission from the large set of validators in order to create a block.
And the reason for that is that it, well, there's kind of two reasons, but the main reason for that
is that it affects the coalition economics, right?
If you require a Byzantine quorum to create a block, then there is a minority coalition that can prevent a block from forming.
And they could use that to, say, choose which blocks form.
They can use that to extort the network.
And by defining the number of nodes required for a block to be created, you necessarily affect the economics of the protocol.
And the other reason is that we can provide lower-latency blocks, because a validator doesn't need to have this back-and-forth with all the other validators before they provide a client with a block.
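The two orderings being contrasted can be put schematically; this is a toy sketch of the availability-versus-consistency trade-off, not either protocol's real logic:

```python
# Toy contrast of the two orderings. In a Tendermint-style flow a block
# needs a quorum of votes before clients see it, so a >1/3 minority can
# stall block production. In a Casper-style flow the block is produced
# first and bets accumulate afterwards. Entirely schematic.

QUORUM = 2 / 3

def tendermint_style_produce(votes_for, total):
    # the block only exists once a Byzantine quorum has signed off
    return votes_for / total >= QUORUM

def casper_style_produce():
    # anyone can produce a block immediately; finality is bet on later
    return True

print(tendermint_style_produce(votes_for=60, total=100))  # stalled: False
print(casper_style_produce())                             # block exists: True
```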
Right, but then you have a block that may not end up being the real block, right,
because it's not finalized and...
Of course, yeah, but clients are made aware of the state of the finality of their blocks.
But do you think, in terms of a sort of UX perspective, do you think that's a benefit?
Because to me, it almost seems like it's actually a downside.
Okay, so, I mean, firstly, the UX isn't really immediately the user's experience, right?
It's the application developer who needs to deal with this logic,
and then deal with how they show their user,
the corresponding truth of the matter, right?
Right.
So for certain types of transactions, there's not going to be any chance that another transaction comes in and has priority over yours.
So even if your block doesn't win at that height,
your transaction will be included and everything will be okay.
So for some applications, it won't be a big deal at all.
for other applications, notably economic applications where you really need to know that
it'll never be reverted, your application has the responsibility of telling you, you know,
okay, yes, we got the message and here's what we think will happen, and then update that
story if things end up changing.
Now, another thing to note is that because we incentivize the validators to produce blocks in a particular order, if the validators are successful in seeking profit in-protocol, you'll have a very high probability of your block ending up being finalized, even well before it is finalized. But that, of course, depends on the validators' ability to actually earn the incentives in the consensus protocol. Presumably, if they're primarily incentivized by that, they will.
If people are also bribing them to affect maybe people's user experience a negative way,
then maybe they won't.
But surely, there are many kinds of transactions.
where this won't be a problem.
And for those transactions or those applications,
it'll be like a uniform positive.
Now, the application developer
might not appreciate having to deal
with the question of whether a transaction
has been finalized.
But I think that they would rather have that
and low latency than sacrifice latency
for the simplicity of their environment.
Okay. So you think that with Tendermint, maybe, in a sort of larger network like you hope to have here, with a hundred distributed nodes, maybe it would take, I don't know how long, a few seconds, well, it depends, and maybe if there's some sort of conflict and stuff, maybe it would take longer, let's say 10 seconds or 15 seconds. So you think it would be better to have five seconds? Or how fast is Casper actually going to be?
Well, so Vitalik's target is four seconds. Mine is, like, one or less.
I'm really hoping to go and like have the latency be as low as tolerable.
Right.
And the reason I say tolerable is because lower latency always increases overhead.
Right, right, right.
So you say, okay, it's better to have, like, you know, one second where you have some probability of it being secure than, let's say, three seconds where you don't know anything, and then you have a block that really is final and that you know you can completely rely on.
Yeah, I mean, and remember that we will get finality eventually anyways. The question of how fast finality will occur is still kind of open, because it depends on how aggressive validators are feeling, and how aggressive they're willing and able to be in finalizing blocks. If the network is really predictable and they can all behave in a really orderly way all the time, you could conceivably finalize blocks as fast, or almost as fast, as Tendermint can, and still have low latency. Likely it'll be that Tendermint finalizes blocks faster than Casper does, because of the gradual way that the validators expose their economic stake to the finality of that block. But, you know, I think that the purely economic nature of the consensus protocol kind of necessitates that kind of thing.
And the low-latency thing is basically a plus from the economic point of view. Although it might be the case that validators just get more transaction fees for providing lower-latency blocks, and so we can maybe weasel it and call low latency an economic benefit too.
Wait, can you say that again? I didn't totally understand that one.
So, I mean, presumably, and this is kind of an indirect argument: there will be more transaction fees if blocks are low latency, because the application experience will be better, and so people will be using more dapps. And so in a way, low latency is an economic aim of the validators.
Yeah, no, that makes sense, right?
But then, of course, again, the lower you go with latency, the bigger the advantage is to, you know, run validators in high-performance data centers, etc. And, you know, the more pressure you create there, the more you increase the overhead and the cost of running a validator.
Yeah, that's right. And there is an extent to which that may cause validators who, you know, can't keep up with that to go offline. But insofar as there are validators who, say, don't have great internet connections and super powerful servers while everyone else does,
they will still be included in the consensus because of the cooperative kind of economics involved.
So if this change happens, it will have to happen gradually because it'll be costly for the validators to leave everyone behind.
Okay, great. So let's talk about the last topic, which is light clients. What's that going to look like with Casper?
Sure. So Casper is super light-client friendly, primarily because of something we mentioned at the start of the conversation, which is the fact that we use security deposits to authenticate the state of the consensus. So clients having the list of the currently bonded validators will be able to authenticate economic proofs very easily. They won't need to download header chains and compare work. They can just download some logs from the most recent state and, you know, bounce them off other validators to get them signed, or to make sure that they're not incorrect, and then they quickly have very high economic assurance that their log is correct, because if it wasn't, then all these validators would lose their security deposits. So security-deposit-based proof of stake in general is much more light-client friendly than proof of work, because economic proofs are much more concise than work proofs. Economic proofs are just like, hey, you know, if two plus two isn't four, then I lose a hundred thousand dollars. Work proofs are like, hey, two plus two plus two plus two is four, and I've got, you know, six blocks on top of it.
And then another thing that's notable on this topic is that I found a way to do sub-network-latency block times in a light-client-friendly way. So one problem with sub-network-latency stuff for light clients is that they have to run the fork choice rule on a very frequent basis, which is not very light-client friendly. But actually, what we can do is calculate these reorganizations inside the state transition function of a block itself, which means that a validator, when they publish a block, will include all the new information that they learned, and then which transactions they need to re-execute, which led to this state, which means that your new transaction receipts are there. And so the client doesn't have to actually do all of that work of reorganizing and doing this kind of crazy fork/tree choice rule stuff, and they can still, you know, have this low-latency experience. So, you know, light-client friendliness is one of the top priorities in Casper's design.
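The light-client check being described might be sketched like this; the stake threshold and data shapes are assumptions, and signature verification is stubbed out, so this only illustrates the economic-assurance idea:

```python
# Hypothetical sketch of the light-client check described above: given the
# current bonded validator set, a claim about recent state is accepted once
# enough bonded stake has signed it, since lying would cost those deposits.
# A real client would verify the signatures; here signers are taken as given.

STAKE_THRESHOLD = 2 / 3  # assumed fraction of deposits required

def verify_economic_proof(claim_signers, bonded):
    """claim_signers: validators who signed the claimed state root;
    bonded: {validator: deposit}."""
    signed = sum(bonded[v] for v in claim_signers if v in bonded)
    return signed / sum(bonded.values()) >= STAKE_THRESHOLD

bonded = {"a": 50.0, "b": 30.0, "c": 20.0}
print(verify_economic_proof({"a", "b"}, bonded))  # 80% of stake signed: True
```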
Incidentally, I guess I should just mention the other design principles. So light-client friendliness is one, low latency is one, and economic efficiency is the main other one. And that means that, as much as possible, we want to create a blockchain that is cheap for everyone except for attackers during an attack. We want to have, like, full coverage of Byzantine faults with disincentives, but to not have expenses for people, where possible, outside of Byzantine faults.
Yeah, I know. I mean, I think that's one of the big attractive things about proof of stake. Whereas with something like Bitcoin and proof of work, that's going to be a big problem in the future, right? With the block reward dropping, you know, there's going to be less and less investment in the hardware, and that means less security, right? Whereas here you can sort of decouple it. I mean, okay, you still have the security dependent on the amount that's bonded, but, you know, it's sort of not money wasted, it's just money locked up, right? So essentially the real cost of it, in a way, is only the cost of capital there, plus, of course, the cost of operating the validators and stuff.
That's right. But there's also the cost to the attacker, right?
Right. And the cost to the attacker is much higher, yeah.
So where are we right now with Casper? When is it going to be live?
So that's a good question.
I'd say that all the fundamental research problems
have been solved that we are arguing about specification details,
but we're also working on implementation,
verification, and simulation.
So basically, I'd say that, optimistically, it'll be done in a year; on a worse outlook, it could take two years.
I don't think it'll take more than two years.
It really depends on how much work I and others can do on this.
Yeah, so I guess in that vein, what does that look like in terms of funding? Because, you know, if the foundation is running out of money, is it going to continue to be possible for you to work on this?
And I know also you work with a guy named Greg who's working on actually implementing it.
What's that look like?
How is the work on this going to be funded?
Sure. So, I mean, there's kind of like a broader question, which is how Ethereum broadly will continue to fund and govern core development, even as the foundation kind of becomes a smaller part of the community, which is an interesting question, one that I spend a lot of time thinking about, especially kind of more recently. And I'm hoping that we can have a situation where we govern and fund Ethereum using the Ethereum platform and smart contracts.
As far as myself and Greg and hopefully others who will be getting on board to work on Casper,
I'm applying for grants.
Augur was generous enough to give me a grant just a few weeks ago, even though I hadn't applied for it.
So thanks a lot to them.
And basically, yeah, I've hired Greg.
So, you know, this kind of research and development isn't free.
We are looking and exploring all options for funding.
But, you know, I'm not going to give up.
I'll live with my parents if I have to.
Great.
Well, that's a dedication that's necessary.
Cool.
And so maybe the last question.
So you mentioned to me that you're developing this in Scala.
Now, at the moment, I don't think there's an Ethereum Scala client.
So would it be possible to just take Casper, that consensus part, and integrate it into other clients?
Or would it have to be implemented in each client anew?
Or do you think everybody's going to switch to Scala?
How is that going to work?
So certainly everyone wants to implement it themselves,
and so I think they will.
But Scala is compatible with, like, JavaScript.
So certainly at least we can use the JavaScript implementation
with the Scala code.
As far as the other ones, I'm not sure.
I'm not, I don't have much experience as a developer.
Greg does though, and he could tell you.
And yeah, the reason we're using Scala is
because it's compatible with all the JavaScript
and Java stuff, and also it is strongly typed and functional.
And so we can verify it and simulate it
in a much more formal manner.
Excellent. Well, Vlad, thanks so much for coming on.
It was really interesting diving into this.
I think it's a topic a lot of people are
interested in, and especially if you think of the future of cryptocurrencies, I'm sure
proof of stake is going to be very, very important.
So I'm glad we could dive into this.
Yeah, I mean, it's my pleasure.
Thanks a lot for having me.
You know, I need to do education in any way I can so that people are ready for this kind
of thing and so that hopefully we can inspire more people to work on these problems.
And so, you know, thank you for your part.
No, thanks for coming on for that.
Cheers.
And yeah, thanks so much to you, the listener, for joining us. So we put out episodes of Epicenter Bitcoin every Monday.
You can get them, of course, in every podcast player. You can also get them on YouTube at
YouTube.com slash EpicenterBTC. And if you're a loyal listener, then you know what's
coming now. Basically, we're still doing this sort of bribery competition where if you leave us
an iTunes review and you send us an email at show at epicenterbitcoin.com, then we send you a t-shirt.
And yeah, you can say bad things or negative things or great things, anything is possible.
And yeah, so just do that, and we do appreciate that lots of people have done that so far.
And yeah, of course, you can always tip the show if you want to. So thanks so much for listening,
and we look forward to being back next week
