Epicenter - Learn about Crypto, Blockchain, Ethereum, Bitcoin and Distributed Technologies - Carl Beekhuizen & Trent van Epps: Ethereum Foundation – EIP-4844 & KZG Ceremony
Episode Date: January 12, 2023

The recent advancements of layer 2 scaling solutions, especially zero-knowledge rollups, led to a complete redesign of Ethereum's scalability roadmap. As a result, the initial concept of sharding the execution layer was abandoned and replaced by the idea of data sharding. This proposal, named after its author, is known as Danksharding. EIP-4844 is also referred to as proto-danksharding, as it sets the foundation for data sharding through the introduction of data blobs. We were joined by Carl Beekhuizen & Trenton Van Epps from the Ethereum Foundation to discuss the upcoming EIP-4844 timeline and its KZG ceremony.

Topics covered in this episode:
- Trent's and Carl's backgrounds
- Danksharding and scaling Ethereum
- Data blobs and data availability
- KZG ceremony
- How trusted setups evolved
- KZG commitment requirements
- How trust is ensured
- Combining randomness
- How storing secrets is prevented
- General contribution details
- Special contributions
- Quantum vulnerability
- General EIP-4844 timeline

Episode links:
- Carl Beekhuizen on Twitter
- Trenton Van Epps on Twitter
- Morgan Peck's article

Sponsors: Omni: Access all of Web3 in one easy-to-use wallet! Earn and manage assets at once with Omni's built-in staking, yield vaults, bridges, swaps and NFT support. https://omni.app/

This episode is hosted by Friederike Ernst. Show notes and listening options: epicenter.tv/478
Transcript
Discussion (0)
This is Epicenter, Episode 478, with guests Carl Beekhuizen and Trenton Van Epps.
Welcome to Epicenter, the show which talks about the technologies, projects, and people driving decentralization and the blockchain revolution.
I'm Friederike Ernst, and today I'm speaking with Carl and Trent, who both work for the Ethereum Foundation and coordinate the ceremony for a trusted setup that is needed for EIP-4844.
That's proto-danksharding.
Before I talk with Carl and Trent about the KZG ceremony, let me tell you about our sponsor this week.
Our sponsor is Omni.
It is your new favorite multi-chain mobile wallet.
Omni supports more than 25 protocols, and you can manage all of your assets in one place across all major EVM chains, layer 2s (zkSync and StarkNet coming soon), and non-EVM chains.
But what's really special about Omni is that you can do all the most important things in Web3 directly,
within the wallet itself.
Wanna get yield?
Omni allows you to get the best APYs with zero fees in three taps,
be it liquid staking, lending via Aave,
or yield vaults via Yearn.
Need to exchange USDC on Ethereum to ATOM on Cosmos?
Omni aggregates all major bridges and DEXs
so you can bridge and swap across all supported networks
in one transaction, directly in your wallet.
Love NFTs?
Omni offers the broadest NFT support of any wallet
so you can collect and manage
your favorite NFTs across all chains, all in one place.
Omni truly is the easiest way to use Web 3, and most importantly,
Omni is fully self-custodial,
meaning you never have to trust anyone with your assets other than yourself.
If you want, you can even use Omni's ledger integration,
so all of your funds stay on your hardware wallet.
Join tens of thousands of users on this next-generation wallet by downloading it today.
It's available on iOS or Android at omni.app.
Carl and Trent, it's a pleasure to have you on.
Thank you.
Thank you for having us.
So you both work at the Ethereum Foundation,
but maybe let's talk about yourselves first,
your backgrounds and how you ended up at the foundation.
So my background is actually originally in architecture and design,
which is very different from crypto and this entire ecosystem.
That's what I went to school for and did that for a few years and then found Ethereum in 2016
and started getting deeper and deeper into the ecosystem, engaging with core development,
mostly just adjacent to it, really obsessed with it, worked at a couple different companies
and then ended up at the Ethereum Foundation doing coordination work, having a ton of fun while doing it.
I initially was studying and a friend of mine told me how he was making crazy amounts of money
out of this whole crypto thing, which piqued my interest.
And so I also started looking into it.
And the more I looked into it, the more excited I got about the actual tech
and seeing what's happening under the hood.
And so my role sort of, well, my interest very quickly changed from that side of things
into the actually wanting to know what's happening and getting more involved.
And then at the time, staking on Ethereum was going to be 1,500 ETH,
which I did not personally have.
So I came up with a whole complicated scheme
for splitting it up and staking with friends and whatever.
And it turns out I was solving many of the same problems that need to be solved at the consensus level.
So I ended up transitioning from doing that and working on my own little project to doing the same thing,
but for Ethereum proof of stake and being a researcher working on the protocol ever since.
Super nice. And you brought it down to 32 ETH. So it seems to have worked.
We still have solutions to do it together with friends, right?
So basically it's like, yeah.
Fantastic.
So what you're currently working on is you are working behind the scenes to make danksharding happen.
So more specifically proto-danksharding, also known as shard blob transactions as per EIP-4844.
We had the eponymous Dankrad on the show a while ago,
I think it must have been about two years ago,
to talk about the Ethereum scaling roadmap,
but let's recapitulate, you know,
all of the various flavors of sharding, please.
So basically, what's sharding, what's danksharding,
what's proto-danksharding, and where are we at?
The word sharding comes, I believe,
from the database side of things,
but the idea is splitting up something
that's too big to be handled on one machine across many.
And so in the early Ethereum roadmap, sharding was this idea of taking the work that had to be done
and the amount of data that had to be processed on Ethereum, and splitting it up into multiple,
in this case, chains.
So we'd have multiple concurrent chains running next to each other.
And together they would tell you the sum of what's happening on Ethereum.
But we quickly realized that this has a lot of problems, where
the data becomes sort of siloed in these various shard chains, and making sure the consensus
works amongst all the shard chains and transferring the data.
There were lots of concerns about interoperability between them, and long times to finality,
et cetera, et cetera.
So while it was a technical solution that was going to solve a lot of problems for Ethereum
scaling, it was one that was quite ugly in terms of breaking up the fungibility of
Ethereum into multiple pieces, into what we were calling at the time, quote-unquote, shard chains.
Subsequent to that, the idea of danksharding came about, as you mentioned, named after Dankrad.
The idea here being that instead of having multiple chains running simultaneously, we have one chain
with like crazy amounts of data available to it. But to make processing this feasible for
home stakers and reasonably sized machines, we can split up the data that everyone's responsible
for. So it's not separate chains, it's one chain, but you're not responsible for everything on this
one chain. You're only responsible for a small amount as a validator. And this greatly opened up
the design paradigm and also helped a lot with roll-ups who were trying to figure out mechanisms
for speaking between these shard chains that would be transparent
to the user, but weren't at the time.
So now, by having this one large amount of data
that everyone has access to, it's no longer something
that each individual roll up would have to worry about.
It's now this massive blob, which everyone can see.
In the transition from sharding to danksharding,
we kind of lost the compute on the shards, right?
So basically the shards now exclusively store data.
Yeah. But this is a bit of a design philosophy change, which we've had over the years as well,
which is changing from this idea of having one monolithic structure to separating out the roles,
the important components for consensus here. So initially, part of the role of validating
was going to be watching for all sorts of different types of computation. There were going to be different ways of computing.
But roll-ups simplified this by saying someone else does all the computation,
and then you don't need to worry about that,
and then on the other side, sharding is going to handle the data.
So a simple way of thinking about the proto-danksharding vision of scaling Ethereum,
or the danksharding vision of scaling Ethereum rather,
is: the scaling compute needs,
those are handled by roll-ups, and the
data that these roll-ups are going to need, this is handled by proto-danksharding.
So what's the difference between danksharding and proto-danksharding now?
Well, as usual, it comes down to a bit of evolution here. So danksharding initially is a very
complicated proposal, which is going to be, in essence, very hard to implement. Full
danksharding requires a lot of complicated things to happen
at the networking level, plus more cryptographic things, basically because we require validators
to take on extra roles, which are checking that this data really is available with
cryptographic techniques, there's sampling involved, all sorts of things like this. But looking at this,
this would ultimately take years and years to implement. So in the interest of having a scaling
solution for Ethereum now, we have proto-danksharding, which is a simplified version. It
strips out some of the extra nice-to-haves and limits the amount of data we can scale to,
but still provides dramatic increases in the amount of data that is available to roll-ups,
and is something we can ship in the short term to alleviate data constraints right now.
And we're doing it in a way that's upgradable.
So we can move on to full danksharding in the future, incorporating what we've done already.
This is the basis.
So one of the things that's
being addressed by danksharding is also the state bloat.
So this is somewhat counterintuitive because we just learn that basically there'll be massive blobs of data that will be on the network somehow.
But they're only being kept around for a limited amount of time, right?
Yeah, that's the idea.
I think everyone knows that state is one of the hardest things about running an Ethereum validator.
It requires a lot of space.
And so this was a limitation to how much we could scale.
So the idea behind danksharding is to make someone else responsible.
It's no longer the validator's role to store this data in perpetuity.
So as a validator, you get assigned some data to look after.
You must download it and you make it available for everyone else.
And then the various L2s are going to go look at this data.
download the portions that they need and then after two weeks or so this data gets thrown away by the validators.
So you have a rolling window of two weeks for what's available. And would that be, like, archival nodes who kind of, you know, store all the blob data?
Yes, it now becomes like a super uber archival node as like another level of archiving if this is something you care about.
But the important thing here is that individual validators
don't worry about it, because the current way we handle
state in Ethereum is a bit ridiculous.
You pay a one-time fee to add some storage, and then the blockchain promises
to keep it around forever, which is a little bit insane.
And if we add so much scaling, this just gets more and more insane.
So we now shift the responsibility onto the people
who actually care about the data.
So L2 sequencers are going to worry about this, and they'll obviously want to know the state of everything.
So they're going to store the data they care about locally that's relevant to a given roll-up.
And you as a user, for example, may wish to also, if you're keeping your stuff on L2s, also download that data for yourself, which contains your own transactions if you don't trust your sequencer.
But it's just shifting the responsibility onto who actually cares about the data.
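The pruning model Carl describes, where validators keep blobs only for a rolling window, can be sketched as a toy. The window length, class, and method names here are illustrative assumptions, not the actual client implementation:

```python
from collections import OrderedDict

# Assumed retention window: roughly two weeks of 12-second slots.
RETENTION_SLOTS = (14 * 24 * 3600) // 12  # 100,800 slots

class BlobStore:
    """Toy model of a validator's rolling blob store: blobs older than
    the retention window get pruned, so anyone who cares about the data
    (an L2, a user) must copy it out before then."""

    def __init__(self):
        self.blobs = OrderedDict()  # slot -> list of blob payloads

    def add(self, slot, blob):
        self.blobs.setdefault(slot, []).append(blob)

    def prune(self, current_slot):
        cutoff = current_slot - RETENTION_SLOTS
        for slot in [s for s in self.blobs if s < cutoff]:
            del self.blobs[slot]

store = BlobStore()
store.add(0, b"old rollup batch")
store.add(150_000, b"recent rollup batch")
store.prune(current_slot=150_001)
assert 0 not in store.blobs and 150_000 in store.blobs
```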
Okay. And is there any sort of gatekeeping for who can actually transmit data to the dank shard? So basically, if I deploy a new L2, can I just send all my data there? Or is this something that needs to be approved?
No, it's all very open. As per normal Ethereum things, we try to handle this via economic systems. So you can submit a
transaction; there's a new transaction type.
And this transaction type basically refers to data that comes along in a blob.
And the blob's not a part of the block. It's in what we call a sidecar.
It's like external thing adjacent to the block.
And you can say like, hey, here's my transaction and I'm referring to some data here that you
can find in the sidecar. And then validators, when they're validating a block,
go look for this data on the side, but no one's regulating that. And it's actually something
that's going to be a bit funny when 4844 launches, is that there
currently isn't enough usage in L2s to even fill up all the space. So there's basically going to be
free, temporary storage in these blobs. So I expect all sorts of creative things to
start happening again, going back to the old days of people trying to store full images
and that kind of thing in these blobs, because it's going to be so cheap and there are no
constraints on what gets put in there.
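The transaction-plus-sidecar split Carl describes can be sketched as follows. The versioned-hash shape (a version byte plus a truncated SHA-256 of the commitment) follows EIP-4844, but the class names and fields here are simplified assumptions, not the real transaction format:

```python
import hashlib
from dataclasses import dataclass

VERSIONED_HASH_VERSION_KZG = b"\x01"

def versioned_hash(commitment: bytes) -> bytes:
    # EIP-4844 style: version byte + last 31 bytes of sha256(commitment).
    return VERSIONED_HASH_VERSION_KZG + hashlib.sha256(commitment).digest()[1:]

@dataclass
class BlobTx:  # simplified; real type-3 txs carry fee fields, signatures, etc.
    to: str
    blob_versioned_hashes: list  # references into the sidecar

@dataclass
class Sidecar:  # travels adjacent to the block, not inside it
    blobs: list
    commitments: list

def validate(tx: BlobTx, sidecar: Sidecar) -> bool:
    """A validator checks that every hash the tx refers to matches a
    commitment whose blob really is present in the sidecar."""
    if len(sidecar.blobs) != len(sidecar.commitments):
        return False
    return all(
        versioned_hash(c) == h
        for h, c in zip(tx.blob_versioned_hashes, sidecar.commitments)
    )

commitment = b"\x00" * 48  # stand-in for a real 48-byte KZG commitment
sidecar = Sidecar(blobs=[b"rollup batch data"], commitments=[commitment])
tx = BlobTx(to="0xRollupInbox", blob_versioned_hashes=[versioned_hash(commitment)])
assert validate(tx, sidecar)
```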
Cool.
How did you guys come up with the two weeks window?
So why is this a sweet spot in terms of trade-off between storing unnecessary data for
too long and keeping data around for just long enough?
Ah, so that's actually an interesting one.
I'm actually not sure it's two weeks.
Because I've been focusing so much on the ceremony stuff, we'll get to in a moment.
I've lost track of exactly what the latest constants are.
But the trade-off here is basically between having it being a practical length of time
to have everyone be able to look for the data, check it's available,
download it if they care about it,
and having it not be too long that we blow up the storage for validators to keep.
But exactly what that constant is right now, I'm not exactly sure.
Yeah, I think it's still under discussion, but it's roughly around there.
Cool.
So you guys work on the KZG ceremony.
Before we talk about the ceremony itself,
what is KZG?
So danksharding requires a commitment scheme,
a way of saying, like...
because the blocks themselves that we see on the Ethereum blockchain,
that the validators look into,
don't actually have the full data.
That data is available in this blob sidecar.
We need a way of referring to that data.
So the standard way we do this in blockchains right now is just hashes.
So Ethereum uses Keccak (SHA-3) for all its hashing to point to the data.
But this doesn't meet our needs and requirements for this solution.
So the reason that's not true is that,
inside of a roll-up, for example,
your roll-up needs to be able to point to the data.
So in a block, the normal Ethereum block,
it would only have a reference to the data,
and when you're referring to it,
you need to provide the full data that you care about
alongside that transaction,
if you're trying to do a fraud proof, for example.
But we run into problems here
where the standard hash
functions like SHA-3 or Keccak are very expensive to do computations around in ZK
rollups, for example. So it would make things virtually impossible for ZK rollups. So we need
to look to other means. And this also has some really nice additional features, which you
can make use of, because there's a second thing we're doing, which is, when there's a blob
of data, we take this data and we extend the amount of data that's available. We double
the length of this data. So after we have the transactions that are in this blob, we extend
the size of this blob and we put a polynomial through it. And this polynomial allows us basically
to do erasure coding on this data, so we can recover some of it later if, say, 10% is missing.
And by doing this... the hashes would be a very inconvenient
system, because hashes aren't an algebraic structure.
So when we do this extending, it would have to be a separate mechanism, and we'd have to
have separate proofs about how this mechanism's working.
Whereas when we fit this polynomial, KZG is a system which leverages some of the arithmetic
that's available inside of elliptic curves.
And this just happens to be the same math we use for extending these polynomials.
So there's this like synergy of the commitments coming for free and linking back there.
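The extend-and-recover idea can be illustrated over a toy prime field. The real scheme works over the BLS12-381 scalar field with 4,096-point blobs; the naive Lagrange interpolation below is just a small stand-in for the same algebra:

```python
P = 97  # toy prime field, standing in for the BLS12-381 scalar field

def lagrange_eval(points, x):
    """Evaluate the unique polynomial through `points` at `x`, mod P."""
    total = 0
    for xi, yi in points:
        num, den = 1, 1
        for xj, _ in points:
            if xj != xi:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

data = [13, 2, 55, 31]               # a tiny 4-point "blob"
known = list(enumerate(data))        # evaluations at x = 0..3
extended = [lagrange_eval(known, x) for x in range(8)]  # 2x extension

# Lose half of the extended data: any 4 of the 8 points still
# determine the degree-3 polynomial, so everything is recoverable.
surviving = [(x, extended[x]) for x in (1, 3, 4, 6)]
recovered = [lagrange_eval(surviving, x) for x in range(4)]
assert recovered == data
```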
The short version of the response is, KZG stands for... it comes from the names of the authors of this specific commitment.
Kate, or is it Katte?
Zaverucha and Goldberg.
It's Kate, yeah.
There we go.
But yeah, that's my contribution there.
But Carl got the technical bit out of the way.
So basically, to kind of paraphrase this,
basically there's different ways of compressing data that we've used before,
and they don't need a trusted setup,
but for various reasons, they're unsuitable for this job.
So basically, we have decided to move to a different one, which as a drawback has the trusted setup, right?
That's a pretty accurate summary.
Cool.
So let's talk about how... so you said the KZG commitment kind of compresses the data.
Basically, it kind of runs it through a polynomial.
So I assume there's a limit to how much you can compress
with one transaction.
So I think compresses here is not quite the right word.
Or unless you want to interpret as a very lossy compression.
Yeah, it's a very lossy compression, yeah.
Exactly, okay, okay.
It really is a commitment scheme here.
But we do have limits on how large this polynomial can be.
And that's actually like a fundamental limit.
Like, how large you set this polynomial to be affects how much data we can commit to,
which affects how big this trusted setup that we're going to run needs to be.
It's like this long compounding thing.
So it's a very important parameter.
So how have you set this parameter?
So the answer is sort of we have and sort of we haven't.
But I'll get into that in a sec.
So the idea is basically taking what
the limits are for nodes in terms of networking and storage currently,
for validators and nodes, and using this as sort of an upper bound on where we can set
the amount of data requirements, and sort of working backwards from there.
Of course, we don't know how to do this without, well, I mean, we don't know.
There's no data collection.
So there have been some experiments run to try and understand where all of this is.
But the long and short of it is we've settled on this number of 4,096 of these points,
and that will be then the maximum size of a single blob.
But because we haven't fully decided, like, that's what you decided now,
and those are the numbers that will be in 4844,
but in the future we don't actually know what it could be,
particularly with full dunk sharding down the line.
And so the ceremony we're running is actually four sub-ceremonies.
It's opaque to you as a participant, but there are actually like four little ceremonies inside,
where you calculate it with several lengths of this polynomial.
So 4,096, then 8,192.
Anyway, double that each time, all the powers of two, up to basically 2^16 total points.
The idea being, by having multiple ceremonies, we can change this
as Ethereum's needs scale down the line,
and as we have faster internet connections
and cheaper storage for validators.
So you can accommodate
up to three orders of magnitude
larger than what
we are going to deploy
as a first instance.
Yes, yes.
So super cool.
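What the four sub-ceremonies produce can be sketched on the exponent level. In the real ceremony the powers only ever exist as elliptic-curve points, so nobody sees tau itself; this toy works with bare field elements purely for illustration:

```python
import secrets

# Order of the BLS12-381 scalar field, the curve used for these KZG commitments.
R = 0x73eda753299d7d483339d80809a1d80553bda402fffe5bfeffffffff00000001

def powers_of_tau(tau, n):
    """[tau^0, tau^1, ..., tau^(n-1)] mod R. In the real setup these are
    published as curve points tau^i * G, which hide tau."""
    out, acc = [], 1
    for _ in range(n):
        out.append(acc)
        acc = acc * tau % R
    return out

tau = secrets.randbelow(R - 2) + 2  # the "toxic waste" secret
# One sub-ceremony per blob-size option: 2^12 up to 2^15 points.
setups = {n: powers_of_tau(tau, n) for n in (4096, 8192, 16384, 32768)}
assert len(setups[4096]) == 4096 and setups[8192][1] == tau
del tau  # "throw away" the secret after contributing
```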
Trusted setups.
It's a fascinating topic, isn't it?
So basically the first time I kind of came across this was Morgan Peck's telling of the Zcash ceremony, which I linked to in the show notes, just because it is so entertaining. So basically people met up in this run-down motel, and they kind of disconnected everything, and then they used brand-new hardware bought with cash and destroyed it afterwards. And when you listen to it, it sounds crazy, but at the same time it also sounds very rational. So since then... it's been a while, so what has happened in trusted setups since?
There have been quite a few. Obviously, Zcash was pioneering, and that's part of why their ceremony was so exciting. And so I've listened to the same story last year
when I was doing my research and getting caught up on this stuff. And it was really, I don't
feel like they dramatized much. It was a really incredible story. And that's what a lot of people
think of when they hear trusted setups or set up ceremonies. But there's been many, many
setups, especially, I mean, in crypto is what we're focused on.
And I think most of them have been powers of tau of varying sizes.
Filecoin did a pretty large one.
Celo did one.
A couple different organizations within Ethereum have done trusted setups for a number of different things.
So there have been quite a few.
Carl, you can jump in if I'm forgetting any.
But I think one thing that's been consistent over time is that people are more and more
comfortable. There's greater understanding and education as more and more of these ceremonies take
place, and people understand the mechanics and why they're important and how they operate.
And I think Zcash also, because they were so early, they didn't have the benefit of what we
have now, which is, like I said, the awareness, the education and the mechanics for doing
these sorts of things. I don't know the specific technical bits, but Zcash had to do quite a few
things manually, given they were doing a lot of this for the first time, but we're fortunate
to be a couple years down the road, and we can build on a lot of what they've done in the past.
Yeah, I mean, basically, if you can at all help it, you never want to do a trusted setup.
Trusted setups add attack surface and complexity. In the Zcash instance, there were concerns about whether it had gone correctly the first time, due to all sorts of fun and interesting things. And then we learned later down the line that there was a bug in the Zcash setup, which they had to
address in some interesting ways, and they have this turnstile and this whole upgrade process
to handle this. And so you ultimately want to avoid it. What I think is different here,
from that, and the reason we're less concerned about it, is the complexity of the setup.
There, the computations and the amount of data you had to handle were gigabytes large. You had to
keep the secret around for a while. The trusted setup is about establishing a secret,
and there was a secret that everyone had to keep around for a while while they were doing these computations.
And ultimately, it only scaled to, I believe, six people in their first setup,
which means that it was a very small and very trust-us kind of thing.
And over time, we've seen this transition to, A, simplifying what's needed out of trusted setups,
so that you don't need to depend on a small group of people, and, B, scaling out to make it easier to contribute,
so that the trust base is a much larger group of people.
So now it's 2023.
When you design a ceremony these days,
what kind of parameters can you tweak?
Or basically what are the design decisions that go into the ceremony?
So I can give a higher level overview,
but we're fortunate in that we have the ability to use
parameters which have pretty low requirements.
Like Carl mentioned, for the Zcash ceremony, it was gigabytes; for the Filecoin ceremony,
I know it was tens or maybe even 100 gigabytes.
These are not trivial to pass around, especially between different countries.
If it's going to somewhere with some sort of national firewall, it's really hard to pass around
that amount of data.
We're fortunate to have a very, very small
ceremony, and sort of everything follows from the very lightweight requirements. We don't have a
ton of data that needs to be passed around. The computation is pretty light. It can be browser-based.
You know, we're lucky in a sense that we can work from such very light requirements.
But basically, what became lighter? So I myself took part in the AZTEC Ignition ceremony.
And you had to download the Docker image and run it. And basically,
check the checksum and kind of make sure you're running the right thing.
And then it still took like 10 hours to run on a top of the line MacBook.
So what's become easier?
So I mean, it's not that ceremonies have become easier to run.
It's that our particular use or what we need out of it is very simple.
So we need the most basic thing, which is, which mostly ceremonies generate,
sometimes called a phase one, and that's powers of tau.
So as I mentioned earlier, the commitment thing relies on polynomials.
And if you think of a polynomial, there's some term X and you have X squared, X cubed,
whatever, up to higher powers.
These are the powers of tau that we'll be referring to later.
There's some secret, and we need to know the secret, the secret squared, cubed, etc.
And fundamentally, we just require very few points here.
We only require, as mentioned earlier, 4,096 points for the base 4844 setup,
which is really nice from a complexity standpoint.
That really helps scale things down; the file is now tiny.
And then the second thing is that, because we only need these powers,
we don't need further computation.
And there are some additional principles that basically come in,
because the requirements of KZG are lower than those of some of these other ZK setups,
that just allow us to have this very basic computation.
For example, in lots of trusted setups,
you do the trusted setup,
and then you prove with a zero-knowledge proof
that all the computation you did was correct,
which adds many orders of magnitude of complexity
on top of what can already be a lot of compute to do.
Whereas in this case, it's so simple that we don't even need to do
a zero-knowledge proof.
I mean, you could view it as a bespoke zero-knowledge proof,
but you basically need to verify
a few pairings to check that this is all correct.
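The pairing check Carl alludes to can be sketched structurally. A real check needs pairings on BLS12-381, where the exponents stay hidden; this mock represents group elements by their exponents, so the bilinear property e(g^a, h^b) = e(g, h)^(ab) becomes plain multiplication. It shows the shape of the verification, not real cryptography:

```python
# Order of the BLS12-381 scalar field.
R = 0x73eda753299d7d483339d80809a1d80553bda402fffe5bfeffffffff00000001

def pairing(a, b):
    # Mock bilinear pairing on exponent-level "group elements".
    return a * b % R

def verify_powers(g1_powers, g2_tau):
    """Check every consecutive pair of powers shares the same ratio tau:
    e(tau^(i+1), 1) == e(tau^i, tau) for all i."""
    return all(
        pairing(g1_powers[i + 1], 1) == pairing(g1_powers[i], g2_tau)
        for i in range(len(g1_powers) - 1)
    )

tau = 7  # unknown to a real verifier, who only holds tau on the G2 side
powers = [pow(tau, i, R) for i in range(16)]
assert verify_powers(powers, g2_tau=tau)

powers[5] = 42  # corrupt a single point...
assert not verify_powers(powers, g2_tau=tau)  # ...and the check fails
```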
I took part in the test ceremony earlier this morning
just to see what it's like.
And I can confirm that it is really easy.
You type in like a couple of characters,
you move your mouse a bit, you sign an Ethereum message,
and you're done.
It literally takes 30 seconds.
So how many people are you aiming to have participate in this ceremony?
We'd like to have at least 10,000.
I think Carl's goal is something related to the powers, but he can be more specific there.
But yeah, we're hopeful that it'll be the largest setup ceremony of this kind.
Again, it's not anything special that we're doing.
We're just very fortunate to have very low requirements in terms of compute and bandwidth.
And yeah, so we're hoping that it will be the largest ceremony of this kind.
10,000 is sort of our happy case, and maybe it'll be even more.
We'll see.
We've got two months to fill that up.
But yeah, Carl, what was it related to, a number of powers or something you want it to be higher than?
I have this silly notion where I would like there to be more
people who've participated than the total number of powers in the ceremony, like the number
of points we need to calculate. This is not for anything particularly meaningful; it doesn't have a
cryptographic benefit or whatever. It's just such a ridiculous idea to me that we have
this whole trusted setup, that there's all this computation for, and in the end,
the file that just refers to the people who've contributed is going to be bigger than
the actual output of the ceremony.
The list of people who've joined is longer than the output we care about.
It's purely a vanity metric.
Absolutely.
But there is a reason that's important, right?
The reason the name trusted setup exists is that you need to trust that this was done correctly.
And that the secret hasn't been stored because if the secret has been kept around,
then you can do things like breaking the ceremony.
In a ZK setup, you can often prove things that aren't true.
And in our case, you could commit and reveal to data that wasn't initially agreed upon.
So you could change the data that the roll-up see.
So that's really bad.
And what you need to trust is that this ceremony went correctly.
And the way these trust assumptions look is that you need at least one person to have honestly done their job.
And honestly here means you need to have generated some randomness.
done the computation and then thrown away the randomness without storing it or without publicizing it.
And so in the initial Zcash setup there, if you only have six people, then it's really
hard to convince others, because you have such a limited set. So by having more and more people,
you don't need to trust one person or some limited number of people. We have thousands and thousands
of people, and you just need to, hopefully, have one of them that you can trust.
Or maybe a few of them, where maybe you don't trust any individual,
but you trust all of them a little bit, kind of idea.
And this is where we build up the security assumption,
which is why we care so much about having many people participate.
So short of everyone colluding at once
and kind of storing the secrets and kind of revealing the, you know, the mega secret,
what needs to go wrong for it to break?
So, well, I can talk a little bit about the bigger
picture stuff. So like Carl said, we want as many people as possible participating in this.
Because these things only require a single honest participant, once you're
past that point... like, if you've contributed and you're like, okay, I'm one of 10 people,
I know for sure that I'm not behaving maliciously. Once it gets to one out of 100 or
one out of a thousand, you're improving the credibility of the ceremony in
degrees. So there's no threshold you need to reach. It's just: more is better. So the more
the merrier, really, at the end of the day. Like many things in blockchain, these
setup ceremonies are coordinated; they're public rituals about building consensus. And that means
consensus around how the ceremony was operated, whether it was openly accessible,
whether the output of the ceremony seems credible.
All of these things go into making a successful ceremony.
Like I said, after you get your participation included, or, you know, a reasonable
number of people, with the security assumptions there's no binary threshold we need
to cross over.
it's about convincing both the people who participated in the ceremony and then the future users
or protocols that will leverage the KZG commitments in the future.
So it's not that we're just convincing the single person who participated, but it's also,
you know, in 10 or 20 years, whenever we do another setup, if we do another one in the future,
that block of time has to have sufficient credibility to all of these people:
that the ceremony was conducted openly.
People could participate how they saw fit.
And it really is about convincing everybody together that, okay, do we agree that this was good enough?
Okay, we'll use the output of the ceremony.
Yeah.
And that's where the reference to trust comes in, maybe a little bit different than the original conception. Because a lot of people hear trusted setup and they think of, you know, this romantic cyberpunk Zcash story, which is amazing. But it really was a trusted setup in that case. Like I've touched on before, we're fortunate to have a setup where we don't have to rely on such a small group of individuals. But trusted setup is a bit of a misnomer in this sense, in that we don't have to trust anybody, because it's very, very accessible.
You know, it takes, like you said, 30 seconds in the browser.
You don't have to even download software if you don't want to.
Obviously, there will be many different avenues to do this.
But it is a, there is a bit of an education deficit that we have to help people catch up with in that, you know, this isn't six people in a hotel room.
This is tens of thousands of people around the world who can do the computation in under a minute in their browser.
So they're obviously in the same family in terms of setups, but they do have significant departures in how they actually get implemented.
So for us, we've been using summoning ceremony because we're summoning this random output, this random number.
Trusted setup is still what a lot of people know.
That's the term they're familiar with.
But yeah, we've definitely leaned on summoning ceremony to communicate what this is, you know, broad participation with the intent of producing this public ritual in the form of a ceremony that goes for a few months.
Cool.
So there is one entity called the sequencer that kind of moves from participant to participant in a way. So the sequencer sees each contribution and verifies that it's still a correct state. That does give them an unrivaled inside view, you know, at the state after every commitment.
Is that somehow exploitable?
No, the sequencer's role, like its name kind of suggests, is literally just to decide who's next. So if we have so many people trying to contribute and everyone tries to participate at once, how do we decide that it's Alice instead of Bob whose turn it is now to contribute? Because this is a fundamentally sequential thing. It's not parallelizable.
So you ask the sequencer like, hey, can I participate now? And if there's a free slot, the sequencer will be like, yeah, sure, here's the file. And then you go off on your own with the file the sequencer just sent you. You do a bunch of calculations that combine your randomness with the randomness of the people that came before you. And then you send your file back to the sequencer. And then the sequencer checks that you didn't, like, try to delete someone else's secret or do other funny things there, and if everything checks out, it'll send your file on to the next person.
So the output that the sequencer has is they can basically see what the ceremony looked like after your contribution. But the funny thing is that this is what everyone sees. So it's not like the sequencer has access to more data that could be used to break it. All this data ultimately gets stored, and the sequencer will just give it to you, like, for the entire history of the ceremony.
And part of it's required to verify
that the ceremony ran correctly after the fact.
So the sequencer doesn't have any more insight
into this information.
What the sequencer does have is a little bit more
control in deciding who goes next
or it could theoretically prevent someone from going next.
So you could say, like, hey, can I have a turn now? And they can be like, no, someone's busy. And they could just, like, stall you and prevent you from participating.
Or, counter to that, you could do all your computation, you get the file, all's happy, you send it back to the sequencer when you're done, and the sequencer's like, oh, sorry, you made a mistake, and it just rejects your output. So that's sort of the additional power the sequencer has.
Which sounds like this is not great.
We don't like having single entities in decentralized systems, which have more power than others.
But in this case, it's a little different in that if the sequencer does something like this, their fault is attributable.
So if they say, like, oh, your file wasn't correct, they'll send you the message saying your file wasn't correct back.
And this message is signed by the sequencer.
So you can then take this and say, like, you can provide this to someone else, and they'll be convinced that the sequencer was falsely rejecting your file.
Or if the sequencer, say, won't let you participate in the first place, then you can do something like use the way you prove who you are to the sequencer. You sign in with Ethereum, using an Ethereum account to say, like, hey, this is me. I am, in my case, carlbeek.eth. Maybe the sequencer tries to censor me, so I could just take another Ethereum account and try to do the same thing, and sign in there with something that's not tied to my identity, such that the sequencer couldn't censor me. So if you're worried about these kinds of things, there
are ways of getting around it. And ultimately, if these are major concerns that we see, then we want to stop the ceremony, investigate why this went wrong, and start the whole thing again.
When people start complaining that, you know,
they can't participate, I mean, obviously they might be vocal about this and basically they
raise scrutiny.
Yeah, so it depends what that looks like, right?
If it's some crazy person trying to shill their coin over Ethereum and thinks Ethereum needs to burn, then that's a whole other thing. But, like, if it's someone who has a reasonable claim to this being true, or we see it from a few trustworthy people, particularly if they can provide these certificates which prove the sequencer has been lying or cheating, then we really need to investigate what's gone wrong here.
In practice, the way we're trying to avoid all of this is that we've put the sequencer through some extensive audits. We had the ceremony itself audited once before, and then the sequencer specifically audited to, hopefully, find that none of these edge cases could happen. But ultimately, it's only by actually running it that we'll find out whether it has gone the way we intended.
Okay, so basically now the sequencer says, Friederike, it's your turn, and I do the maths.
It's really difficult to talk about complex maths without a blackboard or, you know, slides.
Maybe, let's, can I give you a way of describing what I think I have to calculate, and you tell me whether that's correct? So basically, I generate randomness somehow in my browser
by kind of moving my mouse and entering some characters and so on. And it's a number, right? And then
basically I take that number to different powers and put it into an elliptic curve. And then basically the 4,000-whatever, I mean, that's kind of the powers of tau that you were talking about earlier, right?
So basically those 4,000 numbers, that's the thing that I pass back to the sequencer.
Is that correct?
Yeah, that's pretty much correct.
The stage you described as putting it into an elliptic curve,
what you're doing there is you're mixing it in with the people who previously participated.
So the sequencer is going to give you 4,096 elliptic curve points.
The first one represents, like, the secret to the power of zero, the next the secret to the power of one, et cetera, et cetera, in increasing powers.
And then you generate your own secret and calculate its powers. And then you just multiply them in. And that's how you combine your secret with the other ones.
And that's what your contribution looks like.
Plus an additional point which basically just proves that you did all this correctly.
It allows people to verify that you updated from the previous secret to your secret correctly.
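The update Carl and Friederike just walked through can be sketched in a few lines of Python. This is a toy model only: it uses modular exponentiation in a plain multiplicative group as a stand-in for the BLS12-381 elliptic-curve points the real ceremony uses, and every name here (`P`, `G`, `update_transcript`, the transcript length of 8 instead of 4,096) is illustrative, not the real client's API. It does show the key algebraic fact: after everyone has contributed, the transcript is the powers of the *product* of all the secrets, which is why destroying your own secret is enough.

```python
# Toy powers-of-tau update. Modular exponentiation mod a prime stands in for
# elliptic-curve scalar multiplication; names are illustrative only.
P = 2**61 - 1   # a Mersenne prime; group is the nonzero residues mod P
G = 3           # fixed base element, standing in for the curve generator
N = 8           # transcript length (the real ceremony uses 4096 points)

def fresh_transcript():
    # The starting transcript: secret = 1, so every point is G^(1^i) = G.
    return [G for _ in range(N)]

def update_transcript(points, secret):
    """Mix `secret` into the transcript: point_i -> point_i ** (secret**i).

    Since point_i = G**(s**i) for the combined prior secret s, the result is
    G**((s*secret)**i): the combined secret is the product of everyone's."""
    q = P - 1  # exponents can be reduced mod the group order
    return [pow(pt, pow(secret, i, q), P) for i, pt in enumerate(points)]

# Two participants contribute in sequence...
t = fresh_transcript()
t = update_transcript(t, 123456789)
t = update_transcript(t, 987654321)

# ...and the result is identical to one contribution with the product secret,
# so recovering it would require every participant's secret.
combined = (123456789 * 987654321) % (P - 1)
assert t == [pow(G, pow(combined, i, P - 1), P) for i in range(N)]
```

The extra proof point Carl mentions (showing you updated from the previous transcript, not from scratch) is omitted here; in the real setup it is a pairing check, which this toy group cannot express.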
Super.
So I love how Zen powers of tau sounds, by the way. I think it's, uh, nicely named. Very cool. But that kind of leaves the question: how do I then destroy my randomness, right?
Because you guys rely on the fact that people afterwards get rid of their randomness,
because basically if it's stored locally on like 10,000 computers, um, obviously that's a nightmare.
So there's a couple different ways. Yes, of course, we want people to not keep the randomness. One way to safeguard against that happening is by having many, many participants, which we're able to do because each contribution is so small.
But we do, yes, want to prevent people from, or strongly encourage them not to, keep this around, because that could compromise the ceremony if, you know, somebody spins up some bots and tries to influence the credibility of the ceremony.
So there are three randomness components that go into this, which are combined.
Two of them, which you mentioned.
You're moving your mouse around and the browser is taking snapshots of where the mouse is at certain bits of time.
You're also typing something into a little text field.
And we suggest that the users include some random characters.
And we don't show this to people.
So it's masked like kind of how you would enter a password.
So those are the two that the user inputs.
And then the final third one is the browser generates randomness on its own locally.
And then all three of these are combined.
And that's what's used as the entropy.
That's what you do the computation over.
So if somebody were to record all three of these, well, first they'd have to be digging into the browser to extract that randomness.
The other two are maybe a little more cosmetic
because the users are entering it themselves
and we already have this browser randomness as a backup.
But they do add a little bit of entropy themselves,
but at the end of the day, it's backstopped by this browser randomness
that you'd have to try pretty hard to dig into
and actually extract from the browser itself.
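The three-source scheme Trent describes can be sketched like this. It's a hedged illustration, not the real client's key derivation: the function name, the SHA-256 choice, and the exact byte encoding are all assumptions, but the structure (two user-supplied sources hashed together with a CSPRNG backstop) matches the description above.

```python
import hashlib
import secrets

def combine_entropy(mouse_samples, typed_text):
    """Hash three entropy sources into one secret integer.

    Illustrative only; the real ceremony client's derivation may differ.
    """
    h = hashlib.sha256()
    for x, y in mouse_samples:                 # source 1: pointer snapshots
        h.update(x.to_bytes(4, "big") + y.to_bytes(4, "big"))
    h.update(typed_text.encode("utf-8"))       # source 2: masked text field
    h.update(secrets.token_bytes(32))          # source 3: system CSPRNG backstop
    return int.from_bytes(h.digest(), "big")   # 256-bit secret

secret = combine_entropy([(102, 37), (110, 41), (119, 48)], "some random characters")
```

Because the CSPRNG backstop is mixed in unconditionally, even identical user input yields a fresh secret each time, which is exactly the "you'd have to dig into the browser" property described above.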
How is it deleted afterwards, though? Because in principle, you could probably save it.
You could.
And that's, again, where, like, that's what we define as participating honestly: that you don't try to save it. But ultimately, if you're participating via the client at ceremony.ethereum.org, as soon as you've handed the file back to the sequencer, the browser will just delete it itself. And it's a little bit different to, like, this "we need one honest person" framing, in that if we have 10,000 participants, we'd need 10,000 people to open up their browser, take apart the code, figure out how to get the browser to save the secret for them, and then save that secret. Then they'd all need to get together and publicize or communicate their secrets with each other such that the Uber secret can be calculated. So we require a pretty decent amount of technical competence and malice on the part of literally every participant in order for this to be an issue.
You need at least one honest or lazy or stupid person to participate.
Hopefully all three.
The other thing which maybe we'll get into later is we will be running a special contribution period
where we'll have maybe more elaborate participation mechanisms and ways of generating and storing randomness.
Maybe we can get into that, but there will be verifiable, or, like, more documentation as to how they generated and then destroyed their secret. But that's separate from the general contribution period.
Okay, and the general contribution period,
is it first come, first serve,
or do I kind of have to book a slot?
So, yeah, typically with larger ceremonies,
because there's this significant data that they have to pass around,
and you want to fit as many contributions as you can
into a limited time period,
whether it's a month or two months,
yes, you would have a slot. You'd have to choose a slot and sign up in advance.
They would send you some sort of token,
and then you would check into whatever hosted interface is passing you the latest version of the computed data.
We do not have slots.
We have a general lobby, and then people or accounts are picked at random from the lobby.
So it's a little bit nicer that you just show up.
The tradeoff is that you don't know exactly when you're going to get included.
So the lobby could have thousands of participants and we'll just tell you,
maybe you should come back another time.
But the good thing is it'll go for at least two months.
So hopefully, you know, if the lobby is full for two months with thousands of participants,
that's both, you know, that's a good problem, I guess, because we're going to have, you know,
I think we'll definitely get to many, many tens of thousands.
But some people may not be able to fit in.
So maybe we'll get to that problem when we get there.
But it's just the lobby. Carl, you probably have some more specifics on what the lobby actually is and how it's implemented.
Yeah, so basically you check in with the sequencer first. You're like, hey, I'm Carl or Friederike, whatever. The sequencer's like, oh, welcome. And then you're like, oh, hey, can you give me the file? And the sequencer's like, nope, sorry, someone else is busy with it. And everyone's just asking, like, hey, what about now? Is the file ready? And as soon as the file is available, the sequencer's like, oh, yeah, sure, here you go. So it has this random element, where if you keep asking the sequencer, hey, can I have the file now, eventually the file will be free and you'll get allocated. It's also, like, one of the things with time slots or a queuing mechanism is, if that queue is six hours or 10 hours, is your computer still going to be on then? All sorts of additional questions, where here we can sort of avoid all of those, because if your computer's asking, hey, can I have the file now, that means your computer is online and ready to do these computations. So it works really well, both from the simplifying-things standpoint and from just allowing many, many people to contribute.
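The lobby flow Carl describes can be sketched as a simple polling loop. Everything here is hypothetical: `FakeSequencer`, `request_slot`, and `submit` are stand-ins invented for illustration, not the real sequencer's API, and the real sequencer also verifies the update proof before accepting a contribution.

```python
import time

class FakeSequencer:
    """Tiny in-memory stand-in for the real sequencer (illustrative only)."""
    def __init__(self, transcript):
        self.transcript = transcript
        self.busy = False          # is someone else holding the file?

    def request_slot(self, participant_id):
        if self.busy:
            return None            # "nope, sorry, someone else is busy with it"
        self.busy = True
        return list(self.transcript)

    def submit(self, participant_id, new_transcript):
        # The real sequencer also checks the contribution proof here;
        # this toy version just accepts the file and frees the slot.
        self.transcript = new_transcript
        self.busy = False
        return True

def contribute(seq, my_id, mix_in, max_polls=100):
    """Keep asking for the file; when a slot frees up, compute and hand it back."""
    for _ in range(max_polls):
        file = seq.request_slot(my_id)
        if file is not None:
            return seq.submit(my_id, mix_in(file))
        time.sleep(0.01)           # the real client would back off politely
    raise TimeoutError("lobby stayed full")

seq = FakeSequencer([1, 2, 3])
assert contribute(seq, "alice", lambda f: [x * 5 for x in f])
```

The nice property this models is the one Carl points out: a client that is actively polling is, by construction, online and ready to compute the moment it is granted the file, so no pre-booked slot can go to waste.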
What about the special contingent that you just alluded to, Trent, the special contribution period?
Yeah. So given we only need to trust one person, this is, or we only need a single honest participant, this is one way that we can take the sort of theatrics and the public perception of the ceremony to another level.
So we're running a grants round, which will support people who want to either write their own implementation or, as you mentioned, do some sort of unique or special contribution.
One of the famous ones is, I believe it was for the first Zcash ceremony: they took an artifact, like a piece of cloth, from Chernobyl.
They took it up into an airplane and used a Geiger counter to measure the radiation that was coming off of it.
Recorded that, and because they were in an airplane, you know, it's, I don't know how you would compromise the data they're recording while they're in the airplane, 3,000 feet in the air.
but this is sort of the elaborate
fun
data contribution that we would
like to see in the future. Obviously not everybody will be in an airplane, and not everybody is going to be able to get radioactive material from Chernobyl, but
this special contribution period will happen after
the first two months. We'll take a bit of a break, turn off
the hosted interface and have
people who, you know, want to do some sort of special event that they planned with their local Ethereum community, or someone who has a specific niche interest
outside of crypto that they can somehow incorporate. These are the kinds of things we're looking
for. And so, yeah, applications for this are now open and we're giving grants for people
who have interesting ways of generating entropy and then storing it. And like I mentioned earlier,
this is also part of the project: recording it, or documenting it. Not recording the secret, but documenting the entire process of, okay, here are the steps I went through to generate the randomness, here's the way I discarded it, and here's the way I, you know, destroyed the computer or something.
But the average member will just contribute through an interface.
They won't have, you know, it'll just be in the browser.
But we'll have these very elaborate, fun contributions where we can then see, you know, documentation for how they actually planned it, how they set it up, how they generated and recorded and discarded the secret.
Super cool.
I look forward to kind of seeing what kind of entertaining things people come up with.
One last thing about KZG commitments, they are quantum vulnerable, right?
So if I had a quantum computer, I could break it.
Yeah.
So what happens when we build one?
It's a matter of scale. Right now, most of the cryptography we use in the blockchain context is quantum vulnerable. We have many components we can swap out: you take some cryptographic scheme that would be broken, and you can replace it with one that wouldn't. And the same would be true for KZG setups. Unfortunately, all these really nice properties I alluded to earlier, those fall away.
Like, we don't have any post-quantum solutions that allow us to really easily prove these inside of SNARKs, etc. But I guess an additional problem is that most SNARKs will break as well, because they also rely on these arithmetic assumptions.
So there are many cascading levels of problems, and then ultimately we would have to switch to a hash-based system to provide these commitments.
So, like, there are mechanisms we could put in place, but they would be very ugly.
And, yeah, possible, but hopefully we still have a few years before we reach that point.
Yeah, I think it's like 20 years in the future or so.
But, I mean, it's always good to kind of think about these things ahead, right?
So, I mean, KZG is only a part of 4844. So there's different things that also have to go into it. So I assume it's a prerequisite, but as soon as it's done, 4844 might not be done. So how long after the ceremony's over do you think 4844 will be live?
I had unmuted to answer the question,
but actually, I don't know if I can answer that specifically.
A rough deadline possibly, or, like, what people expect?
So let's see.
So we're hoping to start very soon.
As you said, it's a prerequisite, but it'll be done.
Let's just say, I'm pretty confident that it'll be done to our satisfaction before 4844 is ready. Hopefully sometime mid-year. Maybe that's, is that general enough or specific enough?
That's probably as specific as I would guess, roughly,
around that time.
It basically comes down to how much time devs have to work on implementing 4844, and where we are in all of this testing. There's the withdrawal fork, which is going to be shipped beforehand; that's the next upgrade. And then after that will be 4844. It's very hard to answer exactly when that will be, but as Trent said, something like the middle of this year, I think, is a reasonable answer.
And then the idea is to have the ceremony done.
We've mentioned this two-month time frame, which is where we have lots of contributions.
And the idea there is that's long enough for everyone to participate, and it's about as short as we could reasonably expect, even if 4844 were to ship in some, like, magic world where somehow we have infinite dev power, and whatever, something like that.
Realistically, that's not what the world looks like, but we want to be prepared for that case.
After those two months, we'll have the general contribution period where we'll have all these fun, weird, wacky contributions.
Special, special, special contributions.
Special, sorry, the special contribution period. We'll have all these weird, wacky contributions.
And then after that, we will just go back to allowing normal contributions again until 4844 is ready to ship. Because this is the kind of thing where we can say, okay, cool, 4844 is ready to ship in a month, so in a week's time we'll shut down the ceremony, kind of thing. It's all rough timelines, by the way. But something like that: you say, okay, cool, now 4844 is ready, so we can shut down the ceremony. And ultimately, the ceremony can run as long as 4844 is in progress, to try to have as many people participate as possible.
Yeah, exactly.
So how can people learn about all of this?
So if people want to build a client for the ceremony, or just participate in the ceremony, or be a special contributor and video themselves,
Where do they go to find out about all of this?
Easiest place to start is probably ceremony.ethereum.org.
That'll be sort of the home base for a lot of different bits of information.
And then the hosted interface can also be accessed through there.
We'll also have some IPFS versions.
But yeah, that's where I would direct people.
If they're interested in writing an implementation, there are some links out from there.
You can also go directly to the ethereum.org blog, which has the post and the explanation of what sort of grant applications we're looking for, whether you're writing your own implementation or doing a special contribution, yeah.
Cool.
And what's next for the two of you once this has all, you know, gone smoothly and 4844 is deployed?
I don't actually have a reasonable answer to that.
This has been filling up so much of my time, just trying to ensure that this runs really smoothly and that this is all secure, that I don't know what my next project is.
I kind of specialize in doing these
research projects which are very much related
to the core protocol of Ethereum
but also a little bit adjacent
and so I will probably see what the next project is alongside Ethereum that needs to be done.
For me, it's probably, I mean, there's always the general coordination work of getting people, stakeholders, engaged in network upgrades, and helping people understand how the network is evolving or where it should go. So that work is always ongoing, so I'll probably just keep doing that. But another major project is Protocol Guild, which is a collective of individuals who are working to fund core protocol work outside of traditional funding sources. That's probably what I'll start to focus on once this is done. Or at least, you know, kicked off.
That sounds super interesting.
Where can I find out more about the Protocol Guild?
The Twitter handle is Protocol Guild, so that's probably a good place to start.
There's some links you can dive into, and then I can probably just, I'll message you some stuff after this.
Cool.
Fantastic.
Thank you both for coming on.
I look forward to participating in the real ceremony.
and let's see whether I can find, you know, a special enough way to kind of get through the grants process.
Yes, definitely.
We would love to have you apply and maybe do something fun.
Thank you for hosting us.
I really appreciate it that we get to share this project we've been working on for so long since like mid last year.
And finally, it's almost ready by the time this comes out.
I don't know what your editing turnaround is, but the ceremony will probably be live. Oh, it'll be out this week? Oh, okay, maybe not. What part of the week? But it'll be very close. Either way, this is great for just getting the
word out there and letting us share sort of the framing of the ceremony and sort of the stuff
that went into it. So thank you again. I appreciate you having us. Thank you for joining us on
this week's episode. We release new episodes every week. You can find and subscribe to the show on iTunes,
Spotify, YouTube, SoundCloud, or wherever you listen to podcasts.
And if you have a Google Home or Alexa device, you can tell it to listen to the latest episode
of the Epicenter podcast. Go to epicenter.tv slash subscribe for a full list of places where you can
watch and listen. And while you're there, be sure to sign up for the newsletter, so you get new
episodes in your inbox as they're released. If you want to interact with us, guests,
or other podcast listeners, you can follow us on Twitter. And please leave us a review on iTunes.
It helps people find the show, and we're always happy to read them.
So thanks so much, and we look forward to being back next week.
