Epicenter - Learn about Crypto, Blockchain, Ethereum, Bitcoin and Distributed Technologies - David Schwartz & Jordi Baylina: Polygon zkEVM – From Circuits to Mainnet - Part 1
Episode Date: March 10, 2023

With the upcoming launch of Polygon's zkEVM mainnet, L2s are undoubtedly a huge narrative that is unfolding in 2023. It is time to take a step back and recognise the incredible amount of effort that has gone into researching and building L2 scaling solutions in general and ZK rollups in particular. Tech-wise, we have barely scratched the surface of general-purpose processors that can handle ZK circuits, so there is still tremendous room to grow. As layer 2s become more efficient, so will the applications that depend on blockchain throughput.

We were joined by Jordi Baylina (tech lead) & David Schwartz (CTO) from Polygon's zkEVM to discuss the different milestones & challenges they had to overcome in designing a ZK rollup, from circuits to mainnet.

Topics covered in this episode:
- Jordi's & David's backgrounds
- The idea behind Hermez Network
- zkEVM scaling & validity proofs
- How different ZK rollup models function
- What challenges zkEVM encountered during development
- Evolutionary pathways in the realm of ZKPs
- Auditing zkEVM
- Trusted setup requirement for zkEVM

Episode links:
- Jordi Baylina on Twitter
- David Schwartz on Twitter
- Polygon zkEVM on Twitter

Sponsors: Omni: Access all of Web3 in one easy-to-use wallet! Earn and manage assets at once with Omni's built-in staking, yield vaults, bridges, swaps and NFT support. https://omni.app/

This episode is hosted by Friederike Ernst & Meher Roy. Show notes and listening options: epicenter.tv/486
Transcript
This is Epicenter, episode 486, with guests David Schwartz and Jordi Baylina, the co-founders of Polygon Hermez.
Welcome to Epicenter, the show which talks about the technologies, projects and people driving decentralization and the blockchain revolution.
I'm Friederike Ernst and I'm here with Meher Roy.
Today, we're speaking with David Schwartz and Jordi Baylina, the co-founders of Polygon Hermez and the heads behind the Polygon zkEVM.
And we have so many questions.
It's going to be a very meaty episode.
But before we talk with Jordi and David about Polygon,
let me tell you about our sponsor this week.
Omni is your new favorite multi-chain mobile wallet.
Omni supports more than 25 protocols
so you can manage all of your assets in one place.
But what's really special about Omni is what you can do inside the wallet.
Want to earn yield? Omni allows you to get the best APYs
with zero fees in three taps.
Need to swap?
Omni aggregates all major bridges and dexes
so you can bridge and swap across all supported networks
in one transaction directly in your wallet.
Love NFTs?
Omni offers the broadest NFT support of any wallet
so you can collect and manage your favorite NFTs
across all chains all in one place.
Omni truly is the easiest way to use Web3
and is fully self-custodial,
meaning you never have to trust anyone with your assets
other than yourself.
And they support Ledger.
Give Omni a try at omni.app.
David and Jordi, it's such a pleasure to have you guys on.
The pleasure is ours.
Thank you for inviting us.
We are very happy to be here.
Yeah, thank you so much for having us.
This is one of those podcasts, you know, when I started in blockchain, this podcast was already there.
So it's an honor to be here too.
And thank you for still being here.
Thank you.
This is our 10th year.
So I think very few shows have actually made it to 10 years in blockchain, but yeah, we're still around.
So, Jordi, you also have a super long Ethereum history.
I remember you telling me that you entered the scene right around the whole DAO situation and the hack, and you kind of joined the white hat hackers.
Tell us about that.
Well, that was a long time ago.
For me it looks like prehistoric times, but this was my introduction to blockchain.
I was just learning, very excited about Ethereum at that point, very excited about the DAO, discovering the community, discovering the things that you could do in Ethereum.
At that point I was writing liquid democracy smart contracts for the DAO.
Well, I learned a lot, you know, I mainly learned how all this works at that time, and then the hack of the DAO happened.
I was one of the people that had checked most of the code and was very involved at that point.
So, almost without even realizing, I just got involved with the white hats.
And this was my baptism in blockchain.
It was very, very, you know, very exciting times.
For me, it's a story, you know, I don't think it's the topic of this interview, but for me it was like a movie, you know, being part of a movie: the white hat hackers, the hackers, trying to attack the hacker, the soft fork, the hard fork, and then returning the funds.
And, well, I mean, it was just fast learning about the space.
But yeah, good moments at that point.
Yeah, trial by fire.
Yeah, exactly.
Kind of, yeah.
I'm sure there's going to be a movie about this at some point.
So who do you think is going to be cast as Jordi?
Who's going to be cast as you?
Oh, yeah, I don't know.
I don't know.
I don't know.
Richard.
Yeah.
David, what about you?
You are super difficult to Google, by the way, because you have a very common name, and also the same name as the Ripple CTO.
So your background comes to me as a complete surprise.
You can tell me anything and I'll believe it.
Yeah, no, no.
I joined the space five years ago.
Jordi convinced me to start a project on identity together, and we are still working on it.
And more recently, we just started the zkEVM.
Before that, it was Hermez Network.
We joined with Polygon two years ago, and it was a big experience.
Now we are working together on the zkEVM.
I basically am leading the project in terms of execution, the coordination of the team, the follow-up, and also the product side.
So Jordi is doing the magic, and I'm trying to make this happen in every other aspect.
I'm in your boat, so I also try to make things happen in every other respect.
And it's a very hard job.
So with all respect to Jordi, but yeah, David, don't undersell yourself here.
So you guys co-founded the Hermez Network.
At the time, what were you setting out to do with it?
Well, we were just trying to scale, let's say, scale with blockchain.
The first version of Hermez was a payments-only system, but it was quite good and scaled quite well. That was Hermez 1.0. It was a very good experience.
And from there, we just realized that the important thing was to scale Ethereum, and we focused on the zkEVM from there.
The story is that we were doing the identity project at that point, and we were building a lot of zero-knowledge technology.
So at that point, we realized that this technology was also very useful for scaling.
Actually, we were trying to build a kind of rollup for identity.
And we said, okay, we can also do a rollup for payments; it's something that's interesting to do.
And then we started to do a kind of proof of concept internally, in a side project inside the team, and we realized that this worked quite well.
At some point we just decided, okay, let's move gears and let's focus on this for a while.
And yeah, that's how we built Hermez 1.0.
Yeah, I would say that Hermez 1.0 was an amazing experience for us, also to, you know, make use of this ZK technology.
Jordi was one of the pioneers in this field, creating tools like circom and snarkjs that are very common and very used across many teams.
Hermez 1.0 was a payment network only, which was basically what we felt was possible at that moment.
But this field has evolved a lot, and two years ago this zkEVM seemed impossible or infeasible; other teams already said that, because our approach was very ambitious.
From the first step, building a payment network, to the full, let's say, smart contract execution, it seemed like a big deal.
And we are super happy that today we are very close to launching the mainnet, and everybody can see the testnet.
So super exciting times.
Also hard work behind this, as you can imagine.
But the story is that we got there by a lot of hard work and being ambitious, trusting that along the way we would find solutions, which we still need a little bit more of, but it happens.
So very exciting times.
What is the zkEVM?
The zkEVM is mainly about scaling Ethereum.
The zkEVM is a rollup, a layer 2 rollup, that works exactly the same as Ethereum.
But instead of running on layer 1, instead of running on Ethereum, we are running on this layer 2.
So it's a smart contract that emulates, or that has, this chain by itself.
And we are just taking the consensus from layer 1.
So layer 2 does not include a consensus mechanism; we are leveraging the layer 1 consensus mechanism, but we are building a new layer, a new system, on top.
And the difference should mainly be the throughput, the quantity of transactions that the network can accept, and the price.
If you have more throughput, then the price should go down.
That's the concept of the zkEVM.
EVM is the Ethereum Virtual Machine, and the ZK part means we are emulating, we are running, an Ethereum Virtual Machine with zero knowledge in a rollup.
That's mainly what it is.
I would add to this that in the zkEVM, you are basically changing the trust model.
We have this kind of layer 2 execution, let's say a blockchain that's doing off-chain processing of Ethereum transactions.
But here, instead of having a consensus protocol that provides the security to the users, we rely on the layer 1 security to verify some kind of validity proofs that are created on this layer 2.
And these validity proofs basically certify to users that the execution is correct according to the rules of Ethereum, or the rules that this VM has.
So the prover is the key element here, because it's where all the trust of the users is deposited, because this prover is where the ZK circuits enforce the behavior of the network.
If the network is behaving according to the rules of Ethereum, then you get the proof; otherwise you don't get the proof.
And this validity proof is kind of the new model of trust.
And this enables compression, in some way.
You get a lot of transactions that are proven by a succinct, small proof, and this is where the scalability happens.
Cool. So on a typical layer 1, if I, let's say I as a business, want to be sure that the current ledger is correct, I have to run all of the transactions from basically zero, or the first instance, until now, verify that all of the transactions were correct and the accounting was correct, and then I come to the current state and I know that the current state is correct.
In this case, the essence of what you're developing is that you will have a network where you can almost imagine the network as emitting kind of certificates or proofs, which attest to the fact that the state that has been reached currently came as a result of correct accounting done in history, and that correct accounting is certified by these zero-knowledge proofs.
So I only need to process the certificates, or the proofs, which is much lighter than actually running all of the transactions in the history of that network.
Exactly. You explain it perfectly.
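The contrast described here, replaying every transaction versus checking one succinct certificate, can be sketched with a toy model. Everything below is illustrative only: the "proof" is a hash binding the old and new state roots, a stand-in for a real SNARK with none of its security, and the state transition is a made-up balance transfer, not Ethereum's.

```python
import hashlib

def apply_tx(state: dict, tx: dict) -> dict:
    """Toy state transition: move `amount` from sender to receiver."""
    new = dict(state)
    assert new.get(tx["from"], 0) >= tx["amount"], "insufficient balance"
    new[tx["from"]] = new.get(tx["from"], 0) - tx["amount"]
    new[tx["to"]] = new.get(tx["to"], 0) + tx["amount"]
    return new

def verify_by_replay(genesis: dict, txs: list, claimed_state: dict) -> bool:
    """The layer-1 way: re-execute every transaction since genesis.
    Cost grows linearly with the length of history."""
    state = genesis
    for tx in txs:
        state = apply_tx(state, tx)
    return state == claimed_state

def state_root(state: dict) -> str:
    """Commit to a state with a single hash (a real system uses a Merkle root)."""
    return hashlib.sha256(repr(sorted(state.items())).encode()).hexdigest()

def prove(old_state: dict, txs: list):
    """The rollup prover runs the batch and emits a succinct certificate.
    Here the 'proof' is just a hash over (old_root, new_root) -- it mimics
    the *shape* of the protocol, not its cryptographic soundness."""
    new_state = old_state
    for tx in txs:
        new_state = apply_tx(new_state, tx)
    binding = state_root(old_state) + state_root(new_state)
    return new_state, hashlib.sha256(binding.encode()).hexdigest()

def verify_proof(old_root: str, new_root: str, proof: str) -> bool:
    """Constant-time check, independent of how many transactions were batched."""
    return proof == hashlib.sha256((old_root + new_root).encode()).hexdigest()
```

The point of the sketch is the asymmetry: `verify_by_replay` touches every transaction, while `verify_proof` only touches two roots and one certificate, no matter how large the batch was.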
Is it the case that, you know, when Ethereum launched,
there was the Ethereum network, and then there was like Ethereum virtual machine.
And of course, the Ethereum network used the Ethereum virtual machine,
but the Ethereum virtual machine is kind of like an abstract concept really
that was adopted by other networks.
So is it similar in your case as well, that there is a zkEVM network that you're trying to build, but there's also a zkEVM, like some kind of abstract computational machine that you're building, that can also be used in the future by other networks?
Is that right?
Not exactly, I would say.
So what we are doing here is, conceptually, just copying, emulating, the EVM.
What we are trying to solve is: you are doing the same thing that you do on layer 1, and you are just doing it in the zkEVM, so from the user perspective, there is nothing else.
If we check internally, and we check the architecture that we are using to build that, it's like a kind of double layer.
We have a kind of sub-processor that's, well, executing a specific language, and we are emulating, we are running, the EVM, so the program that emulates the EVM runs on top of this ZK processor at the bottom.
But this is, I would say, more the architecture part; it's not the final purpose, it's not the final idea.
This is a private thing; it's a specific processor to do a specific job, which in this case is emulating the EVM.
Well, we could have other processors here, but this is not a public processor; we don't plan for people to run programs on this processor, for users to write for this processor.
It's a specific solution for having a zkEVM ZK rollup.
Yeah, basically the mission of Polygon is scaling Ethereum, to bring Ethereum mainstream, and this is what we are trying to build.
Mimic essentially what Ethereum is doing, so users don't have any friction.
It's the same thing we did with the PoS chain; we want to replicate this kind of experience for users, but with the security of Ethereum, because we will be deploying a layer 2.
But we are basically following Ethereum. If Ethereum introduces new changes, we will catch up.
As you all know, there are different kinds of ZK rollups. There are also optimistic rollups.
Let's not talk about those for now; let's just concentrate on ZK rollups, and basically, if you zoom out, explain in, like, normal-people terms how they differ.
My take on what the zkEVM is would be that the zkEVM just transposes every opcode that the EVM has into a ZK version of that opcode.
And that means it becomes trivial to kind of transpose any smart contract that you already have into a smart contract that can be compiled by the zkEVM, because you just replace every opcode that you've used with the equivalent from the zkEVM.
Is that high-level understanding correct?
Yeah, but it's not even that; there is not even that transposition.
You can just take a smart contract, you don't even have to recompile, and you just deploy this smart contract on the zkEVM.
You don't need special compilers, you don't need a special version of MetaMask, you don't need a special version of Hardhat, and you don't need to translate any opcode into another opcode.
You are just taking Ethereum transactions, you are just taking Ethereum smart contracts, you're just using it the same way that you use Ethereum.
You just point MetaMask at the zkEVM and you deploy there.
So it's as easy and as direct as that.
And this is the magic, this easiness.
This is the idea of a zkEVM.
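A minimal sketch of this "no special tooling" point: the same compiled bytecode and the same transaction shape are used on both networks; from the deployer's side only the RPC endpoint and chain ID change. The network table below is entirely hypothetical (placeholder URLs and chain IDs, not official values), and the transaction dict is a simplified stand-in for a real contract-creation transaction.

```python
# Hypothetical network table -- endpoint URLs and chain IDs are placeholders,
# not real configuration for any network.
NETWORKS = {
    "ethereum": {"rpc_url": "https://rpc.example-l1.invalid", "chain_id": 1},
    "zkevm":    {"rpc_url": "https://rpc.example-zkevm.invalid", "chain_id": 9999},
}

def build_deploy_tx(bytecode: bytes, network: str, nonce: int, gas: int) -> dict:
    """Build an unsigned contract-creation transaction.
    Nothing about the contract depends on the target network:
    same bytecode, same fields; only the chain_id differs."""
    net = NETWORKS[network]
    return {
        "nonce": nonce,
        "gas": gas,
        "to": None,          # `to = None` means contract creation
        "data": bytecode,    # unmodified EVM bytecode, no recompilation
        "chain_id": net["chain_id"],
    }
```

Building the same deployment for both targets and diffing the results shows that everything except `chain_id` is identical, which is the whole point being made in the conversation.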
I mean, that sounds, I don't want to say straightforward, but it seems like a very natural approach, right?
Seeing that you already have the EVM. You already said, like 10 minutes or so ago, that this approach would be too hard, that it was inconceivable that this could be built within a reasonable time scale.
What were the limits or the challenges?
I mean, I think we surpassed the limits.
Maybe before that, we can go back to the previous question: there are different approaches to a zkEVM.
I don't think we can just say all are the same.
Vitalik explained this very well in a recent post, or maybe it was last year, but there are different types.
We are targeting EVM equivalence, which was basically what Jordi was defining.
So, no friction for users: you are deploying the same thing that you deploy on Ethereum in our model.
There are other approaches from teams that started probably before we did, like zkSync or StarkWare.
They were building a VM which was more ZK-friendly, which basically means the internals of the engine of the ZK prover are more simplified.
Let's say they are not supporting the same set of opcodes that we want to deliver.
So the users need to transform the code by recompiling in some way, or, in the approach of StarkWare, they need to use a new language.
Or they also have some kinds of compilers and these kinds of things.
But when we said it was difficult, I mean that these teams started before, and they felt that was the approach that was feasible at that moment: to build, let's say, a VM that was ZK-friendly.
In some way they felt that it is like this: the computation of proofs is a very intensive calculation, and you need to prepare it correctly so you can be efficient.
So they were following simpler strategies that promised, let's say, a theoretically better outcome in terms of feasibility of proof calculation for this kind of processing of transactions.
And we were following our approach, which was more ambitious, that basically was EVM equivalence, so that anyone could use Ethereum as we have it today.
But it seemed more complex; the internals were more complex, and in fact they probably are, since all the complexity we are avoiding for users, we are putting inside the system.
That's the general concept.
So this is the reason why we are so happy that we are able to deliver this so soon, and it works, because it seemed impossible two years ago.
And today we are in a position that we are super happy to be in.
Can you give us an idea of the challenges that you were up against in building this more complex technology, which abstracts some, or most, of the complexities away from the user?
I can enumerate some of them.
For example, if you want EVM equivalence, you want the signatures to work the same way.
So you need to use ECDSA the same way that Ethereum does. That means you need to build a zero-knowledge circuit that validates normal Ethereum signatures. Or Keccak: you also need a circuit that emulates Keccak.
ECDSA or Keccak is not something that's prepared for doing efficiently in ZK.
So we had to invent and do things just to optimize and make those things faster.
But let me tell you some other, I would say, more trivial things.
For example, the transactions in Ethereum are encoded with RLP.
So we need an RLP decoder inside the prover.
And this is trivial if you do it on a normal processor, but doing it in a ZK circuit is much harder than that.
And, I mean, a lot of things.
For example, the EVM has this memory alignment: you know, it's byte-addressed, but then you read 256-bit words and do 256-bit arithmetic.
You need to deal with that in an efficient way.
I would say that during the year and a half that we have been working on this project, almost two years right now, it has been like one challenge per week, at least, and solving it.
It has been a huge team effort to solve all these engineering challenges that compose the zkEVM.
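To make the RLP point concrete, here is a minimal decoder for RLP's short forms, the serialization Ethereum uses for transactions. On a CPU this is a few lines of branching and byte access; inside a ZK circuit, every one of those branches and byte reads has to be expressed as constraints, which is what makes the "trivial" decoder costly to prove. This sketch is not the zkEVM's implementation; it handles only payloads shorter than 56 bytes and is here purely to show the structure of the task.

```python
def rlp_decode(data: bytes):
    """Decode one RLP item (short forms only: payload < 56 bytes).
    Returns (item, bytes_consumed); items are bytes or nested lists."""
    prefix = data[0]
    if prefix < 0x80:                 # a single byte below 0x80 encodes itself
        return bytes([prefix]), 1
    if prefix < 0xB8:                 # short string: prefix = 0x80 + length
        length = prefix - 0x80
        return data[1:1 + length], 1 + length
    if 0xC0 <= prefix < 0xF8:         # short list: prefix = 0xc0 + payload length
        payload_len = prefix - 0xC0
        items, pos = [], 1
        while pos < 1 + payload_len:  # decode items until the payload is consumed
            item, used = rlp_decode(data[pos:])
            items.append(item)
            pos += used
        return items, pos
    raise NotImplementedError("long-form RLP (>= 56-byte payloads) not handled here")

# The canonical RLP example: ["cat", "dog"] encodes to b"\xc8\x83cat\x83dog".
# rlp_decode(b"\xc8\x83cat\x83dog")  ->  ([b"cat", b"dog"], 9)
```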
Did you guys know that it's always possible to kind of transpose this?
I mean, is there a mathematical proof that you can take any smart contract and transpose it into an equivalent ZK circuit?
I mean, did you know it was just an engineering challenge, or was it also questionable whether it was doable at all?
Possible it is.
So mathematically speaking, it is possible; there is no restriction against it.
The question is more how efficient it is.
Just as a number, let me tell you one anecdote.
When we started the system, we were discussing it a bit, and we were talking about proving computation.
In the first estimations, we were measuring the proving time in hours, and we were measuring the computation power in lots of servers; you know, we were talking about data centers at some point.
At this point, building a proof of 10 million gas takes less than two minutes, and not on lots of servers but on a single server.
That has been the improvement that we made in these two years. This has been mainly a fight about optimization.
And here there are all the possible things that you can optimize, you know: from the mathematical perspective; the engineering perspective of the way you do it; the programming perspective, if you are using vector instructions, assembly. We tried it also with GPUs; taking into account the cache; but taking also into account the different protocols: we went to the Goldilocks field, and this is one of the things that made us improve a lot.
It's a lot of things and a lot of improvements and a lot of small details that, when you put them together, is when you can have these times.
This is not the result of a single thing; it's the result of hundreds of engineering decisions and engineering details that, when you put them together and fine-tune all of them, is when you are getting these numbers.
But when we started, you know, the first estimations were huge.
We were at the limit of feasibility when we started.
Okay, so literally grinding hard work and, you know, kind of optimizing every single thing that you can optimize for.
Exactly.
That's been my job. Actually, the job of the whole team for the last two years.
How big is the team?
How many people do you have on the team?
And what are their backgrounds?
Are they mathematicians or cryptographers, computer scientists, all of them?
We have a team of engineers that work with Jordi on the prover, on the ZK part; that's probably five, six people.
We have some working on the contracts, the protocol, like four people.
We have like seven people working on the client of the network.
But as Polygon, we have contributions from many other teams, like Polygon Zero, Polygon Miden, also broader Polygon teams.
We have a big family working on this, with different profiles, from cryptographers to engineers.
But on the project, I would define it as Jordi was saying before: you do a lot of research in ZK and the feasibility of this implementation. We had clear ideas from a long time ago, probably a year and a half, but it has been a year and a half of improvements, of optimizations, and we are still working on it.
So basically the team has been composed of engineers, and they are doing an amazing, great job. We have some mathematicians, but this is mainly an engineering project.
I'm trying to understand the difference between the approach the zkEVM is taking versus something like what StarkWare is doing with Cairo-Lang.
And what I understand is that ultimately any kind of virtual machine essentially defines a set of operations that it can do.
These might be operations like add, subtract, multiply, right?
Like simple operations all of us understand.
But in the Ethereum machine's case, there are also complex operations, like take a hash of certain data, or verify a signature, or there's something called the jump instruction, which basically allows for loops to exist in the Ethereum Virtual Machine.
So the Ethereum Virtual Machine has a set of operations that were chosen in the history of the development of the Ethereum Virtual Machine, maybe 2014.
And the difference between your approach as Polygon versus something like StarkWare is: you're saying, okay, the Ethereum Virtual Machine defines these, I don't know, 128 or 150 opcodes, and we're going to take these opcodes and develop a system that can emit zero-knowledge proofs for a chain of operations that are using these opcodes.
Whereas StarkWare is saying, actually, let's define a different set of opcodes, which is not exactly the same as that used in the EVM.
And let's define those opcodes in a way that it becomes easy to write zero-knowledge proofs for them.
But it's a different set of opcodes.
And because now it's a different set of opcodes, ultimately they need a different programming language, Cairo-Lang, to run on top of their virtual machine.
Whereas in your case, because the set of opcodes is the same as the Ethereum Virtual Machine's, developers will be able to write in Solidity or Vyper or any of these languages that they are used to.
Is that the core strategic difference?
Exactly.
Yeah, exactly.
That's a different approach.
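The opcode-level framing being discussed here can be made concrete with a toy stack machine. An opcode-equivalent zkEVM has to generate constraints for exactly these step-by-step semantics (push, pop, arithmetic modulo 2^256, program counter updates), which is why the choice of opcode set matters so much for prover cost. The opcode values below match the real EVM, but the machine itself is a drastically simplified illustration, nothing like the actual constraint system.

```python
# Real EVM opcode values for the few operations modeled here.
STOP, ADD, MUL, PUSH1 = 0x00, 0x01, 0x02, 0x60
MOD = 2**256  # EVM arithmetic is modulo 2**256

def run(code: bytes) -> list:
    """Execute a tiny subset of EVM bytecode and return the final stack."""
    stack, pc = [], 0
    while pc < len(code):
        op = code[pc]
        if op == STOP:
            break
        elif op == PUSH1:                # PUSH1: push the next byte as a word
            stack.append(code[pc + 1])
            pc += 2
            continue
        elif op == ADD:
            b, a = stack.pop(), stack.pop()
            stack.append((a + b) % MOD)
        elif op == MUL:
            b, a = stack.pop(), stack.pop()
            stack.append((a * b) % MOD)
        else:
            raise ValueError(f"opcode {op:#x} not modeled in this sketch")
        pc += 1
    return stack

# PUSH1 2, PUSH1 3, MUL, STOP  ->  stack [6]
```

A language-level approach (Cairo-Lang, or transpiling Solidity to a custom VM) would instead pick a different, prover-friendlier instruction set and compile down to it; the opcode-equivalent approach keeps this exact instruction set and pays the proving cost inside the system.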
The approach you explain is the StarkWare approach, or even Polygon Zero, for example, started also... well, no, sorry, Polygon Miden started with this approach: okay, let's build a new virtual machine.
That is one approach. There is another approach that, for example, was used by Polygon Zero, or that is used by Matter Labs, which is: okay, let's try to be compatible, but not compatible at the opcode level, rather compatible at the language level.
So let's take Solidity and just transpile it to our own virtual machine.
This is another approach. It was probably the best approach you could choose two or three years ago, when it was started, because, you know, doing an opcode-compatible, EVM-compatible machine had a lot of challenges in there.
And then the third approach is the one that says: okay, let's not be compatible at the language level, but be compatible at the opcode level.
That means this is the one that's closest to Ethereum, and this is the approach that we took; the Scroll people also took this approach, and there are other teams that took it.
There are different ways to approach the same problem, with different implications for the end users.
Our approach theoretically should be less efficient, but it's easier for the user.
And what's interesting here, maybe the main thing, is that it should be less efficient, but actually it's even much more efficient than the other systems.
And here, I think, is the point: all these improvements that we made were not specific to what we were doing; they were specific to how you build rollups themselves.
So the optimizations that we made, I think, create a big difference between us and the competitors, because we have the best of both worlds: we have the easiness for the users, and at the same time we are very, very optimal.
Even more optimal, although we don't have the numbers, because, you know, the competitors never publish a prover, so it's difficult to compare hard numbers. But, I mean, it looks like we are much more optimal than the competitors.
But this is actually a little bit counterintuitive, because normally, in the history of programming and software engineering, you have languages that are closer to English, easier to write, like JavaScript or Python.
And then you have languages that are much harder to learn and harder to write, like C.
The advantage that the harder-to-write thing usually gives is that it is efficient and uses fewer computational resources, and the disadvantage of JavaScript is that it will use more computational resources.
But, I mean, one possible explanation would be that the competitors just haven't taken the optimization to its end, right? So basically, if we put Jordi on the competitors' systems now and say, Jordi, please optimize this, do you think you could get it to a more efficient point than where the zkEVM currently is?
Absolutely. Yeah, I'm sure.
It's a lot of things.
For example, how we are writing the storage, how we are computing the storage.
This is not related to being EVM compatible or not compatible.
It's just how you structure the storage, how you define the storage in a way that's efficient.
There is a lot of knowledge and testing and improvement in how we are writing the storage.
And this can be used in any of these architectures; this is an improvement that we made.
Or let's talk about the arithmetic optimizations: again, this is transversal, this is not specific to Ethereum.
Or how we are computing the ECDSA: well, if your system wants to have ECDSA, then this is hard, and we solved that.
So if you see all these improvements that we made and put them all together, all of these can be used in other systems that are not necessarily Ethereum compatible.
We created them for Ethereum, because this was our goal, our specification as engineers, but most of this work can be used in other projects, transversally.
And this is the cool thing of open-sourcing the code: everything is in there, all the knowledge is in there.
Of course, it takes time to be absorbed and understood.
But there is a lot of knowledge in there that I'm sure other projects will use, in the Ethereum space and outside the Ethereum space, for sure. It's a lot of innovations.
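The storage point can be illustrated with a minimal Merkle tree and membership proof, the kind of authenticated structure rollup state is typically kept in. How keys are laid out and which hash function is used dominates proving cost, independently of EVM compatibility. This is a generic textbook sketch, not Polygon zkEVM's design (which, per its public materials, uses a sparse Merkle tree over a ZK-friendly hash); SHA-256 here is just a stand-in.

```python
import hashlib

def h(x: bytes) -> bytes:
    # Stand-in hash. A real ZK system would pick a circuit-friendly hash
    # (e.g. Poseidon) because SHA-256 is expensive to prove.
    return hashlib.sha256(x).digest()

def merkle_root(leaves):
    """Hash leaves pairwise up to a single root commitment."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate last node on odd-sized levels
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Sibling hashes from the leaf up to the root, each tagged with its side."""
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sib = index ^ 1
        proof.append((level[sib], "left" if sib < index else "right"))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(leaf, proof, root):
    """Recompute the path; proof size is logarithmic in the number of leaves."""
    node = h(leaf)
    for sib, side in proof:
        node = h(sib + node) if side == "left" else h(node + sib)
    return node == root
```

The design lever mentioned in the conversation lives in two places here: the shape of the tree (how many hashes a storage access costs) and the choice of `h` (how many constraints each hash costs), and neither choice has anything to do with which VM sits on top.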
But that's engineering. The cool thing about engineering is that you can advance.
There are, like, two things.
One is the trade-off: in engineering, sometimes you need to choose within a trade-off. That's what you'd think.
But good engineering is when you don't need to be in this trade-off, when you are advancing on both fronts. And this is the real progress: when you are creating things that do better no matter which variable you are measuring.
So, Jordi, like you mentioned, in this space today all three approaches exist, right?
One is: okay, let's define a new virtual machine; the downside is it will be a new language, like Cairo-Lang; and let's try to build ZK provers that can do general computation that way.
Then you have the approach that: okay, you have the EVM, let's keep it EVM opcode compatible, and then let's build ZK proving systems on top of the EVM; that's the second approach.
And there can be something intermediate, which is: okay, let's build a virtual machine which is similar to the EVM, but not exactly the same, but in that kind of virtual machine you will still be able to write in Solidity, right?
That kind of approach also exists.
So this is happening because, of course, much of the intellectual energy of the ZK field is going after these crypto problems.
But will it be the case that in the future there's a zero-knowledge proving system for the JVM, the Java Virtual Machine, as well?
Do you think in the future there will be a ZK prover system for a language like Go or Rust as well?
Or is there something fundamentally hard about building ZK provers for the JVM and Golang?
Definitely. Well, you need to distinguish between the low-level opcodes and the high-level languages. Okay, so in general there are some intermediate representations, like the EVM bytecode or Yul or other intermediate representations that sit in the middle. But the interesting thing is that what we did is create a processor that's equivalent to the EVM. The same way that we made a processor equivalent to the EVM, we could easily write a processor that's equivalent to WASM, or equivalent to RISC-V, or equivalent to an ARM processor somehow.
And then you can reuse the stack. Actually, for example, there is a very interesting project called RISC Zero. It's a RISC-V emulation. You can have a program that's written for RISC-V: you write it in Rust or in C and you compile it for RISC-V, so you are reusing the whole compiler stack, and then you run it there. These things, you can definitely do. Actually, we did exactly that for the EVM, for a specific processor, this Ethereum processor. But the same way that we did it for one processor, we can do it for any other processor. The question here, again, is how efficient it is, but it's definitely possible.
And I think this is the way the ZK space is going: over time, ZK circuits are going to be written more and more in C or Rust or a high-level language, or even Solidity, rather than as low-level circuits in circom or Noir or things like that. I would say there is a kind of divide: one side is doing electronics, and the other is writing real programs. Until now, ZK was about doing electronics. We were just taking some gates and putting them together, doing some electronics for maybe a specific calculator, something specific. Now we are starting to create processors, and then you will not need to write electronics; you will just write programs for these processors. And this is how it's evolving, okay? And the zkEVM is a clear milestone in that direction.
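As a rough, purely illustrative sketch of the electronics-versus-programs distinction Jordi draws (nothing here is Polygon's actual tooling; the gate list and the mini interpreter are invented for the example), the same computation can either be hard-wired as fixed gates or run as a program on a general-purpose machine:

```python
# Toy illustration: the same computation, (a + b) * c, expressed two ways.
# "Electronics" style: a fixed circuit wired for this one calculation.
# "Processor" style: a tiny interpreter that runs any program built from the
# same two instructions, so a new computation needs no new "hardware".

def run_gates(a, b, c):
    # Hard-wired circuit: one adder feeding one multiplier.
    wire1 = a + b      # ADD gate
    wire2 = wire1 * c  # MUL gate
    return wire2

def run_processor(program, inputs):
    # General-purpose machine: named registers plus an instruction loop.
    regs = dict(inputs)
    for op, dst, x, y in program:
        if op == "add":
            regs[dst] = regs[x] + regs[y]
        elif op == "mul":
            regs[dst] = regs[x] * regs[y]
    return regs["out"]

PROGRAM = [
    ("add", "t", "a", "b"),
    ("mul", "out", "t", "c"),
]

print(run_gates(2, 3, 4))                                # 20
print(run_processor(PROGRAM, {"a": 2, "b": 3, "c": 4}))  # 20
```

To change what `run_gates` computes you must rewrite the function (new "electronics"); to change what the processor computes you only swap `PROGRAM`, which is the shift Jordi is describing.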
I'd also like to add that all this technology to build the processor Jordi is describing is also a contribution: it's in the repositories of the project, and all of this tooling is open source. Also, as Jordi was saying before, we did this big optimization work to turn our system into a feasible system for proving Ethereum. All these ideas are just out there, and probably other teams will reuse them or catch up, which is perfectly fine for us. The point is that we went from the initial situation, where we were estimating data centers to prove this protocol, to a single server now, and the costs we have today are so irrelevant that we feel this is going to become kind of a super mainstream technology along many lines. With the optimizations we have implemented, and the ones in the pipeline and the backlog, I think very soon there will be more processors running on ZK, and we are super happy about this too.
So basically the entire development is a little bit reminiscent of the history of chip design, right? I mean, not literally "think back," because none of us are old enough to think back that far. But if you look at how everything evolved, we had these integrated circuits, and in the beginning an integrated circuit always had one very particular function: the program was hard-coded into the chip itself. Only with the advent of general-purpose CPUs that could do anything did this entire space of building in software open up. To make a new program, you no longer had to build a new chip; you could just reprogram a general-purpose one you already had. Obviously it's not optimized for any particular thing, but it still works well enough, and the advantages you gain in speed and agility trump the fact that it's not optimized for any one particular task. And if you look at how this space has evolved over the last decade or so, high-end tech companies have actually started building single-purpose chips again.
Right? I mean, the entire GPU thing, that was obviously for gaming and shaders and whatever, and it has been taken over by the AI crowd to a large extent. But even if you look at CPUs: Apple now designs its own chips, and this is just so it can optimize for very specific things. Samsung does the same thing, and I think other high-end tech companies do too. Do you actually see this happening in the zero-knowledge space as well? Do you think there will be certain sets of applications that will run on very specialized ZK infrastructure that is not general purpose, but purpose-built for that particular application? The things I'm thinking about: for instance, you said that one of the very first things you built was a payments protocol, because obviously payments are simple. You could do them on state channels or something; you don't need a generalized state, you don't need smart-contract computation. Do you think we will see the rise of ZK machines that are optimized for very, very specific sets of applications?
I mean, definitely yes, but I think you explained it very well. We can say that we're in the '60s right now, in the processor era, and you are talking about GPUs, which is the 2000s at least. So right now we have a lot of specific circuits, but it's not because we want to optimize them; sometimes it's the only way to do it. It's like in the '60s: you want to build a calculator, and you either do it in electronics or it's impossible. You cannot do it with a processor, okay? So right now, for example, with STARKs: I can compare a STARK with a gate array. It's not that you are writing a processor; you have a gate array, so you can write hardware in gate-array terms, and there is a kind of compiler that takes that as input. That's STARKs and PIL and all this technology we created. Okay.
We created a processor, but we are still very early in the technology. So the processors we are building right now, they cannot do everything. For example, the number of clock cycles you can run in one proof is not that much; we are running something like 8 million clocks. 8 million clocks: how much computation can you do in 8 million clocks? If you have an 8 MHz processor, that is just one second of processing in a proof, okay? And processors right now are 3 or 4 GHz. So that's just to see where we are in the processor era.
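Jordi's back-of-the-envelope comparison can be checked directly; the 8-million-cycle proof budget and the clock speeds below are the figures he mentions, rounded:

```python
# Rough arithmetic behind the comparison: a proof covering ~8 million clock
# cycles equals one full second of an 8 MHz processor, but only about two
# milliseconds of a modern 4 GHz core.

CYCLES_PER_PROOF = 8_000_000

def seconds_of_cpu(clock_hz):
    """How many wall-clock seconds of a CPU at `clock_hz` one proof covers."""
    return CYCLES_PER_PROOF / clock_hz

print(seconds_of_cpu(8e6))  # 1.0   (an 8 MHz chip: one second per proof)
print(seconds_of_cpu(4e9))  # 0.002 (a 4 GHz core: two milliseconds per proof)
```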
So you have these limitations in the electronics, because we still have these theoretical limitations in this space. We are much better off than three years ago, but we are just getting started, and that means you need to do things that are specific. For example, if you try to build a zkEVM on a generic RISC-V processor like RISC Zero's, the problem is that you can do it, but the program will be too complex and will not fit well. It's like writing a huge program and trying to run it on a very, very old processor: like building a Linux operating system and trying to run it on a Z80. It just doesn't fit; you need more memory, you need more of everything. So we are still at this stage. Okay, we are starting to have processors; building processors is now possible, and these processors are doing amazing things compared to what we had until this point, but there is still a long way to go. This generic, commoditized processing power: we are not there yet, okay? So at this point, I would say yes, it's still going to be specific processors, but that's because, you know, if you want to handle a lot of payments, it's difficult to do it on a generic processor.
So we will probably see this phase where everything goes generic, and maybe at some point back to specific for some things, but we need to wait. With regular processors it took something like 40 years, okay? In crypto, things go faster, but it's still a long run; 40 regular years are maybe like seven crypto years.
Also, in my opinion, besides the technology, we are in a very intense exploration phase. We will probably need more stable use cases to make these specific-purpose circuits worth it. Because this wave of generic processors you described will happen, and it will probably cover a lot of use cases and give the apps flexibility. So probably once some of the apps are stable, it will make sense again to come back to application-specific circuits.
I want to cover one final topic regarding the zkEVM itself, and that's audits. If you look at the different approaches we have for zero-knowledge rollups, what are the implications for audits? You said that a lot of the complexity is abstracted away from the user. Does this also mean that you abstract a lot of the auditing requirements away from the user? Because most of us don't read bytecode (maybe you do, Jordi), but probably no one in the entire world reads ZK circuit code. How do you audit this, and how does it compare between the zkEVM and the other ZK rollups?
Look, the nice thing about this architecture is that it's based on layers. You explained it very well. We have the hardware layer, which we describe in what we call the PIL, then we have the processor, then we have the ROM, and then the program. These are different layers, and this has been very good for development too, because we can have different parallel teams, some of them doing hardware and others doing software; you can see it that way. Of course it's not real hardware, it's the arithmetization of these things. But the same thing happens with the audits. In the audits, you can assume that the lower layer is okay, and then you are just checking the layer above it. So this also allows us to structure and divide-and-conquer the audit front.
We have been auditing the system for the last three months now, with different teams, different auditors, and they went really deep. I'm actually surprised at how deep these auditor teams went. But that said, it's a new technology, a new stack, and there is a lot of code that needs to be checked, because you need to check the hardware, the software, and all the pieces. If there is just a single piece that goes wrong, everything can go wrong. Also, that said, we are putting measures in place at the smart-contract level and the higher levels, so that if there is something wrong in this prover system, we can fix it, or at least the users don't lose funds. There is, I would say, a deployment or security mechanism put on top of that, without losing too much decentralization. It's a balance here. But it's important, and it's also important for the systems to start and to see how they work in the real world, how they perform, the issues that may happen and so on. But this has been the work of the team for the last three months.
Yes, we are doing the audits on our system. But the advantage of our approach is that the users don't need to audit the applications: they are using the same smart contracts and technology they already run on Ethereum or Polygon PoS or other chains. So our approach is kind of clean for them; there's no transformation of smart contracts or anything. Once our audits are good to go, users are, let's say, not implicated in any of it.
One final question.
It's about trustless verification. In the early days of zero-knowledge proving, we had systems that required trusted setups, the most famous of which was probably Zcash, where there was a trusted-setup ceremony. I think 30 or 40 people came together, generated some randomness together, and you had to trust that at least one of these 30 or 40 people would destroy their part of the randomness for the system to be secure.
The first one was nine persons.
Okay.
And the second one, I think, was like 84 or something like that.
84. Okay. So you had to trust one person in nine to behave correctly for the first one, and one person in 84 for the second one. Is there anything like that in your system?
Let's see: yes and no. Okay. So, we are planning to use fflonk for the last part of the prover. But first: our whole prover is based on STARKs, okay? STARKs don't require a trusted setup at all, and that is like 95% of the proof. Then there is a last stage in the prover where we convert the STARK into a SNARK. Here it's just a normal circuit, a circom program, and we have three options: we can use Groth16, PLONK, or fflonk.
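The shape of that pipeline, a large setup-free STARK proof compressed by one final SNARK stage, can be sketched with stand-ins (hashes instead of real polynomial commitments and pairings; this is an invented toy, not the actual prover):

```python
import hashlib

# Invented toy showing the two-stage shape Jordi describes: a STARK-style
# prover that is fast but emits a proof with many elements, followed by a
# SNARK-style wrapper that compresses everything into one small,
# cheap-to-verify object. Hashes stand in for the real cryptography.

def stark_prove(execution_trace):
    # "Large" proof: one commitment-sized element per step of the trace.
    return [hashlib.sha256(repr(step).encode()).hexdigest()
            for step in execution_trace]

def snark_wrap(stark_proof):
    # "Small" proof: a single constant-size element attesting to the whole
    # STARK proof (in reality, a SNARK over the STARK verifier's circuit).
    return hashlib.sha256("".join(stark_proof).encode()).hexdigest()

trace = list(range(1000))            # pretend: 1000 steps of VM execution
big_proof = stark_prove(trace)       # 1000 elements, no trusted setup needed
small_proof = snark_wrap(big_proof)  # 1 element; this stage needs the setup

print(len(big_proof), len(small_proof))  # 1000 64
```

Only the final `snark_wrap` stage corresponds to the part where the Groth16/PLONK/fflonk choice, and therefore the trusted-setup question, arises.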
We are going to try to launch with fflonk. It's still pending because fflonk is a very new protocol, and we are running an audit on the implementation of fflonk and on fflonk itself, so that we are sure it is correct. If there is nothing strange in the audit and we see that everything is okay, we are going to go with fflonk. If there is something, we have a kind of backup plan, which is to launch with Groth16.
Okay, let me explain the difference. PLONK and fflonk require what's called a universal trusted setup. That means a ceremony that is run once, and then you can use it for any circuit; that's why it's called universal. This ceremony, for example in Ethereum, has had more than 100 contributions from very trusted people: Vitalik, Kobi, Barry Whitehat. You can see the list; it's a lot of trusted persons. For Hermez I think it was something like 70 contributions, and currently I think it's something like 100. You can check it, because it's a public ceremony. I would say it's quite accepted that this ceremony is okay.
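The trust assumption behind such ceremonies can be modelled with a toy multi-party computation (plain modular arithmetic standing in for the elliptic-curve operations of a real powers-of-tau ceremony; the field and numbers are invented for illustration):

```python
import secrets

# Toy model of a multi-party trusted-setup ceremony: each participant folds
# their own secret randomness into an accumulated secret. The final secret
# (the "toxic waste") can be reconstructed only if EVERY participant
# colludes; a single honest participant who deletes their share is enough
# to keep the system secure.

P = 2**61 - 1  # a prime modulus standing in for the real group order

def run_ceremony(n_participants):
    accumulated = 1
    shares = []
    for _ in range(n_participants):
        share = secrets.randbelow(P - 2) + 2  # each participant's secret (>= 2)
        accumulated = (accumulated * share) % P
        shares.append(share)
    return accumulated, shares

final_secret, shares = run_ceremony(9)  # e.g. Zcash's first ceremony: 9 people

# Full collusion recovers the secret...
product = 1
for s in shares:
    product = (product * s) % P
assert product == final_secret

# ...but with even one share destroyed, the rest cannot reconstruct it:
partial = 1
for s in shares[1:]:
    partial = (partial * s) % P
assert partial != final_secret
```

The same structure explains why more contributors only helps: each added participant multiplies in randomness the others never see, so the "one honest person in nine" (or in 84, or in 100) guarantee holds regardless of ceremony size.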
The problem is that Groth16 requires a specific ceremony for each circuit. So every time you build an application, you need to run a ceremony, a trusted setup, et cetera. And this is a little bit annoying; it's a pain running these ceremonies. So our plan here is: we will try to launch with fflonk. If everything works with fflonk, it's fflonk, and we will use this universal ceremony that the community has already run; it's the same one. And if we have some issue here, maybe we step back to Groth16. Maybe we'll run a small ceremony with some trusted auditors and internal people and maybe some members of the community, but a small one, because it will be a temporary ceremony until we fix fflonk or change to PLONK in case it doesn't work.
So this is a little bit of a backup plan, but the idea is to go with fflonk, which uses the universal ceremony. And the other good thing about fflonk is that the verification cost is exactly the same as, or very similar to, Groth16, so we get the best of both worlds. The only problem with fflonk is that the proving time is a little longer, but it's just the last stage, so it's not that much, and it's okay for us to spend this extra time. So this is the situation.
Thank you for joining us
on this week's episode.
We release new episodes every week.
You can find and subscribe to the show
on iTunes, Spotify,
YouTube, SoundCloud
or wherever you listen to podcasts.
And if you have a Google Home
or Alexa device,
you can tell it to listen
to the latest episode of the Epicenter podcast.
Go to epicenter.tv/subscribe for a full list of places where you can watch and listen.
And while you're there, be sure to sign up for the newsletter, so you get new episodes in your inbox as they're released.
If you want to interact with us, guests or other podcast listeners, you can follow us on Twitter.
And please leave us a review on iTunes. It helps people find the show, and we're always happy to read them.
So thanks so much, and we look forward to being back next week.
