Epicenter - Learn about Crypto, Blockchain, Ethereum, Bitcoin and Distributed Technologies - David Schwartz & Jordi Baylina: Polygon zkEVM – From Mainnet to Mass Adoption - Part 2
Episode Date: March 17, 2023

While Part 1 (#486) focused on the technological advancements that allowed proofs to be generated in a practical manner, lowering both the time and the hardware requirements, in this episode we take a closer look at the use cases of different types of zk rollups and how they could promote blockchain mass adoption. Polygon's zkEVM equivalence, coupled with low transaction fees, promises a frictionless user experience. As with all L2 rollups, sequencer decentralisation remains a pressing issue that needs to be solved. We were joined by Jordi Baylina (tech lead) and David Schwartz (CTO) from Polygon zkEVM to discuss the relationship between Polygon's multiple zk rollups, their use cases, as well as network statistics for the zkEVM.

Topics covered in this episode:
- The relationship between Polygon and zkEVM
- The differences between Polygon's multiple zk rollups
- Network participant roles in zkEVM
- Transaction submission stages
- Sequencer decentralisation
- Transaction costs
- Use cases for different zk rollups
- Open-sourceness, decentralisation and the licensing of zkEVM's prover
- The decentralised identity project (DID)
- Roadmap after mainnet launch

Episode links:
- Polygon zkEVM – From Circuits to Mainnet – Part 1
- Jordi Baylina on Twitter
- David Schwartz on Twitter
- Polygon zkEVM on Twitter

Sponsors:
Omni: Access all of Web3 in one easy-to-use wallet! Earn and manage assets at once with Omni's built-in staking, yield vaults, bridges, swaps and NFT support. https://omni.app/

This episode is hosted by Friederike Ernst & Meher Roy. Show notes and listening options: epicenter.tv/487
Transcript
This is Epicenter episode 487 with guests David Schwartz and Jordi Baylina from Polygon zkEVM and Polygon ID.
Welcome to Epicenter, the show which talks about the technologies, projects and people driving decentralization and the blockchain revolution.
I'm Friederike Ernst and I'm here with Meher Roy.
And today we're again speaking with David Schwartz and Jordi Baylina of Polygon for part two of our episode on the zkEVM.
We did an intensely technical episode last week, and today is more about the business and adoption side of Polygon zkEVM.
So before we talk with David and Jordi again, let me tell you about our sponsor this week.
Omni is your new favorite multi-chain mobile wallet.
Omni supports more than 25 protocols so you can manage all of your assets in one place.
But what's really special about Omni is what you can do inside the wallet.
Want to get yield? Omni allows you to get the best APYs with zero fees and three tabs.
Need to swap?
Omni aggregates all major bridges and dexes so you can bridge and swap across all supported networks
in one transaction directly in your wallet.
Love NFTs?
Omni offers the broadest NFT support of any wallet so you can collect and manage your favorite
NFTs across all chains in one place.
Omni is truly the easiest way to use Web3, and it's fully self-custodial, meaning you never have to trust anyone with your assets other than yourself.
And they support Ledger.
Give Omni a try at omni.app.
And without further ado, let's go to the interview.
Let's talk about the network itself.
So there's already a testnet live and the mainnet is going to launch next month.
That's in March.
So first of all, super easy question.
What's actually the relationship between Polygon zkEVM and Polygon? Because Polygon is kind of the standalone sidechain kind of thing, and Polygon zkEVM is going to be like a proper layer 2 on top of Ethereum, right?
So what's the relationship, and what are the plans for the chain architecture going forward?
Well, Polygon PoS is the product Polygon is running now.
This is the chain Polygon is providing service on today.
The relationship is clear because we are shipping a new network.
We are building this zk rollup model, which is a complete layer 2.
And we are launching the beta version on mainnet next month, so it's the first release.
And we have plans to just let these networks live together in parallel, because they are different services.
One is a sidechain with a cost model and a running service, and we will have a different technology with a different cost model and different characteristics.
So the plan today is to start this new network, and we'll be providing, let's say, this whole vision very soon from the Polygon perspective: how this connects together, how the PoS chain connects with the zkEVM, with the supernets, with Miden, with Zero, all these projects working together.
We have a strategy to connect all of them and to provide, let's say, a good portfolio of solutions and options for users, depending on the requirements of the application.
But at the same time being connected and being, let's say, useful in terms of composability and deployment of apps that can work together in different networks.
So when I go to the main Polygon website and I go to solutions, there are three things listed under ZK: one is Polygon Zero, one is Polygon Miden, and the third is Polygon zkEVM, which is your project, the project we are into right now.
What's the difference between Miden, zkEVM, and Polygon Zero?
Yes.
Well, Polygon was investing in ZK teams in 2021, and this was the announcement of the merger with Polygon of Hermez Network, which is our project.
There was also the acquisition of the Mir protocol, which is Polygon Zero now, and also the setup of the project Polygon Miden.
The approach was kind of a strategy of diversification across different approaches, because as we discussed before, one and a half or two years ago it was not clear which was the good approach to follow.
And with this model, Polygon was basically hedging on the three approaches, which we discussed today.
So it turns out that we were the first team to get to the position of shipping a mainnet, which is amazing, but the other teams are also working in parallel.
And during this time we had this mutual contribution, because as you explained before, we were targeting big computation pools to make this feasible, and in terms of cost we had a lot of doubts.
But with these internal contributions, especially from Polygon Zero, we were able to accelerate the prover like 40 times, so we have today the high performance and the low cost we have.
So we are a combination of different approaches together with internal collaboration, and we are getting to the point where we will define well what's, let's say, the model for deployment of Polygon networks.
As I said, we have this type 2 zkEVM solution, which is ours. We will also be targeting type 1, and we also have a new VM approach.
So all of these projects make sense for us, and still, as we discussed today, we feel like no single solution will fit everyone, because there will be different needs for different applications.
Our intention is to connect all these solutions as we are able to finish these projects and get this kind of internal collaboration.
And we are learning a lot. We are getting a lot of experience in this zk rollup development.
And this is kind of the commitment of Polygon: to scale Ethereum, to build the best solution for the Ethereum space.
And we are following this strategy of, you know, a little bit of diversification, to be leading this in many ways.
So what I can say is that this strategy will be explained very soon.
And the idea behind this is that we can provide a connected ecosystem in Polygon, so we can have different tiers of service for any application.
But ZK will be kind of the basis for all this strategy.
Since we're getting a lot of experience and, let's say, learnings, and we have a very strong team on this aspect, ZK is going to be the driver for all this evolution of the portfolio of solutions of Polygon.
Cool, cool, that's really interesting.
So there will be the Polygon PoS chain, which is the currently operational chain.
And then there will be the supernets, so there might be a lot of different supernets.
Then there'll be the Polygon zkEVM network, which is the network you are launching specifically.
And then there might be other ZK-based networks, maybe two of them, that might come into production later on.
And of course, in the future the Polygon team may launch other networks as well.
So, yeah, it's a very large vision, right?
Well, at the end it's all scaling Ethereum.
We'll provide more clarity soon in terms of how this connects together.
But as you see, we have several initiatives that we want to connect, so the Polygon ecosystem becomes, let's say, simple and clear.
Today, with this new network we are preparing for launch, it's clear that this question makes total sense.
But very soon it will be explained and clear, because today we are focusing basically on this shipment, and we have all our forces together on this initiative to be successful.
Okay. So let's then zoom into this network that you'll be launching. What are the different roles in the network? For example, if I think of Bitcoin, I imagine: the miner is a role, the mining pool is a role, and the full node is a third role. To understand your network, what are the important roles that we need to grasp?
We have two roles. One is the sequencer. Call it a centralized sequencer, but just to be clear, the network is going to be decentralized, or with a sort of decentralization.
The only thing the sequencer can do is select which transactions are inserted in the network. But the network is going to be censorship resistant.
That means that if a user wants to send a transaction and this centralized sequencer doesn't want to include it, the user can always do an L1 transaction including these L2 transactions, and then the sequencer will be forced to include this transaction.
The sequencer has two options: either include these transactions or do nothing.
But if it does nothing, then there is a timeout mode, and then anybody can be a sequencer.
So if you are a little bit of a purist, this is not full decentralization, because the sequencer has the right to kick people out of the network; it's not a universal service.
But the sequencer cannot steal the funds or even lock the funds of a user.
So that's the model that we have.
It's not perfect, but it goes a long way, so a lot of the properties of decentralization you have with this network.
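The escape-hatch mechanism described here can be sketched in a few lines. Everything below is illustrative: the class, the timeout constant, and the queue mechanics are invented for the sketch and are not Polygon's actual contract interface.

```python
# Hedged sketch of the forced-inclusion flow: a censored user posts the L2
# transaction on L1, and the sequencer must include it or, after a timeout,
# lose its exclusive right to sequence. All names and numbers are illustrative.

TIMEOUT_BLOCKS = 100  # hypothetical timeout, not a real protocol constant


class L1Contract:
    """Minimal model of the L1 escape hatch for censored L2 transactions."""

    def __init__(self):
        self.forced_queue = []   # (tx, block_posted) pairs pushed directly on L1
        self.current_block = 0

    def force_include(self, tx):
        # A censored user posts the L2 tx on L1; it must eventually be sequenced.
        self.forced_queue.append((tx, self.current_block))

    def sequencer_includes_forced(self):
        # Option 1: the trusted sequencer drains the queue into a batch.
        included, self.forced_queue = self.forced_queue, []
        return [tx for tx, _ in included]

    def anyone_can_sequence(self):
        # Option 2: the sequencer did nothing past the timeout, so the
        # exclusive-sequencer privilege lapses and anybody may sequence.
        return any(self.current_block - posted > TIMEOUT_BLOCKS
                   for _, posted in self.forced_queue)
```

The point of the sketch is the "include or lose exclusivity" dichotomy: the sequencer can censor at the L2 level, but a user willing to pay L1 gas always has a path to inclusion.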
And the other role is the prover, or the provers.
Here the idea is, well, at the beginning we are going to be the main prover, but at some point it's going to be a kind of market of provers. But these provers cannot do much.
They are just taking a state that's already defined, so the transactions are already on chain, and what they are doing is just generating the proof.
What a proof is, is converting an implicit state to an explicit state and pushing that state on chain so that people can withdraw.
It's like consolidating the state on chain.
But the state is already defined. The transactions are already there. Anybody can compute this state. So the state is already known.
So the prover cannot modify the state, cannot change the network.
They can delay this publication of the state, but that's the maximum they can do. And I mean, we are always going to run a prover, even just as a backup.
Okay, so that's the thing. And it's going to be a free market for that.
So there is going to be some money in the network, and it's first come, first served. So if you want some money and you have a prover, you just make the proof first.
There is a full mechanism, you can maybe read more on that, but it's going to be a market for the prover.
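The "implicit state to explicit state" idea can be shown with a toy state machine. The hash-chain transition and the proof placeholder below are stand-ins invented for the sketch; the real system runs the zkEVM and emits a validity proof, not a string.

```python
# Sketch of the prover's job: the L2 state is already implied by the
# sequenced transactions on L1, so anybody can recompute it. The prover
# only makes that state explicit on chain; it cannot pick a different one.
import hashlib


def apply_tx(state: str, tx: str) -> str:
    # Toy state transition: fold each transaction into a running hash.
    return hashlib.sha256((state + tx).encode()).hexdigest()


def compute_state(genesis: str, sequenced_txs: list) -> str:
    # Any observer can replay the sequenced transactions and derive the state.
    state = genesis
    for tx in sequenced_txs:
        state = apply_tx(state, tx)
    return state


def prove_and_consolidate(genesis: str, sequenced_txs: list):
    # The prover can only delay publication; the state itself is forced.
    state = compute_state(genesis, sequenced_txs)
    proof = f"proof-of-{state[:8]}"  # placeholder for the real ZK validity proof
    return state, proof
```

Because `compute_state` is deterministic over public data, a prover that tried to consolidate any other state simply could not produce a valid proof for it.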
In the interim, at the beginning, we are going to be the ones proving. So the idea is that in the beginning, in the smart contract, we are going to be the only prover, as long as we are creating proofs.
If we stop creating proofs, then anybody will be able to generate proofs.
This gives us a lot of guarantees.
For example, if there is an issue in the soundness of the proof: okay, someone could generate a malicious proof, but we are going to generate the right proof. So we are not going to prove something that's wrong.
So if the users trust us, that means that even if the prover is bad, if the system is bad, we are not going to generate a proof that's malicious.
So this gives us some confidence. If you have this trust, you can get this confidence using the network.
It's not the final goal, because the final goal is to be fully decentralized, so that you don't have to trust us not to run a malicious prover.
That's why it's important that the prover is okay. That's why it's important to audit the proof and that the prover is okay.
But at least for the launch, we have this guarantee for the users that are going to use it: if there is something wrong in the proof, and we are not doing crazy things, then they will not lose their funds.
So if I imagine it like this: let's say I am a user on a mobile and I have some assets on this network.
When I create a transaction, I can use whatever mobile wallet I am already used to for Ethereum.
I send the transaction.
The difference to Ethereum will be that in this case my transaction is going to go to a sequencer, and there's only one at launch. So it goes there.
That sequencer is receiving transactions from users, and it's creating the block: it's batching them, creating the block and publishing, hey, here's the block.
Then there is a prover, of which there's only one at launch, but later there will be many.
And these provers are taking the block, they are then running the zkEVM prover, and they're creating sort of a certificate saying: hey, these transactions are genuine, and this block takes the state from S to S prime, and here's a certificate for that.
And then on the other side, when I'm on my mobile, I can essentially ask the Polygon network: hey, tell me your state and give me a proof that that state is correct, and the network can do that.
There are three stages. You described two, but there are actually three stages.
If you are the user, you create a transaction. You send this transaction to the sequencer, and the sequencer will create the block immediately.
So here you will have a finality of maybe one or two seconds. As far as you are trusting the sequencer, this is going to be final: the sequencer is promising you that it's going to publish that transaction.
Of course, it's a centralized model; you need to trust the sequencer. If you don't trust the sequencer, you cannot do anything with that information. But if you are trusting the sequencer, at least you know that this is the first stage. Okay?
The second stage is that the sequencer will put this transaction on chain, but will not generate the proof; it will put this data availability.
At this point, the transaction is final. You don't need to trust the sequencer. You know that this transaction is going to be executed.
So actually, because you know that the transaction is going to be executed, you can compute the state. Any user can manually check that the state is the current one.
The only thing is that the chain doesn't know which is the state. So in this stage, the only thing that you will not be able to do is withdraw funds.
And in the third stage, maybe once every half an hour or whenever, there is going to be a proof that will prove all the blocks that are in the middle, and will say: okay, the current state at this point is this one. This is the consolidated state.
So the state is going to be that in the network, and it is at that point that you will be able to withdraw the funds.
So actually it's three stages. One is finality of one or two seconds; that's the sequencer you trust. Then another, and I don't know the numbers yet, but it's going to be every few minutes, maybe two, three, four minutes, that's going to be a transaction with the data availability, with all the transactions that need to be sequenced. And a third and final one, that's every half an hour, that consolidates the state and will allow you to withdraw. These are the three stages.
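The three stages just described can be modeled as a tiny status object. The field names and the stage labels are invented for this sketch; the timings in the comments are only the approximate figures mentioned in the conversation, not protocol constants.

```python
# Sketch of the three finality stages of an L2 transaction as described:
# trusted (sequencer promise), virtual (data on L1), consolidated (proved).
from dataclasses import dataclass


@dataclass
class TxStatus:
    trusted: bool = False        # stage 1: sequencer included it (~1-2 s)
    virtual: bool = False        # stage 2: data posted on L1 (~every few min)
    consolidated: bool = False   # stage 3: validity proof on L1 (~every 30 min)

    def can_withdraw(self) -> bool:
        # Withdrawals to L1 only open once the state is consolidated on chain.
        return self.consolidated


def advance(status: TxStatus, stage: str) -> TxStatus:
    if stage == "sequenced":
        status.trusted = True          # final if you trust the sequencer
    elif stage == "data_on_l1":
        status.virtual = True          # final even without trusting the sequencer
    elif stage == "proved":
        status.consolidated = True     # state explicit on L1, funds withdrawable
    return status
```

The design point the sketch captures is that trust requirements weaken stage by stage: stage 1 needs the sequencer's honesty, stage 2 needs only L1, and stage 3 makes the state usable for L1 withdrawals.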
Let me dig in here a little bit. So I'm not so worried about the prover, because in principle anyone can run a prover, right? So basically, even if there's only one at network launch, I could build a prover and I would be economically incentivized to do so.
To me it's akin to having people who liquidate underwater loans or something. So this is, to me, a very similar proposition.
I'm a little bit more skeptical about the centralization of the sequencer, because you said that I can always force being included by sending this to L1. This is correct.
But I'm still economically locked in if it's not economically viable to actually send this to L1.
It's like saying: okay, you are on an island and there is one airline that actually services the island, say it's Lufthansa.
And Lufthansa refuses to give me a ticket, right? And I say, well, I'm stuck here now.
And Lufthansa will say, well, you could always rent a private jet.
And obviously this is true. I could charter a private jet, but it would be very expensive, much more expensive than just being included by Lufthansa.
And for many transactions it wouldn't actually be viable to escalate to layer one.
So technically you're not locked in, but economically you are, right?
Well, yeah, you will have to pay for this ticket, but probably you are not going to go to that island anymore.
Okay, but at least you are not going to be stuck there.
So it's like Lufthansa guarantees you a way out. I would not say it's a private jet.
You have a way out, which in this case is the layer one; you have like a judge that can force Lufthansa to sell you a ticket for a limited price, which is the layer one cost.
But I need to pay the judge, right?
You need to pay, yeah, exactly. It's going to cost you.
But this is why it's not perfectly decentralized. This is not perfect. But at least you will not get stuck on the island. You will leave the island. And you probably will not come back. Agreed.
Oh, definitely. You'll not come back. Definitely. Not a good island experience.
But the important thing is that you are not going to get stuck on the island. That's the important part here.
Yeah, the idea here is to preserve the properties of a censorship-resistant system, because we are launching this network with a trusted sequencer, a single one.
So what we want to provide is this property, so everyone can just relax: I have the option to do it.
Of course, the sequencer will process all transactions, but at least we have provided this mechanism so that you don't need to trust anyone here, because you will be able to do it in some way.
But the next step will be to decentralize sequencing also, so we have a more decentralized sequencer.
Yes. Exactly.
So I assume, Jordi, this is going to be your next job after all the optimization.
I don't know if it's going to be my job, but for someone in Polygon, for sure.
But yeah, it's an important piece here.
An interesting way to see the sequencer is as a consensus system.
Of course, it's a centralized system; I would say it's a dictatorship consensus system. The consensus is whatever the centralized side decides. It's a very trivial consensus.
But you can substitute the sequencer with any consensus mechanism you want to put here.
So you can put proof of authority, you can do proof of stake, you can put whatever consensus mechanism here.
And then you have the limitations of that consensus: you will not have the same finality and so on, and you will get into that dilemma, that balance. But it's perfectly doable, and here in Polygon we have a lot of experience in creating consensus networks.
Let's talk about the gas.
So obviously all of this is not for free. You need to pay the sequencer and the prover, and you also need to pay the layer one, right? Because you have these periodic check-ins where data is stored on L1.
So how does the gas model work on zkEVM?
So, let's see, we've got two things. From the end user perspective, the model is going to be exactly the same as layer one: there is going to be gas, you pay a gas price, and it works exactly the same.
And the way that smart contracts work is very equivalent. So all the code, in gas quantity, is going to cost you exactly the same.
Okay. Oh, this is super interesting.
I would have imagined that there are opcodes that are more expensive on zkEVM just because they're difficult to implement.
And mispriced opcodes are really a danger for the network, right? If you actually offer opcodes that are more expensive for you to process than people are paying for them.
But we can regulate that with the gas price.
So the idea is that, depending on the transaction that you are doing... in general, the transactions are going to cost you the same, because at the end it's a kind of average, and the sequencer will work with that.
But if you are doing strange transactions, for example a transaction that's doing a loop of keccaks, or a transaction that's putting a lot of data availability in there, what you can see is that the sequencer may ask you for a bigger gas price.
Okay, so basically it's not a fixed price. It's going to be a different price if you are doing crazy things. If you are just doing normal transactions, it should be more or less the same.
And the sequencer decides what's crazy?
Yeah, well, it's whether it fits in the block or doesn't fit in the block, how much it occupies.
For example, the cost of data availability is going to be the same; call data is the same. If you want to put data in the call data, that means the cost is going to be the same for L1 and L2; here you will not have savings.
So if you are putting transactions with relatively normal call data, which in general is like less than 1% of the cost of the transaction, it's not going to be what drives the cost.
But if you are creating a specific transaction that puts a lot of data, then maybe the sequencer has the right to not include that transaction or to charge you more.
But we are expecting that normal users will not see any difference on that.
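The pricing idea above, same gas quantities as L1 but a higher gas price quoted for pathological transactions, can be sketched as follows. The thresholds, the base price, and the function itself are invented for illustration; the source only says the sequencer "may ask you for a bigger gas price".

```python
# Hedged sketch: gas *quantities* match L1's schedule, but the sequencer may
# quote a higher gas *price* for transactions that are unusually expensive to
# prove (e.g. keccak loops) or that carry heavy calldata. Numbers are made up.

BASE_GAS_PRICE = 10  # illustrative base price in gwei


def quoted_gas_price(keccak_count: int, calldata_bytes: int) -> int:
    price = BASE_GAS_PRICE
    if keccak_count > 10_000:     # a loop of keccaks is costly to prove
        price *= 2
    if calldata_bytes > 50_000:   # heavy data-availability burden on L1
        price *= 2
    return price
```

A normal transfer or contract call stays at the base price; only transactions that stress the prover or L1 data availability see a surcharge, which matches the "normal users will not see any difference" claim.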
Oh yeah, absolutely. I totally agree that within normal use this is not an issue at all.
It's just that, in many respects, governance minimization at these layers is a great goal.
And saying, well, if anything's too crazy, the sequencer can throw it out, or not include it, or charge more, kind of opens a floodgate in a way.
It's a centralized sequencer, yeah.
Yeah, maybe it's not that big a deal. It's just, if there's a clear way of attacking the network and it could only react via some governance mechanism... but okay, sequencer decentralization is on your roadmap anyway.
There are some denial-of-service attacks there on the sequencer; this is one of the important topics to deal with. So you're right here that you need to protect against that.
But, you know, a normal web page, like any other centralized service, is also exposed to denial-of-service attacks, and it needs to be protected.
So it's a centralized system, and it will have protections like any other centralized system.
It's a centralized sequencer. This is what it is. When we get to the decentralized sequencer, then the rules change.
But sorry, I kind of interrupted your explanation about the gas.
So basically you pay a proportional amount of gas for each opcode you use, and that's what the user sees. But what the network does is it has to pay different actors: the sequencer, the prover, as well as the L1 as a service provider. So how is that handled?
Well, the sequencer has an economic engine somehow, but the idea is: how much will I get?
It's the same as a miner: how much will I get if I include this transaction? If it's profitable, they will include it.
And if not, they will not include it; they will just take the most profitable transactions, as many profitable transactions as they can, in the sequence.
It's as easy as that, and if you want, as complex as that, because to know how profitable a transaction is, you need to check that transaction, and this is where all the complexity of the sequencer comes in.
But the idea is very simple: you have a pool of transactions, and you select the ones that are more profitable.
And you need to pay all the costs: the gas, the prover, the data availability. You need to take all of that into account.
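The selection problem just described, pick profitable transactions net of all costs, can be sketched as a simple greedy filter-and-sort. The dictionary fields, the cost breakdown, and the greedy strategy are assumptions for illustration; real sequencers also weigh gas limits, batch size and ordering constraints.

```python
# Sketch of the sequencer's economics: take the fee a transaction pays,
# subtract what it costs to post on L1, to prove, and for data availability,
# and include the most profitable transactions first. All fields are illustrative.

def profit(tx: dict) -> int:
    costs = tx["l1_gas_cost"] + tx["prover_cost"] + tx["da_cost"]
    return tx["fee_paid"] - costs


def select_batch(pool: list, capacity: int) -> list:
    # Greedy: keep only profitable transactions, most profitable first.
    candidates = [tx for tx in pool if profit(tx) > 0]
    candidates.sort(key=profit, reverse=True)
    return candidates[:capacity]
```

A transaction whose fee does not cover its share of L1, prover and DA costs is simply never picked up, which is the "if it's profitable, they will include it" rule in code.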
So the gas costs for the user on Polygon zkEVM rise in lockstep with gas costs on L1, right?
Because if committing these things to L1 rises in price, there's no way for you to not pass this on to the user.
Do you see that as a limiting factor?
Well, somehow there is going to be a direct correlation between the price of L2 and the price of L1.
Definitely it's going to be there, especially for the data availability cost.
A lot of the costs, most of the costs actually, are related to the layer one.
So yeah, the cost of transport depends on the cost of oil; for transport, you need oil.
That's the trade-off. The benefit of having data availability on layer one also has this problem: the security of putting all these transactions on layer one is higher, and the cost of layer one needs to be included in the cost for a user on layer two.
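The correlation can be made concrete with a toy fee model. The function, its parameters, and the 20% overhead figure are all invented for illustration; the only claim taken from the conversation is that the L1 data availability cost dominates the L2 user fee.

```python
# Toy model of the L1/L2 fee correlation described above: most of the L2 fee
# is the L1 data-availability cost, so the user fee rises roughly in lockstep
# with the L1 gas price. The overhead fraction is an invented placeholder.

def l2_user_fee(l1_gas_price: float, calldata_gas: float,
                overhead_fraction: float = 0.2) -> float:
    da_cost = l1_gas_price * calldata_gas          # dominant L1 component
    return da_cost * (1 + overhead_fraction)       # plus prover/sequencer margin
```

With this model, doubling the L1 gas price doubles the L2 user fee, which is the "direct correlation" David and Jordi describe.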
So you talked about this network of different networks earlier, within the Polygon family.
Will there be like a flow chart on the Polygon website where basically I say: I want to deploy a dapp that has these and these requirements, and these are the security guarantees I would like, and you guys will tell me: okay, then do not deploy to Polygon zkEVM, deploy to, I don't know, Polygon PoS?
Yes, excellent. This is the objective we have for this year.
So what kind of applications do you see living on Polygon zkEVM in the mid to long run?
Well, we have this path of optimizations that you were saying; we have a lot in the backlog. We closed, in some way, a release to launch this first version. But the zkEVM has the constraint of the proof on layer one, which is the normal zk rollup model, and also the constraint of data availability on layer one. For us, the proving cost is becoming a small part of the cost for users; every time we optimize, it's lower and lower.
So basically it's data availability: most of the cost will be that.
Also, there will be some fees for the prover. These are normal things; the network needs to provide some kind of benefit for the participants in some way.
But at the same time, we also have a plan to optimize the data availability cost.
And the idea here is to specify what the benefits are in terms of finality, throughput and cost for users.
We feel like the applications that require better security in terms of data availability, like for example DeFi, are very, very interested in the zkEVM.
Because we are bringing this higher level of security: it is a layer two, you have the smart contracts, you have data on chain, and you have this proving that provides super fast finality, because we are talking about finality under one hour, more or less.
So for these kinds of applications it's very important to have these fast movements in liquidity.
Applications with lower-value transactions, like gaming, I don't know, will probably be a better fit on PoS or other networks that have data off-chain.
But we see that there's a big market of high-value transactions that are very interested in this approach.
NFTs also, there is a lot of interest in that, for all these NFT markets, and they move real value: an NFT has some specific value in there, and you want a guarantee. Because you can have the NFT on layer one, you don't require the bridge, and then you are moving it and working with it on layer two. That's another important application that looks like it's coming.
But there are many, and it's open; it's a chain, it's a wide world.
I'm sure there are going to be a lot of applications that I'm not even aware of.
One of the interesting topics in the whole ZK space is the question of licensing.
So the novel part of the system is the prover, right? The thing that will essentially take the blocks and generate the proofs for the correct state transition.
What is your licensing approach, as Polygon zkEVM, for your prover?
Yes. Well, this is something we are discussing, but our approach in the long run is going to be open source for sure, because everything we build is open source.
We have been building with source code available since July last year.
As we said, we are sharing a lot of ideas and code with the whole community.
For us there's also a path towards decentralization that we want to accomplish. Now we are involved in the audits, it's about security also, but then we have a plan for bug bounties, we are launching this beta mainnet, we have a path of decentralization, and we also have to include a valuable token into this zkEVM. So the project is not over in terms of the full product.
And we are just thinking about the license we want to put, because before the mainnet we will put a license that specifies the use you can make of this code. So it's something we will do very soon, but the spirit for us is that this becomes available eventually, because we want to cover our roadmap and we also want to make this a publicly available good.
We are just using some other technology from other teams.
We are happy other teams can use our technology too.
In fact, the only repository that doesn't have a permissive license is the prover.
For the rest, as I said before, the whole tooling, the client, all of this is open source.
And anyone can just use this code.
So this is something that's important, eh?
Because first, that the code is available is probably the most important thing.
Because if you want to trust the system,
you need to understand what's going on.
And you need to see, you need to verify.
So anybody should be able to verify that what you're running is what it is.
So the first thing is that the code needs to be available.
We open all the repos in July.
And this gives us a lot of confidence.
During these six or eight months, many developers have checked it and have opened issues, reporting bugs or even making proposals. So this has been available since then, and this is very important from the security perspective. This is a must, you know: the system must be open. So you need to see what's running, and it needs to be verifiable; you need to be able to check that what's running on the network is actually what's in the code.
So this is the first step and this is already there.
Okay.
The second thing is all about the knowledge. So we are sharing all the knowledge, but not only that: all the tooling too. Right now, all the languages to write this, all the tooling that we use to write this, the verifier, PIL-STARK, the STARK generation. So all the tooling that's usable by third parties, you know, for other projects, this is already under MIT and Apache licenses. So anybody can use that. So the only piece left is the prover.
And here what we're trying to protect against is just some opportunistic person creating their own token by just copy-pasting, the same way that happened with Uniswap or with other networks in this space. We don't like this model. We don't want this to happen, and this is what we are trying to avoid. Besides that, if you are doing a serious project and want to use this, I mean, this should be open, and the spirit is that. So we are against just the opportunistic people who are just taking profit from work done by others. That's the only thing that we want to protect against. But besides that, everything should be open and visible and learnable. This is the spirit behind the licensing that we are trying to achieve. But again, we are still deciding that, and it's going to be available very soon.
Yeah. So from my perspective, it makes sense that
one thing you're trying to do is prevent somebody else from launching a rollup with your technology before you do and getting the press attention, and that makes complete sense, right? I would even go one step further. I mean, it makes conceptual sense to me to say, you know, for the first six months after your mainnet is live, the prover is not open source. You're trying to get some kind of competitive lead on the market before it becomes open source and other people can use it. That is also actually understandable to me. You've put in a lot of effort. You want a competitive edge.
Right.
Like, I mean, the central element of my question is the time horizon beyond the six months, where I see projects in the ecosystem which are trying to say: okay, the prover is open source, but the license is such that if it emits a proof, it can only be verified by a whitelist of approved verifiers. There are projects attempting a licensing strategy like that, one which exists beyond the six-month scale, beyond getting the early competitive advantage. Do you have plans in that direction?
No.
No, we will not do that.
I mean, once that license period is over, it's going to be fully open. So, no plans for limiting, you know, things like that.
Our model is not license-based; the business model here is not based on licenses. So we are not selling licenses or, I mean, charging royalties or things like that. This is not our model, and this is not what we want to do. So the thing is, we want to open source, we want to contribute to the Ethereum space, we want to contribute to the blockchain space. We want this to be open. It's already open, you know; all the tooling is already open.
And we are just trying to protect, you know... this has been a huge investment in resources, in money, in effort, and a lot of people have put trust in Polygon. And we are just trying to protect this investment for a while, so that nobody takes, I would say, unfair profit from it. And, you know, if this had never happened in the space, I would argue: okay, why are you doing that? But the problem is, I mentioned some, but there are a lot of examples in the space where this has happened. And this, I think, is a bad practice, something that we need to avoid and protect against. So that's this kind of licensing. I mean, other projects have been forced to do the same. For me it's hard, because having this even goes against my principles. I would love to do everything open source from the first day. But I understand that these things exist, and we need to protect against them; we need to do something. And I think this is the best thing that we can do here. It's just a decision, and in any case it's going to be a temporary thing.
In the very beginning, you guys told us that all of this kind of started as a side project
while you were trying to create a decentralized identity. Tell us about this DID project and
where it's at right now because it never really stopped, right? I mean, it's still going on.
Yeah, it never stopped, and it probably never will, because it's something we wanted to do from the beginning. Jordi can explain it better. I would say now it's inside Polygon. We have Polygon ID as part of the portfolio we are developing. We are creating this kind of infrastructure on identity: self-sovereign identity, blockchain-based. We will be focusing a lot on the Web3 space, because all the time we understand better that the apps need this identity layer. And basically what we want to do in this project is, Polygon-style, provide a public infrastructure that can enable the development of an ecosystem of participants on this identity infrastructure: issuers of trust, consumers of trust, users, the apps, all of this around Polygon. And probably that would mean around the EVM, to be honest, because it goes beyond Polygon. We are in the Ethereum space, and the idea is that it becomes a public good that every project in the EVM space can reuse. But yes, this is how it is today. Probably Jordi can explain better how this started or why it makes sense for us.
Well, self-sovereign identity is an important part of decentralization. I mean, one way to see it is like the login. So any application needs a kind of login. I would say most applications need a login. Maybe a payment application doesn't need a login, but that's probably the exception. You know, if you want to build any application, you need some sort of login, reputation, identity. You need to prove something to do things. And this is what the identity project was about. It's an important part, an important layer that needs to be solved. And with iden3, I mean, we started that. We realized that privacy in this layer was very important, and so we started with zero knowledge. Then we moved to rollups. But the spirit of the identity work has always been there. And right now, at Polygon there is an important team that's continuing to work in that direction. They are making a lot of progress right now. I'm a little bit to the side on that because I'm focusing on the zkEVM, but I know that there is an amazing team that's pushing hard on it, having great ideas and, yeah, moving this project forward.
So obviously building a decentralized identity is super hard, unless you want it to be completely public. So, I mean, it's also ZK-based. There are a lot of questions as to how you go about the Sybil problem, who gets to attest to what, and so on. How do you think about these questions? Because it's such a large problem space that it's difficult to even know where to start.
Let me explain to you how I see identity, because we are talking about it a bit abstractly. How I see identity is as a very basic infrastructure layer. So you can understand identity as identities, you know, users. If you want, a public-private key pair, if you want to make it super simple, okay? So your identity is represented by your private key and your public key. That's the easiest way to see it. Then you have a database of claims. A claim, or attestation, is something that you say, in general about some other identity, but it can be anything. Okay. This database is a decentralized database. You hold part of this database, and some parts of this database may be on-chain, but most of them should be off-chain. And maybe, if you are making a claim about David, you will hold one part of this database and David will hold another part of it. Okay? And the idea is that each identity owns their own data; they own their own information. That's the idea of self-sovereignty here. And with this database, there is then a proving system that runs on top of it. If you want, it's a query system: you can ask for a specific piece of information, and the other party can prove that this specific information is valid. So we have this proving system running on top of these databases. This is the basic infrastructure. And with this, you can build anything. You can build a reputation system, you can do a decentralized login, or you can build a proof of personhood, you know, maybe if ten people with reputation vouch for you. You can do everything you want. But of course, building the specific reputation system or application, or deciding how you are going to use this basic identity, is going to be application-specific or use-case-specific. For me, the basic layer that needs to be solved is this: that it's universal, that anybody can create an identity, or as many identities as they want. And an identity is nothing more than a public-private key pair, or maybe a little bit more, but that's the basic thing. Anybody can make claims about any other identity. And there is a notarization system, so this database is anchored in the blockchain. So that when you make a claim, you know that it was claimed at that time, and you can always prove that. And this is, you know, the basic infrastructure, the basic idea of identity. In the end, the original identity project was to build this basic protocol, and now we are leveraging, we are using this identity protocol to build Polygon ID, which is all these services and all these libraries and all this tooling to work with this protocol.
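The model described here (identities as key pairs, claims as attestations one identity makes about another, and a commitment that anchors the claims database) can be sketched roughly as follows. This is a toy illustration only, not the actual iden3 or Polygon ID protocol: real implementations use proper digital signatures, sparse Merkle trees, and zero-knowledge proofs, and every function name below is a hypothetical stand-in.

```python
# Toy sketch of the identity model: identities as key pairs, claims
# (attestations) made by an issuer about a subject, and a single hash
# committing the claims database, as one might anchor it on-chain.
# NOT the real iden3 / Polygon ID protocol; the "signature" here is a
# simple hash stand-in, not a real cryptographic signature scheme.
import hashlib
import json
import secrets

def new_identity():
    """An identity reduced to its simplest form: a private/public key pair.
    (Toy keys: the 'public key' is just a hash of the private key.)"""
    priv = secrets.token_hex(32)
    pub = hashlib.sha256(priv.encode()).hexdigest()
    return {"priv": priv, "pub": pub}

def make_claim(issuer, subject_pub, attribute, value):
    """The issuer attests something about another identity (a 'claim')."""
    body = {"issuer": issuer["pub"], "subject": subject_pub,
            "attribute": attribute, "value": value}
    payload = json.dumps(body, sort_keys=True)
    # Toy 'signature': hash of the issuer's private key plus the payload.
    sig = hashlib.sha256((issuer["priv"] + payload).encode()).hexdigest()
    return {**body, "sig": sig}

def anchor(claims):
    """Commit the claims database to a single hash, the way a Merkle root
    would be anchored on-chain so claims are timestamped and provable."""
    payload = json.dumps(claims, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

alice = new_identity()
david = new_identity()
claim = make_claim(alice, david["pub"], "member_of", "some-dao")
root = anchor([claim])  # 64-hex-character commitment
```

The point of the sketch is the division of roles the speaker describes: the identity layer only provides keys, claims, and anchoring; reputation systems or logins are built on top of it by applications.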
Yeah, our idea here is to connect all these issuers of trust. There are many amazing projects working on solving the problem, for example, of Sybil resistance or proof of uniqueness, this kind of utility that the applications actually need to operate, especially in the blockchain space. So, as Jordi was saying, we are building this infrastructure layer, but we have two interesting properties to try to attract all this ecosystem of development around identity. One is, let's say, connecting the format of credentials, verifiable credentials, which is based on standards. We have a model of presentation of these credentials that's zero-knowledge-friendly, and with this concept we are able to use ZK to prove attributes about users in a private way. From the beginning, we took privacy as the main property of the protocol, and this is the origin of Circom and all the ZK work that Jordi started. So we have this as an embedded property, with a specific language to connect issuers of trust, users, and applications. And the second property is that we have solved the on-chain interaction. There are a lot of identity protocols, even blockchain-based ones, but basically they operate off-chain, because of the way the communication happens between users, wallets, applications, and issuers. The on-chain interaction is complex. So we have managed to connect this format of verifiable credentials with on-chain interactions. And here's where we expect that we can trigger a lot of impact in the Web3 ecosystem. As Polygon, we have a lot of applications. The apps are already using the Polygon network, and we expect that we can create some kind of reusable reputation built on top of these credentials, so users can reuse this reputation across many applications.
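One way to picture the "anchored database you can prove against" idea from this discussion is a Merkle tree: commit the whole claims database to a single root (the part that would go on-chain), then prove one claim's inclusion without revealing the rest. A minimal sketch under that assumption, not the real protocol, which additionally uses zero-knowledge proofs so that even the revealed claim can stay private.

```python
# Minimal Merkle tree: anchor a claims database as one root hash and prove
# inclusion of a single claim without revealing the other claims.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Fold leaf hashes pairwise up to one root, duplicating the last
    node on levels with an odd number of nodes."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Collect the sibling hashes needed to recompute the root from one leaf."""
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sib = index ^ 1
        proof.append((level[sib], sib < index))  # (sibling, sibling-is-left?)
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(leaf, proof, root):
    """Recompute the root from one leaf and its proof; compare to the anchor."""
    node = h(leaf)
    for sibling, is_left in proof:
        node = h(sibling + node) if is_left else h(node + sibling)
    return node == root

claims = [b"alice:member_of:some-dao", b"bob:age_over:18", b"carol:kyc:ok"]
root = merkle_root(claims)            # this hash is what gets anchored on-chain
proof = merkle_proof(claims, 1)
print(verify(claims[1], proof, root))  # True
```

The verifier only ever sees the root, one claim, and a logarithmic number of sibling hashes, which is why anchoring a single commitment on-chain is enough to make every off-chain claim provable later.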
That makes it so much easier for the user, but it's also a great value add for the network because it creates the stickiness that otherwise you might not have.
So I think we should probably do an entire episode on decentralized identity systems at some point. Let's stick with your predictions for the ZK ecosystem for this year and next year.
So what are your plans for the rest of this year after Main Net launch?
And how do you see this space progressing over, say, the next two years or so?
Well, we have a very clear plan.
Actually, I'm even a little bit nervous because, you know, right now the team has kind of stopped. It's not really stopped, because we are preparing the launch and we have a lot of work on the audits and the launch. But we have a lot of improvements to make, and we have a lot of things in the backlog that we want to improve. We want to improve the costs. We want to improve the compatibility. We are aiming at a type 1. So we want to have some precompiled smart contracts that we need to implement, and we need to grow in this compatibility. There is also the decentralization: moving to a decentralized sequencer, and so on. So this is a clear roadmap for the next year. It's very clear, very specific, I would say.
David, maybe you want to complement that on the Polygon side.
No, no, you already said it. We are closing a release because, as Jordi was saying before, we have been auditing for three months already. So this means that we froze the code a lot, and we have many interesting things in the backlog to implement, because in the end, this project is about scaling Ethereum. So we need to follow this path of acceleration, you know: providing more TPS, providing better costs, and providing, let's say, more equivalence. Also, another objective is to complete decentralization at many levels and to provide a valuable MATIC token, as I said before. So for us, this is the whole year, in the perspective we have today.
Fantastic. And where can people go to find out more about Polygon zkEVM?
So to learn how to build on it, to join the community, to hear about updates, where should they go?
Well, you go to the Polygon web page, and there's a Polygon Wiki site with the documentation for all the Polygon products. If you go to the sub-page for Polygon zkEVM, you will also find the information on the product, the documentation, and how to interact with it.
For us, the team is excited about this launch
and happy to just answer questions
or respond to inquiries.
Yeah, there are also the repositories, and all the repositories are public. So, especially if you are a developer, you know, just check the code; there's a lot of information in the repositories that you can check. And also, you know, in general I give a conference talk or something like that every month or so, so there are a lot of talks that you can also check. Just Google me and there's a lot of information out there.
I love this. David's take on how to get in touch: go to his website, go to the contact us section, and... oh, GitHub pull requests. I mean, that's the easiest way.
You heard it here first.
Thank you guys for coming on.
It was a super fun, very long episode, and I feel like I learned a lot about how the zkEVM works and how it's going to complement the Ethereum ecosystem.
Thank you very much.
Thank you so much.
Thank you for joining us on this week's episode.
We release new episodes every week.
You can find and subscribe to the show on iTunes, Spotify, YouTube, SoundCloud,
or wherever you listen to podcasts.
And if you have a Google Home or Alexa device,
you can tell it to listen to the latest episode of the Epicenter podcast.
Go to epicenter.tv slash subscribe for a full list of places where you can watch and listen.
And while you're there, be sure to sign up for the newsletter,
so you get new episodes in your inbox as they're released.
If you want to interact with us,
guests or other podcast listeners,
you can follow us on Twitter.
And please leave us a review on iTunes.
It helps people find the show,
and we're always happy to read them.
So thanks so much,
and we look forward to being back next week.
