Bankless - The zkWars | zkSync & Scroll
Episode Date: August 3, 2022The zkWars are heating up. With 3 different teams recently announcing their zkEVM, we’re joined by Alex Gluchowski of zkSync and Ye Zhang of Scroll. Optimism’s Ben Jones cohosts as we explore why ...zkEVM is such a big deal, the respective roadmaps to mainnet, and of course… wen token? ------ 📣 Forta | Help Make Web3 a Safer Place https://bankless.cc/Forta ------ 🚀 SUBSCRIBE TO NEWSLETTER: https://newsletter.banklesshq.com/ 🎙️ SUBSCRIBE TO PODCAST: http://podcast.banklesshq.com/ ------ BANKLESS SPONSOR TOOLS: 🚀 ROCKET POOL | ETH STAKING https://bankless.cc/RocketPool ⚖️ ARBITRUM | SCALING ETHEREUM https://bankless.cc/Arbitrum ❎ ACROSS | BRIDGE TO LAYER 2 https://bankless.cc/Across 🦁 BRAVE | THE BROWSER NATIVE WALLET https://bankless.cc/Brave 🌴 MAKER DAO | DECENTRALIZED LENDING https://bankless.cc/MakerDAO 🔐 LEDGER | SECURE STAKING https://bankless.cc/Ledger ------ Topics Covered: 0:00 Intro 3:15 Crypto 101 10:00 Alex and Ye 13:25 zkSync 18:20 Scroll 22:30 What's Different about zkSync 31:10 What's Different about Scroll 37:15 zkProof Performance 41:30 Decentralizing Provers 46:30 Approaching Developers 57:07 Challenges to Upgrading 1:03:40 Vibe Check 1:09:20 Centralized Keys 1:17:45 Closing ------ Resources: Ben Jones https://twitter.com/ben_chain Alex Gluchowski https://twitter.com/gluk64 Ye Zhang https://twitter.com/yezhang1998 ----- Not financial or tax advice. This channel is strictly educational and is not investment advice or a solicitation to buy or sell any assets or to make any financial decisions. This video is not tax advice. Talk to your accountant. Do your own research. Disclosure. From time-to-time I may add links in this newsletter to products I use. I may receive commission if you make a purchase through one of these links. Additionally, the Bankless writers hold crypto assets. See our investment disclosures here: https://newsletter.banklesshq.com/p/bankless-disclosures
Transcript
Hey, Bankless Nation, welcome to another live edition of State of the Nation. Today, we're going to talk about the zero-knowledge EVM, the ZK-EVM. David, got us through. This is going to be formatted a little bit differently in that after the intro, I am actually stepping out. So who do we have on the show? How are we formatting this? And who is helping us out today?
Oh, we got my good friend Ben Jones from Optimism. And Ben is going to be our technical co-moderator here to
help us unpack the ZK EVM.
Ryan, you and I, I think we're pretty smart,
but there's way smarter people in this industry,
and there's way more technical things in this industry
that we just kind of need some help unpacking.
And so we're bringing in some extra help from the optimistic roll-up world
to help us unpack the ZK side of things.
So coming in in the second half of the show,
once we get there, we'll have Alex from zkSync and Ye from Scroll,
and Ben is going to help guide us through this conversation
to understand a little bit more about the world of the ZK.
Yeah.
It's a world that's heating up.
I think the title of this episode is the ZK EVM Wars.
And I think, you know, appropriately, it was like three weeks ago.
All three ZK EVMs announced something big the exact same week.
So the ZK Wars are heating up.
David, I also got to tell them a little bit about our friends at Forta.
And I think this is really important, that we talk about tools like Forta right now,
because, did you know, since 2012, 138 Web3 projects have been hacked, and that's cost victims
a total of $2.3 billion. And of course, yesterday, David, we just saw the latest bridge hack,
$190 million out of the Nomad Bridge. And so we reset the clock. It's now been zero days
since the last incident. Tell us a little bit about how Forta is trying to fix this.
Yeah, we can all do our best with smart contract audits and formal verification and blah, blah, blah, blah, blah, blah.
And everyone should be doing that.
But you can also do an additional layer of protection, which is real-time mempool monitoring.
And so the way I like to explain this is there's that game in like the 90s where like asteroids were coming in and you would shoot the asteroids before they hit the Earth.
This is like that, but like smart contract exploits.
So you like zap the malicious transactions before they actually execute.
And this is what Forta provides for the world: real-time monitoring of
your DeFi app, your NFT project, your DAO, your treasury,
anything that's at risk in the world of Ethereum.
This is a missile defense system for your smart economy.
That's a cool analogy, David.
Look who they're protecting here:
$36 billion. Compound, Balancer,
Liquity, Instadapp, Maker, Lido,
all of these folks use Forta.
So if you want to learn more, go check it out.
The Forta network at forta.org.
There's a link in the show notes,
bankless.cc
slash Forta. All right, I'm going to ask you the question I ask before every State of the Nation,
David. Before we get there, Ryan, I want to talk about a little bit about just some intro stuff
because we got Ben here. And so before, there's going to be a bunch of questions that I think
we don't actually necessarily have to ask every single participant here in this stream.
So we're going to get some beginner questions out of the way. And Ben's going to help us with that
here. So Ben, I want to start with this very first question, which is, what is the EVM,
and why is it important?
And then we'll get to what it means to have a zero-knowledge EVM.
Oh, good question.
And thanks for having me on y'all.
Okay, what is the EVM?
What the heck does that mean?
Okay, basically, it is a way to interpret Ethereum programs
or Ethereum smart contracts.
So basically, a virtual machine is this notion
that you map a bunch of basically numbers,
right?
Everything in a computer expresses a number.
And you map certain numbers to certain instructions,
instruction like add or like divide or like call
when you want to go and call another smart contract.
This is the basis for how you construct smart contracts on Ethereum.
In Ethereum, you write some Solidity code.
What happens behind the scenes is that's taken from text.
It goes through something called the compiler
that turns it into a bunch of numbers,
which is all the instructions that implement the program that you run.
That's the EVM.
Very important.
I would say crucially important.
It is one of the things that makes Ethereum
Ethereum.
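To make that concrete, here is a toy sketch of the idea: bytecode is just a sequence of numbers, and the virtual machine maps each number (opcode) to an instruction. This is not the real EVM or any team's code, though the opcode values used (STOP, ADD, MUL, PUSH1) do match the EVM's actual numbering.

```python
# Toy interpreter illustrating "numbers mapped to instructions."
# Simplified: no gas, no 256-bit words, only four opcodes.

PUSH1, ADD, MUL, STOP = 0x60, 0x01, 0x02, 0x00  # same values the real EVM uses

def run(bytecode: bytes) -> list:
    stack, pc = [], 0
    while pc < len(bytecode):
        op = bytecode[pc]
        if op == PUSH1:                  # push the next byte onto the stack
            stack.append(bytecode[pc + 1])
            pc += 2
        elif op == ADD:                  # pop two values, push their sum
            stack.append(stack.pop() + stack.pop())
            pc += 1
        elif op == MUL:                  # pop two values, push their product
            stack.append(stack.pop() * stack.pop())
            pc += 1
        elif op == STOP:
            break
        else:
            raise ValueError(f"unknown opcode {op:#x}")
    return stack

# "2 + 3" compiled down to bytecode: PUSH1 2, PUSH1 3, ADD, STOP
print(run(bytes([PUSH1, 2, PUSH1, 3, ADD, STOP])))  # -> [5]
```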
Okay, and so we have that.
That's what the Ethereum layer one is.
What does it mean to have a zkEVM?
Why are so many people hyped on a ZK?
Yes.
So I think they're hyped for scalability.
It's very interesting because ZK EVM, right?
What is the ZK there?
It's zero knowledge.
And interestingly, it does use these things
called zero knowledge proofs.
Arguably, ZK isn't the most important
part of the zero-knowledge proof, right?
So when you think of a zk-SNARK, right,
it's this zero-knowledge,
succinct, non-interactive argument of knowledge.
A really key letter in that acronym is the S,
is the succinct.
Because the point of these proofs
is that you can basically prove something
in a very short manner.
So a ZK EVM is about taking the EVM
and converting it or running it
inside a zero knowledge environment
that lets you prove things succinctly.
So what does that mean?
Basically it means you can take the EVM,
and you can write a proof that says the result of these 10 transactions is state X,
state Y, right? You prove the results of the Ethereum virtual machine.
But what's interesting about this is you could make that not 10 transactions, but 100 or
1,000, and still the proof size stays the same. So you can see why this might be very compelling.
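A toy sketch of that property, the proof artifact staying the same size whether the batch holds 10 transactions or 1,000. There is no real cryptography below; the fixed 32-byte "proof" and the function names are illustrative stand-ins, not any rollup's actual API.

```python
# Illustrative interface only -- NOT a real SNARK. It just demonstrates the
# "succinctness" property: prover work grows with the batch, proof size doesn't.
import hashlib
from dataclasses import dataclass

@dataclass
class Proof:
    new_state_root: bytes   # claimed result of executing the whole batch
    blob: bytes             # fixed-size stand-in for the succinct proof

def prove_batch(old_state_root: bytes, txs: list[bytes]) -> Proof:
    # The prover touches every transaction (and, in reality, does heavy math)...
    h = hashlib.sha256(old_state_root)
    for tx in txs:
        h.update(tx)
    digest = h.digest()
    # ...but emits a constant-size artifact regardless of batch length.
    return Proof(new_state_root=digest, blob=digest)

def verify(old_state_root: bytes, proof: Proof) -> bool:
    # The on-chain verifier only ever sees the constant-size proof,
    # never the transactions themselves. (Placeholder check, not real.)
    return len(proof.blob) == 32

small = prove_batch(b"\x00" * 32, [b"tx"] * 10)
large = prove_batch(b"\x00" * 32, [b"tx"] * 1000)
assert len(small.blob) == len(large.blob)   # proof size independent of batch size
print(verify(b"\x00" * 32, large))          # True
```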
Can I just break this down into kind of layman's language here?
So the EVM, that's the thing that
turns Ethereum from a calculator into a computer. And the EVM is the thing that Bitcoin does
not have, and the reason it functions more like a calculator. You can't run programs on top of it,
right? And then the ZK part makes the EVM, the computer part of Ethereum, much more scalable
because it compresses it into this very tiny size. Yeah, I think that's a good way of thinking about it.
There's a little more nuance in terms of, like, you know, for example, if you're posting 10 transactions versus a thousand transactions and you're still rolling those up, right, if we're talking about a ZK rollup, right?
Then there's still some costs there that fundamentally can't be compressed in quite the same way.
It compresses quite a bit, but it doesn't, like, disappear.
There's more nuance, but that's absolutely the case.
What would you say?
Why are people so stoked on a zkEVM?
Why is this such an important thing to fight over?
Why are there so many teams, like, racing to get it?
Right, yeah.
I mean, so there's a few reasons.
The core of it, though, is that the EVM is what powers Ethereum,
and it's what has all the network effects.
So I'm a co-founder of Optimism, which is an optimistic rollup protocol.
We spent a lot of time making optimistic rollups work with the EVM.
Why did we do that?
It's because that's where all the applications and all the developers live.
So to build a good scaling solution, we want to do that.
And it's definitely been a limitation of ZK scaling solutions so far
that they can't go ahead and take advantage of this.
I think people are excited because there's potential with the EVM
to build on that network.
One last question before we get to panelists.
Ben, what are you hoping to get out of this conversation?
What should listeners pay attention to listening to this?
What are you also hoping to learn here?
Ooh, good question.
So it depends on who the listener is, right?
I think one of the obvious listeners would be
a user of these protocols, right?
So I think you want to listen for: what are the security properties
of the thing that you're interacting with?
What is the roadmap of this thing that you're interacting with?
I think if you're a developer, then really what you
want to be thinking about is: what does it actually
mean in practice for my application?
There's different ways that you can go about implementing.
There's different levels of support for different things
that you'll have access to.
And so I think that's two bits of framing.
As for what I want personally, oh man, everything
under the sun, really, I feel like, but including those questions.
I'm secretly most excited to hear how we can integrate into.
That's a whole other.
Can you guys also go over?
So I'm going to be grabbing the popcorn here and just watching as a bystander.
But just throwing my one question in is, can you guys talk a little bit about bridges?
I know that's not, like, exclusive to ZK, but it's kind of like a roll-up type technology.
and I'm, I think a lot of people listening are probably increasingly concerned about the security of bridges from one chain to another or from the main net to roll up from a main net to an alt L1.
So I'd love to hear a bit more about that.
And guys, we're going to get right to the episode.
We're talking all about ZK EVMs.
But before we do, we want to thank the sponsors that made this episode possible.
Rocket Pool is your decentralized Ethereum staking protocol.
You can stake your ETH in Rocket Pool and get rETH in return, allowing you to stake your
ETH and use it in DeFi at the same time.
You can get 4% on your ETH by staking it with Rocket Pool, but you can get even more by running
a node.
Rocket Pool is the only staking provider that allows anyone to permissionlessly join their
network of validating Ethereum nodes.
Setting up your Rocket Pool node is easier than running a node solo, and you only need 16 ETH
to get started.
You get an extra 15% staking commission on the pooled ETH that uses your node to stake.
You also get RPL token rewards on top.
So if you're bullish on ETH staking, you can boost your yield by adding your node to the decentralized Rocket Pool network,
which currently has over 1,000 independent node operators.
It's yield farming, but with Ethereum nodes.
You can get started at rocket pool.net.
And you can also join the rocket pool community in their Discord.
You can find me hanging out there sometimes in the chat, so I'll see you there.
Arbitrum is an Ethereum layer 2 scaling solution that is going to completely change how we use DeFi and NFTs.
Some of the coolest new NFT collections have chosen Arbitrum as their home,
while DFI protocols continue to see increased liquidity and usage.
You can now bridge straight into Arbitrum from more than 10 different exchanges, including
Binance, FTX, Huobi, and Crypto.com.
Once on Arbitrum, you'll enjoy fast transactions with cheap fees, allowing you to explore
new frontiers of the crypto universe.
New to Arbitrum, for a limited time, you can get Arbitrum NFTs designed by the famous
artists Ratwell and Sugoi for joining the Arbitrum Odyssey.
The Odyssey is an eight-week-long event where you can play on-chain activities and receive a free
NFT as a reward. Find out more by visiting the Discord at discord.gg
slash arbitrum. You can also bridge your assets to Arbitrum at bridge.arbitrum.io and access all of Arbitrum's
apps at portal.arbitrum.com, in order to experience DeFi and NFTs the way they were always meant to be:
fast, cheap, secure, and friction-free. MakerDAO is the OG DeFi protocol, the first DeFi protocol
to ever exist, even before we called it DeFi. MakerDAO produces DAI, the industry's most battle-tested
and resilient stablecoin. Using Maker, you don't need to sell your collateral if you need
liquidity. Instead, you can spin up a Maker vault and use your collateral to mint DAI directly.
With Maker, the power to mint new money is in your hands. And there's something new in the Maker
DAO ecosystem. Every time a new Maker vault is opened, the owner can claim a POAP, which contributes
funds to One Tree Planted, an organization with ongoing global reforestation efforts, creating a world
where digital participation and the health of our environment can live side by side. Soon, Maker will be
present on all chains and layer twos, bringing the biggest and best DeFi credit facility to
everywhere there is DeFi. So follow Maker on Twitter at MakerDAO and learn from the oldest and
most resilient DAO in existence. In the top right corner, we've got Alex Gluchowski from zkSync.
Alex, welcome to the show. Hello, everyone. Very excited to be here. And then in the bottom half,
we got Ye from Scroll. Ye, welcome to Bankless. Hi, thanks for having me. Okay. Third time's the charm.
Before we get into some of the technical details, I just want to go around into the background of each
of your respective teams, just to get a little bit context about to where each one came from.
So, Alex, we'll start with you. Where did zkSync come from? Like, what's the genesis story?
What's the background? Can you kind of walk us through that?
Absolutely, happy to. So I'll start with my personal background. You might know that I was
born in Soviet Ukraine, and I grew up kind of like after the collapse of the Soviet Union.
And I was very impacted by the economic and social collapse and all the things that
were going on there. And it brought
me to conclusion that there is nothing more impactful you can do in this in this world today
than increase freedom, increase freedom of societies, increase individual freedom. And that is
what brought me to crypto. Part of that is the, you know, the potential of crypto to enhance
freedom, which I think is unparalleled with anything else. And the second part of my motivation
was the technological challenges. I have a software background, and I was a CEO for the last
couple of years before moving into the space.
And I just was looking at, like, it was three and a half years ago.
Ethereum was just getting started.
All the protocols were just being built.
There were a lot of issues around usability and security and, like, everything was missing.
I looked into kind of what's going to be the end game?
I was always interested in the end game.
Like, how do we get this thing in the hands of everyone in the world?
And it was clear that all the problems that are very apparent are going to be
solved by some teams, except scalability.
That seemed like a really big black box.
So I looked into what's going on in scalability space, and there was plasma.
And Ben, I know you were working on Plasma.
My co-founder, Alex Vlasov, was also working on that.
So I met him at Devcon 4, I guess, in Prague.
And we both encountered the idea of zero-knowledge proofs, of succinct
zero-knowledge proofs.
And we both had an immediate thought that, oh, you can apply that to plasma and solve most of its issues and actually get something that will work and bring us to mass adoption.
And back then, crypto protocols were not, like ZK protocols were not as mature as they are today.
But it was pretty clear that over the course of next two, three, five years, we will get something that is workable in production.
But we actually got there a lot sooner:
a year later, Sonic appeared, and two years later, we got PLONK, and that was something very, very useful.
STARKs appeared around the same time, and all the protocols are now converging, like, you know,
everything is going in the same direction, and we will have incredibly fast protocols, provers,
you know, all the mature tooling around that, to get us to scale Ethereum with ZK in no time.
And before I hand it over to Ye from Scroll, Alex, zkSync has been around Ethereum for a while.
I remember using ZK Sync to like pay for Gitcoin grants in like 2020 or something.
Can you just like speed run us through how ZK Sync has integrated itself into Ethereum over the last few years?
Sure.
So when we started out the project, we built the very first working ZK rollup on Ethereum.
We called it Ignis back then, but we had to rename it to zkSync because of a naming
conflict. And it took us, no, sorry, it took us one year to build the fully productive mainnet version
for simple payments and swaps, and it's now been live on mainnet for two years. And after that, it was very clear that
most people need smart contracts, and simple application-specific ZK rollups or
scalability systems are going to be very, very niche; you really need this generic
programmability with Turing-complete programs. So we set out to build what is now known as
the zkEVM, the generic EVM-compatible framework that is scalable under ZK conditions.
We launched the internal testnet over a year ago.
We opened it to the public with UniSync,
a Uniswap demo, last fall,
and the testnet has been open to everyone
for over half a year now.
And we just announced a few weeks ago
that we will be live on mainnet in
100 days, and now 87 days remain.
I'm sure, Alex, that you wake up every single morning
and be like, there's 87 days left.
We don't have a huge counter, and it just goes.
All right, let's turn the conversation to Ye from Scroll.
Ye, can you explain a little bit about the background behind Scroll?
Scroll is newer on the scene.
And so this is something that's new for a lot of listeners, including myself.
Can you just explain a little bit about the background of Scroll?
Where'd Scroll come from?
Yeah, yeah, sure, happy to.
So, Scroll started one and a half years ago.
We have three co-founders, including me, Haichen, and Sandy.
We were actually introduced by a mutual friend in the Ethereum community.
So before that, I was doing ZK research.
It was purely about crypto and ZK stuff, about math and cryptography,
not really about cryptocurrency.
And I was working on proving algorithms and hardware acceleration for zero-knowledge proofs.
Years ago, proving was the biggest bottleneck for using zero-knowledge proofs in practice.
So that's the problem I worked on: how to make the prover more practical and faster, and how to support larger circuits.
And Haichen is an expert in building robust systems.
He got his PhD from the University of Washington.
He has years of experience working at Amazon, building very complicated systems
based on compilers, programming languages, and GPUs.
So he has tons of experience in how to make a system more practical and run it in the real world.
And Sandy, you know, the three of us differ a lot in our backgrounds.
Sandy has been in the broader crypto space for many years.
She has been doing investment in crypto since 2017.
She has incubated many application-level projects and also institutional-facing products.
She was attracted to Ethereum most strongly due to the rapidly growing community, the ethos of
Ethereum, and also innovations from developers.
And when we met, it was instantly clear that we shared the same vision on what was important,
and there was nothing more exciting for us than scaling the base layer of Ethereum and onboarding
the next billion users for Ethereum.
Because for a long time, it wasn't very clear whether the zkEVM would even be
technically possible.
Previously, ZK rollups could only support payments, swaps, or other simple applications
using some fixed circuit.
But because we are working in this area, we know that recent
innovations have made that finally possible.
And we all want to build something which is truly impactful for the whole Ethereum
community, and we were joined by our common vision to use this advanced technology, combined with
ZK research, to really solve the scalability issue of Ethereum.
So that's basically the genesis of Scroll.
And although we are new, we have grown really fast. We have 40 people on our
team now, and 30 are engineers and researchers,
so we are strongly technically focused.
And we have an incredibly strong technical team, with most people having a strong
math and cryptography background. And this background is immensely useful for understanding the backbone
of the zkEVM, which relies heavily on zero-knowledge proofs, basically on math and crypto stuff.
And many members of our team have years of blockchain development experience and
have been active contributors to a lot of open-source repos like Foundry and stuff like that.
And because our vision is aligned with Ethereum's ethos of being decentralized, many members of our
team are also, you know, quite global and decentralized. We work remotely. We have people
across Asia, the U.S., and Europe, including China, Singapore, New York, the Bay Area,
Portland, Ukraine, Australia, and a lot of other places. So I'm very proud of, you know, our current team,
and we are super focused on building. Although we were in stealth previously,
we want to build the best, most user- and developer-friendly zkEVM solution. And we work well together
as a remote team.
And also, we have been very careful
in how we grow this team, by bringing on colleagues
who are highly value-aligned, high in creativity,
and have the right motivation to work with us in this space.
Nice.
Yeah, beautiful.
All right, David, I want to ask some questions.
Can I get into it?
Go for it.
OK, OK.
OK, OK.
So I think the first question that I would like to ask
the both of you is: it's very clear that the zkEVM, to your point,
in your intro questions to me, David,
has been, you know, a problem on the horizon
that we've been thinking about for a very long time.
And like when we first started the ZK scaling stuff,
we did not start with the EVM, right?
Because it's a very, very hard problem.
So obviously there's been huge strides made to your point earlier, Alex,
and I have a suspicion that there are different approaches
being taken to solve this problem, right?
Usually when there's a hard problem,
you get a few different strategies or ways.
Yeah.
So what I'm really curious to hear from
the folks on this panel are, what do you think are the unique things about the approach that you take?
And what are the things that are shared between the different approaches?
And notably, right, we have two panelists here.
There are a few other teams building zkEVMs as well.
So I'm really curious to hear from you guys what you think the lay of the landscape is
and what your strategy in particular is to tackle this in some unique way.
And Alex, we'll start with you.
Sure.
So the approach we are taking goes
back to how we think about the strategy of building zkSync, and that goes back to the mission.
Our mission is accelerate the mass adoption of crypto for personal sovereignty. And accelerate
means move faster. We believe that it's coming, no matter what, but we wanted to arrive not in 10
years, in two years. And to move faster, you need to be very pragmatic. So what we
learned from building the first version is that you're much better off launching something
that works and then gradually iterating on it than waiting for a perfect solution, trying
to construct this abstract beauty in a vacuum. I have this picture which you might know.
This is how we operate, right, building gradually and not trying to get perfection.
And how we started: initially there was no real
way, given the limitations of provers and of the protocols themselves, to build a Turing-complete
version of the EVM. So we tried to build a non-Turing-complete version of smart contracts,
and we had to construct a new language called Zinc. So we had to build a compiler team,
and we took Rust as a base, not Solidity, because we thought, like, if you have to learn
a new language, which is going to have a lot of these internal limitations, it doesn't really
matter anyway, you need to learn from scratch.
So let's take something that is more familiar to, like, broader audience.
We quickly learned that it's going to be very problematic to force people to re-learn the
language and, like, to adopt the developer tools, everything.
Like, that's just going to be a mess for adoption.
That's not what engineers want on the one hand.
On the other hand, there were some breakthroughs in the cryptography, like in PLONK,
recursive PLONK, and improved efficiency for certain gadgets,
for certain cryptographic functions that are required to build something like the EVM,
so we thought, okay, we're going to take on the challenge
and actually build a full zkEVM, something fully compatible with Ethereum,
where you can take existing smart contracts,
written in native Solidity or Vyper,
and just pick them up and launch them on this thing.
And we looked, again, like what's going to be the fastest way to get there?
So our head of engineering, Anthony Rose, is actually coming from SpaceX,
used to be in charge of a satellite factory there.
And we borrowed this concept of the critical path from SpaceX.
Like, what's the shortest possible way for us to get to the goal?
And it turned out that we can reuse our compiler skills
and build something very similar to what we now know
StarkWare is doing with Cairo, namely to create
a virtual machine that is optimized for provability under zero-knowledge
proofs. It's specifically optimized to be very efficient under SNARKs.
And then create this virtual machine with all the conventions of Solidity and
the EVM, so that all the calls, all the interactions, all the interfaces are exactly
the same as the EVM, but you just have a different set of opcodes underneath.
And then we can use the compiler to take the code written for the EVM and bring it over here.
And this is the approach we're taking; as far as I know,
no other zkEVM team is working in this direction.
StarkWare is doing this for Cairo, but you have to relearn it, the Cairo language.
They have something to transpile code from Solidity to Cairo, called Warp,
but you actually have to maintain the code in Cairo;
there is no way for you to keep Solidity as the source of truth, as your base code.
And it would be very, very hard to keep the same code base between layer one and layer two or between different layer two.
That was something that was not acceptable to us.
We wanted the code base to be exactly the same, executable in mainnet as well as in any layer two.
That's our approach.
I'll pass it over to Ye.
Ben, does that generate any follow-up questions, or should we throw it over to Ye?
Okay, wait, let me just, okay, so let me just make sure I'm following, right, Alex?
So basically what we've said is we talked about in the intro of the show that we have this thing called the Ethereum virtual machine.
It's how you interpret, you know, smart contracts in Ethereum main net.
Basically, what you've written is a virtual machine that's very similar, that's meant to be very mappable and close and related, but that is optimized for ZK proving.
And then you can basically, relatively easily, take Solidity code that is meant to go to the EVM target
and compile it instead to this other target. Is that right?
That is correct.
So we are actually not, we don't have a native compiler from Solidity.
We are using the Solidity compiler to produce the intermediate Yul representation of the code.
Exactly.
And then we're going from Yul to our virtual machine, which makes that step a lot easier.
And in between, we're using LLVM as our compiler framework,
which is a very mature, very well-tested, battle-tested framework with a lot of tooling,
a lot of optimization.
So our code is actually a lot more optimal than what the Solidity compiler natively produces for the EVM;
we have three times fewer opcodes in the final result.
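To visualize the flow Alex describes, here is a conceptual sketch of the staged pipeline: Solidity source to Yul IR, Yul to LLVM IR, LLVM IR to bytecode for a ZK-friendly VM. The stage functions below are hypothetical placeholders rather than zkSync's actual toolchain; only the shape of the pipeline is the point.

```python
# Hypothetical, simplified pipeline: Solidity -> Yul -> LLVM IR -> zkVM bytecode.
# These functions are stand-ins that only illustrate the staging, not real tools.

def solidity_to_yul(source: str) -> str:
    """Stage 1: the stock Solidity compiler can emit Yul IR from source."""
    return f"// yul for: {source[:24]}..."

def yul_to_llvm_ir(yul: str) -> str:
    """Stage 2 (hypothetical): lower Yul into LLVM IR so LLVM's mature
    optimization passes can be reused."""
    return f"; llvm ir for: {yul[:24]}..."

def llvm_ir_to_zkvm_bytecode(ir: str) -> bytes:
    """Stage 3 (hypothetical): an LLVM backend targets the ZK-friendly VM,
    which keeps EVM calling conventions but uses different opcodes."""
    return ir.encode()[:8]

def compile_contract(source: str) -> bytes:
    return llvm_ir_to_zkvm_bytecode(yul_to_llvm_ir(solidity_to_yul(source)))

print(compile_contract("contract Counter { uint256 public x; }"))
```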
And yeah, so this approach is the fastest to bring
to production.
And yeah,
we started working on this
two years ago. So we have something
that's very, very mature on the compiler side.
It works;
like, we're covering all the tests
from the Ethereum test suites.
It's running
in a very stable way on our testnet,
which is public. A lot of teams
have deployed there. We have hundreds
of teams already working on the testnet.
Everything works really, really well.
But the most important
thing is that it produces resulting virtual machine bytecode that is very optimal.
The execution, the proof generation for the zkVM, for this optimized zkVM, is going to be orders
of magnitude cheaper than the approaches that try to mimic the EVM at the bytecode level.
So we are talking about very, very low cost per
transaction, which we can scale to support both the high load of DeFi and NFTs, but also
very broad use cases, like gaming and all the new things that are coming to blockchain,
once we can actually support the scale of tens or hundreds of thousands of TPS.
And in those cases, it really matters.
Like, if your transaction costs 10 cents or 0.1 cent, it matters a lot.
This is the quantitative step that unlocks a qualitative difference
in what you can actually build and execute and run on these networks.
Beautiful. All right. Let's turn to Ye from Scroll.
Ye, can you talk about the competitive difference that Scroll is bringing to the table?
How is the approach to the zkEVM from Scroll's side of things unique from the others
that you would find in the same landscape?
Yeah, yeah, sure. I think technology-wise, there are mainly
two differences. One is on the zkEVM
side and the other is on the
infrastructure side. Firstly,
on the zkEVM side, our
goal is that we want to have the deepest
level of compatibility with
the EVM, down to the client
implementation. So by saying
that, we are not only being
compatible with Solidity at the
language level, like either Yul
or Vyper or anything else.
We will be compatible with the
EVM itself at the bytecode level,
which means anything, as long as you
can use the existing compiler to compile down to EVM bytecode, we can prove that it's correct.
And also, by saying client implementation, we are reusing the existing Ethereum node implementation
called Geth to generate our layer two blocks. So this is pretty similar to what Optimism is doing,
just trying to reuse the implementation from Ethereum to inherit the performance, the security,
and also, you know, with long-term upgrades, we will be more aligned with Ethereum.
So there is no compiling, there is no interpreting
in between. So it brings us another level of compatibility, more than just the RPC interfaces;
it's a deeper level of compatibility on the implementation side. And for users,
it means they can do whatever they can do on Ethereum using the same UX. And for developers,
they can reuse all the Ethereum tooling, even including some debug tooling where you definitely
need to go down to the bytecode level. For example, when you need to look up some stack
information and things like that. And they can migrate their code to Scroll without any
modification. And also, this brings us another benefit, in that our
implementation will be the closest to the end goal of Ethereum, where the zkEVM can eventually
be used to prove layer one mainnet blocks. So in that sense, we are not only building for
ourselves. We are not building just for layer two purposes. We are actually co-building
this for the future of Ethereum, because,
you know, it's very beneficial for Ethereum in the long term.
So that's on the compatibility and zkEVM side.
And another thing I want to mention is that on the infrastructure side,
we have designed a decentralized prover network.
So there is both a technical difference, an innovation there, and also a strategic difference,
because, you know, a big problem mentioned by maybe optimistic rollups or other
teams is that the proving cost is large, because, you know, generating a proof is really slow
and considered to have, you know, very high cost.
It's very expensive.
But we can actually solve this problem
by allowing many people to generate proofs in parallel.
And also this can drive the efficiency
of proving hardware to the extreme,
and eventually even have ZK ASICs to support this proof generation.
Anyone can just use this type of hardware,
or use a general GPU or whatever they have,
to generate proofs for us, for our platform.
And, you know, we prioritized
this because this is not only a technical advantage but also a strategic
difference, because, you know, more immediately, especially with the Eth merge, miners could
potentially just be our provers. They can reuse their GPU machines to
join our ecosystem if they don't want to, you know, switch to another proof-of-work chain. So that's,
you know, both a technical innovation and also a strategic move for us. Oh yeah, let me ask.
Okay, so I want to do the same thing and make sure that I'm following.
So basically what Ye said, I think, correct me if I'm wrong,
is that you effectively have a program that is written,
I would assume, that's running in some sort of lower-level zkVM,
that is this Turing-complete machine,
and then you're running an EVM interpreter on top of that, basically.
Is that right?
So we don't even have this, you know, middle layer.
We directly, so for example, if you receive a transaction, you execute it on Geth.
And then you output some information, like the execution trace, like which opcode you are executing.
And then you use this trace as a witness to feed into this circuit and prove that it's a valid trace using a validity proof.
And once you have this proof, it means, oh, this trace is valid,
so, you know, the transaction is correct.
So there is no, like, middle stuff;
you just reuse all the information from the existing client implementation.
Yeah.
Got it.
And that circuit, I guess what I'm getting at is that circuit.
So when you feed in, okay, so it makes sense to me that you run the transaction on
Geith, like vanilla thing.
And Geith has a wonderful thing called the transaction trace, which will give you sort of
step by step set of these instructions of the Ethereum virtual machine that we talked about,
right, Geth will like break that down instruction by instruction.
It went here and then it went there and then it went there.
So that all makes sense.
But then what is actually running, when that information,
is transferred to the Prover, what is the circuit running?
Is there some sort of Turing-complete thing that is then running the EVM on top of it?
Have you actually written an EVM-like circuit for all this?
What is that part of the stack?
Yeah, yeah.
So basically, what we are doing to handle this is that for each opcode, we will implement
a sub-circuit to prove it's correct.
For example, for the ADD opcode, we prove that this number, added to this number,
equals this number. So we have a specialized sub-circuit for each opcode. It's a one-to-one mapping.
Got it.
And then, in the EVM circuit, we can pull those sub-circuits together
and enable them using some selectors. We call that a selector in the circuit:
basically, if you meet this opcode, you enable this certain constraint,
and if you meet the other opcode, you enable the next one. And then you prove
that this is correct by constraining each opcode.
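As a rough illustration, here is a toy version of that "one sub-circuit per opcode, picked by a selector" idea. Real zkEVM circuits express these as polynomial constraints over a finite field; plain Python checks stand in for constraints here, and none of this is Scroll's actual circuit code.

```python
# Toy "EVM circuit": each opcode has its own sub-circuit (constraint), and a
# selector (the trace step's opcode) decides which constraint must hold.

def add_constraint(step):      # sub-circuit for ADD: a + b must equal out
    return step["a"] + step["b"] == step["out"]

def mul_constraint(step):      # sub-circuit for MUL: a * b must equal out
    return step["a"] * step["b"] == step["out"]

SUBCIRCUITS = {"ADD": add_constraint, "MUL": mul_constraint}

def check_trace(trace) -> bool:
    """The Geth-style execution trace acts as the witness; every step must
    satisfy exactly the sub-circuit selected by its opcode."""
    return all(SUBCIRCUITS[step["op"]](step) for step in trace)

valid_trace = [
    {"op": "ADD", "a": 2, "b": 3, "out": 5},
    {"op": "MUL", "a": 5, "b": 4, "out": 20},
]
print(check_trace(valid_trace))                                   # True
print(check_trace([{"op": "ADD", "a": 2, "b": 3, "out": 6}]))     # False
```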
I'm really interested to hear from you, Alex, on this. It's interesting. I know I've been on a panel with you,
and you love parallelized proof generation as well. I think all the ZKers do; I totally do as well.
But it also seemed like earlier you said you had a bit of a different approach for performance
reasons. So I'm just curious to hear what each other's perspectives are on these things. Like,
is the performance of ZK proofs an issue or not? I feel like I get conflicting answers sometimes.
That's a great question. Yeah. That's a really great question. So I don't see that we have different
strategies here.
All ZK teams will eventually work towards decentralizing the
Prover.
We are definitely working on this;
this is very important to us.
We don't want there to be any single point of failure.
We don't want to run any operator that controls the validation of the
transactions going into layer two, or the proof generation.
So it will be decentralized.
We can reuse Ethereum miners indeed.
We've been working on GPU prover
hardware for the last two years as well.
And we have some pretty amazing results: the GPU prover that we have, that runs on
ordinary consumer GPUs, is something like 50 times faster, like 50 times cheaper, than
the proofs produced on CPUs.
And for proper context, it's really important to understand that whenever we talk about
prover efficiency, we are always talking about the
amortized cost per transaction.
The prover is always distributed.
It's always done in parallel.
No one is running a prover on a single machine.
If they did, it would take them hours to generate a simple proof.
What we're doing, and what probably other teams are doing as well, is, like, we run it on as many machines as we need in parallel.
And since the structure of the circuit is such that we can recursively combine many circuits
together, our latency to generate proofs with GPUs is going to be less than a minute.
So we'll be able to really get the blocks really fast.
So what really matters at the end is: what is the cost of this proof generation divided
by the number of transactions?
What is the cost per transaction?
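A quick back-of-the-envelope sketch of that amortization point. The dollar figures are made-up placeholders, not benchmarks from zkSync, Scroll, or anyone else.

```python
# Amortized proving cost: total prover cost for a batch divided by the number
# of transactions proven in that batch. Numbers are purely illustrative.

def cost_per_tx(prover_cost_per_batch_usd: float, txs_per_batch: int) -> float:
    return prover_cost_per_batch_usd / txs_per_batch

# Same hypothetical $50 GPU proving bill per batch, different batch sizes:
print(cost_per_tx(50.0, 500))      # 0.10  -> ten cents per transaction
print(cost_per_tx(50.0, 50_000))   # 0.001 -> a tenth of a cent per transaction
```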
And that is where the differences will materialize very significantly, with our approach, or
what StarkWare is building, versus what Scroll is doing, this very, very
ambitious goal of Scroll to make an EVM circuit, like circuit-level compatibility.
That is going to be several orders of magnitude higher.
We're very curious to see what the numbers actually are.
But that is very complex.
And that is very complex to maintain as well.
So huge respect to the team who are trying to make this.
We didn't trust ourselves;
we went for a much simpler approach, because we know that the complexity you add really
grows non-linearly with the number of systems you add and with the number of layers
you add to each system. It just explodes at the end. So this is with regard to GPUs.
Sorry, no, finish up, Alex, and I'll ask my next question. So this is with regard to decentralizing
the prover and GPUs. And I just want to add that we decided explicitly not to reuse Geth, like,
not to rely on standard nodes for transaction execution, for block building, for deciding what goes into
transactions, because Geth is known to have very strong limitations on throughput that it will bring
with itself. And we decided, like, if we're building a system that should be capable of
running tens of thousands of transactions per second, we should redesign the node from scratch.
We're writing it in Rust. It's highly optimized, and we also have a very strong engineering team
working on optimization of the node, because we don't want the node to be the bottleneck.
I think Alex,
you just opened up the conversation to the EVM compatibility versus EVM equivalence
conversation.
But before,
I wanted to go back and just make sure we really knock down the decentralizing
the prover for the layman,
because that part of the conversation made me feel like a dog driving a car.
Can you,
Ben,
can you walk us through why is,
what does it mean to decentralize the prover?
Like,
Why is that significant for just like the average user to pay attention to?
Sure.
So I'll talk about this.
I'm not the ZK experts like these folks, but I have a rough sense.
So basically, when you construct these zero knowledge proofs, right, that is an operation
called proving, right?
Unlike the operation called verifying, which is like in a ZK role of what the smart contract
on chain does, what all the nodes do then downstream of that.
So in general, what you have to do, basically, when you produce the zero knowledge proof
is a bunch of extra cryptographic moon mathy computation.
that will give you some fancy numbers
that allow the verifier in this succinct manner,
in this short, you know, constant or logarithmic size manner,
to check whether or not the proof is valid, right?
And so this is a computationally intensive operation
to do it in the zero-knowledge,
succinct manner that we're talking about.
You basically, if you want to prove something,
it's kind of like you don't just run the computation
because that's not a proof.
That's just you doing it.
You have to run the computation
in some sort of circuit, zero-knowledge, math,
context that gives you some cryptographic steps along the way that you can sort of combine and aggregate and get this succinct proof.
Okay. I'm guessing that doing the ZK stuff is like adding an order of magnitude of difficulty on top of a computation.
Yeah. These guys can talk about those numbers much better than I can, but it adds overhead for sure. For sure adds overhead.
And is that, like, the cost when a transaction fee happens on, like, a ZK rollup, is that the cost of this thing? Is that related? Are these related?
So it
will be related, right?
I think, I haven't looked into the details of either of these folks' fee markets,
but that is, I think, what would be a very reasonable thing for a fee market to do.
I think probably a requirement.
There are additional costs, right, because even in a ZK rollup, you're still rolling up that data,
which means you're posting it to Ethereum or whatever, right?
I know at least Alex has, like, other more plasma-type things that don't require that.
So it's going to be a chunk of the cost, absolutely.
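For intuition, here is a rough decomposition of where a ZK-rollup user fee could come from, along the lines Ben sketches: data posted to Ethereum plus prover cost, both shared across the batch, plus a small per-transaction execution component. The split and the numbers are illustrative assumptions, not any protocol's actual fee model.

```python
# Illustrative ZK-rollup fee decomposition -- hypothetical numbers only.

def rollup_fee(l1_data_cost: float, prover_cost: float, txs_in_batch: int,
               l2_execution_cost: float) -> float:
    # L1 data posting and proof generation are amortized over the whole batch;
    # only the L2 execution piece is paid per transaction on its own.
    return (l1_data_cost + prover_cost) / txs_in_batch + l2_execution_cost

print(rollup_fee(l1_data_cost=30.0, prover_cost=20.0,
                 txs_in_batch=1000, l2_execution_cost=0.002))   # ~0.052 per tx
```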
Okay.
Okay, cool.
Did anything that Ben and I just said stand out to you guys, that you want to add
a comment to?
That's good to me.
Cool.
Yeah, yeah.
All right.
So we definitely got to talk about EVM compatibility and EVM equivalence because I think
that's really, Alex, you said that Geth is really just not optimized for some of this parallel
proving magic.
But, Ye, from what I've gathered, Scroll is really going after what Alex called, like, the
very ambitious task of figuring out how to make Geth and a ZK rollup work together.
So I think that's where we want to take this conversation next.
But before we do that, we've got to talk about
some of these fantastic sponsors that make the show possible.
There is a brand new staking feature in the Ledger Live app today.
We all like staking the assets that were bullish on,
and now you can stake seven different coins inside the Ledger Live app.
Cosmos, Polkadot, Tron, Algorand, Tezos, Solana, and of course Ethereum.
With Ledger Live, you can take money from your bank account,
buy your most bullish crypto asset, and stake that asset to its network,
all inside the Ledger Live app.
Through a partnership with Figment, Ledger also lets you choose which validator you want to
stake your assets with. And Ledger is running its own validating nodes, offering a convenient way to
participate in network validation, and it even comes with slashing insurance. Ledger Live is truly
becoming the battle station for the bankless world. So go download Ledger Live. If you have a ledger already,
you probably already have it and get started securely staking your crypto assets.
The Brave Browser is the User First browser for the Web3 internet, with over 60 million monthly
active users. And inside the Brave browser, you'll find the Brave Wallet, the secure multi-chain
crypto wallet built right into the browser. Web3 is freedom from big tech and Wall Street,
more control and better privacy. But there's a weak point in Web3: your crypto wallet.
And most crypto wallets are browser extensions, which can easily be spoofed.
But the Brave wallet is different.
No extensions are required, which gives Brave browser an extra level of security versus other
wallets.
Brave wallet is your secure passport for the possibilities of Web3 and supports multiple chains,
including Ethereum and Solana.
You can even buy crypto directly inside the wallet with RAMP.
And of course, you can store, send, and swap your crypto assets, manage your NFTs,
and connect to other wallets and defy apps.
So whether you're new to crypto or you're a season pro,
it's time to ditch those risky extensions,
and it's time to switch to the Brave wallet.
Download Brave at Brave.com slash banklists
and click the wallet icon to get started.
The Layer 2 era is upon us.
Ethereum's Layer 2 ecosystem is growing every day,
and we need Layer 2 bridges to be fast and efficient
in order to live a Layer 2 life.
Across is the fastest, cheapest,
and most secure cross-chain bridge.
With Across, you don't have to worry about high fees
or long wait times.
Assets are bridged and available for use almost instantaneously.
Across's bridges are powered by
Uma's optimistic oracle to securely transfer tokens between layer two's and Ethereum.
Across is critical ecosystem infrastructure, and Across V2 has just launched.
Their new version focuses on higher capital efficiency, layer two to layer two transfers, and a
brand new chain with Polygon, all while prioritizing high security and low fees.
You can be a part of Across's story by joining their Discord and using Across for all of your
layer two transferring needs.
So go to across.to to quickly and securely bridge your assets between Ethereum, Optimism, Polygon,
Arbitrum, or Boba Network.
Okay, welcome back, y'all. We're back in action. So, okay, we just had a really interesting discussion on, you know, some of the differences in these approaches. And one thing, to your point, David, that came up just before the break is understanding the role of, I think, so I take some guilt for creating this term, which could be very confusing, but EVM equivalence is, like, a wonderful term that we use at Optimism to describe how we kind of try to move towards using Geth as much as possible. And this has been a matter
of debate and there are differences between our two panelists here and what they do.
So I'm really curious to hear from you guys how you think about the developer experience.
We talked a lot about the approaches sort of from a proving architectural cost, GPU type
of a perspective, but there's also the side of developers and what they're going to experience.
So I'm curious to hear from you guys from the panelists what your different approaches to
this developer experience are.
What does it really look like for a developer to be implementing on one or the other?
And how do you think about that long term?
Is the other question.
Are you at a place where you're comfortable?
Are you trying to move in a specific direction?
What's the take?
So maybe we'll start with Ye.
Yeah. For developer experience, that means we will provide
exactly the same execution environment as the EVM on layer one.
So developers can pretty much reuse all the developer
tooling around it, including the debugging tooling.
Like, if you want to go into the stack and look up
some information, you can still use that.
And because, as far as I know, many
Geth developers, and many even application-level developers, are very familiar with the Geth
implementation, it's easier for them to debug and do some in-depth security analysis.
So I think we have some advantage there because we are using a fork of Geth to generate
our blocks.
And also, because we are compatible with all the developer tooling,
developers are very comfortable with, you know, every tool they use on layer one.
They don't even need any extra plugins or to link to any external compilers.
They can just reuse whatever is there.
So in the long term, I think it's the more secure way, because, firstly,
the EVM has stood the test of time.
Like, the design philosophy of each opcode, this stack-based virtual machine,
its security has stood the test of time.
If we just reuse everything from the EVM, it's very easy to audit our code,
because, you know, our execution environment
will make sure that the circuit behaves exactly the same as the EVM.
Like, if you can make mistakes there, then you can make mistakes here,
but if it's correct there, then it's correct here, because they behave exactly the same.
I think that's important in the long term, why remaining EVM equivalent is important.
And also, secondly, as I mentioned, if you are more aligned with this implementation,
then for further EVM upgrades, where people are doing
experiments on Geth for EIPs, for different improvements, you can directly use the same
codebase to improve your layer two design and also even sometimes give back. I think that's part of
the motivation why Optimism is going with that approach. So that's, you know, our design
philosophy for this part. And again, we are actually standing on the shoulders of
giants, because when we appeared, the breakthroughs had already happened; we knew the overhead was
now affordable. So that's why we went directly for this bytecode-level compatible
approach, instead of building some other virtual machine to make the proving overhead more
manageable. I thought so. Makes sense. Yeah. Let's hear from you, Alex.
Sure. So we are also on a mission to make the developer experience as easy and pleasant
for developers as we possibly can; it's an absolute top priority. And it's important
to see what the actual developer experience is, like what you expect as a developer
from the environment. What you expect is that you don't have to modify your code and you can
use the existing tooling. So you take whatever you have written, contracts and front end
and tests, test suites, et cetera, and you port it and it just works. Just works, like a one-click,
you know, Stripe-like experience where you don't have to do any extra movements. That is exactly what
we're going to bring. We're not going to bring it at the
opcode level, but we're going to bring it at all the interfaces.
The code does not need to change. You do not
need to do any re-audits. Your Web3 API works
out of the box. All the coding conventions work. All the
tooling on chain works. We have The Graph, Chainlink,
a lot of other projects already integrating, already running
on our testnet, ready to provide this experience.
So the things that will not work, because the opcodes are different, are, like, low-level debuggers,
but those tools can just integrate.
We can work with them.
There are not many of them.
We can work with them and make them compatible.
And then you use the same tools.
You have your Tenderly experience.
You have your Remix.
You have your command-line-based debuggers.
And they provide the same output, the same experience to you.
Like, you as a developer do not need to do anything.
And, like, the surface of changes is really minimal.
But what is ultimately important to you as a developer is that your code works smoothly
and everything is fast.
So one way to think about zkEVM equivalence versus EVM compatibility is: what would
be the analog of these things in the real world, in normal computing?
Imagine you have some piece of software written for one operating system, for a specific architecture.
I don't know, like Photoshop running on Windows.
Now you want to run this Photoshop on a Mac with an M1 processor.
You have two options.
Either you recompile the code for M1, and it runs super fast and you like the experience.
Or you run a Windows simulator on your Mac, and then you run this Photoshop in the simulator.
You can already sense it: if you run some programs written for the older versions of Mac
and then you try to run them on M1 without optimizations, you already feel the difference, right?
It's huge.
But if you run it in an emulator, it's going to be a lot, a lot slower.
So your developer experience is tightly coupled with your user experience as well.
That is what matters.
Very, very interesting.
Yeah.
I want to take a careful swat here, David, as your co-host, because I'm very opinionated.
As the folks who coined the term EVM equivalence, like, I feel like,
I feel like I'm in an interesting position here.
I'll just say I'm very curious to see how it plays out.
I definitely very much vibe with many of the things that you said there, Alex.
Some of them I became less of a fan of once I had to run a non-EVM equivalent system practice.
You guys are also building in a different space, right?
You've got these ZK requirements.
It's a little bit of a different world.
So, yeah, it'll be very interesting, David, to see how this plays out.
I'm a huge fan of developer experience, and I think that EVM equivalence is the way to go there.
but we'll see.
Alex, any comments on that?
Or not?
I'm really happy that we have multiple approaches competing.
We have our bet, like, based on our analysis,
what's going to be the optimal for developers at the end?
But it's really great that we have this race
and let the thousand flowers blossom.
Yeah, any comment you want to add
before I turn the conversation to something else?
Yeah, yeah. I think it will just be, you know, win-win eventually, like, all good for Ethereum, different tries, like, you know, EVM equivalence or this compiler-based one.
But I think, you know, in the long term,
if you have an extra need for a compiler, you have to also upgrade, like, you know,
both your compiler and your circuit at the same time,
which actually adds to the overall complexity, because, you know,
those two parts are coupled together and you need to upgrade both with each upgrade.
But for our circuits, all the existing compiler stuff, you know, can be reused.
And that has stood the test of, you know, years of time.
So that's why, like, you know, we want to handle the more complex stuff on the circuit
side, but we don't need to handle anything besides that.
Oh, no.
Actually, yeah, I want to hear from Alex on that.
Yeah, that's a really interesting point.
I wanted to add that indeed, like the, what we're building here is the future.
And like, let's imagine, is EVM going to be ossified and be this standard that will be there
forever and like will not undergo any modifications?
Like, imagine that we have the very first version of the PC with, like, the
8086 architecture; is this thing going to be immutable and never change?
That is hardly imaginable, right?
So we will have some evolution, in my opinion.
You feel free to disagree.
But with the compiler, one thing that
a separate compiler, specifically an LLVM-based compiler,
gives you, is an option to have developer experience
beyond just the EVM.
You can port in any code written in
all the modern languages that support LLVM,
like Rust, Golang, Python, you know, C++, whatever.
You take that, you compile it, even JavaScript.
You compile it to LLVM, and then you can use it as libraries,
or we can also define some contract interfaces
and you eventually can write smart contracts in those languages.
And I think that is going to be incredibly powerful,
because it just opens up these massive code bases,
already written in these very expressive languages,
with generics, with all the
cool stuff that is not yet possible in Solidity, and that have stood the test of time for many
years before Solidity was even created.
So I find that really really fascinating.
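To ground what Alex describes, here is a rough sketch, assuming a hypothetical Rust-to-zkVM pipeline rather than zkSync's actual toolchain (the types and names below are invented for illustration). The point is only that ordinary Rust, which rustc can already lower to LLVM IR today (e.g. with `rustc --emit=llvm-ir`), could express contract-style logic that an LLVM-based contract compiler might then target at a zkVM; the deployment step itself is not shown.

```rust
// Hypothetical illustration only: not zkSync's API. Plain Rust modeling a
// token contract's state and methods; an LLVM-based pipeline could in
// principle compile code like this toward a zkVM target.
use std::collections::HashMap;

/// Toy token: address (20 bytes) -> balance.
struct Token {
    balances: HashMap<[u8; 20], u128>,
}

impl Token {
    fn new() -> Self {
        Token { balances: HashMap::new() }
    }

    fn mint(&mut self, to: [u8; 20], amount: u128) {
        *self.balances.entry(to).or_insert(0) += amount;
    }

    /// Returns an error instead of panicking, the way a contract would revert.
    fn transfer(&mut self, from: [u8; 20], to: [u8; 20], amount: u128) -> Result<(), &'static str> {
        let from_balance = self.balances.get(&from).copied().unwrap_or(0);
        if from_balance < amount {
            return Err("insufficient balance");
        }
        self.balances.insert(from, from_balance - amount);
        *self.balances.entry(to).or_insert(0) += amount;
        Ok(())
    }
}

fn main() {
    let (alice, bob) = ([0xaa; 20], [0xbb; 20]);
    let mut token = Token::new();
    token.mint(alice, 1_000);
    token.transfer(alice, bob, 250).expect("transfer should succeed");
    println!("bob's balance: {}", token.balances[&bob]); // 250
}
```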
Okay, wait, Alex, I've got to dive into this more, though.
So, okay, first of all, totally agree.
If the listener has not heard of LLVM, it is one of the modern marvels of computer engineering,
absolutely fascinating.
Go check it out.
With that being said, there's a key question here that I don't know if I heard an answer to,
which I want to dive into, Alex, which is: since you have developed a VM, this zkVM, that is related to the EVM and maybe has some of the same semantics but is different, how do you deal with upgrades, right?
So, like, I totally agree we're going to, like, have upgrades to the L2 VMs.
In this, are you basically, like, is there implicitly a commitment to this particular zkVM, the one that your current compiler outputs?
and is it your goal to maintain that going forward
and just build more solid circuits around that?
Or do you have the ability to improve that VM
in the same capacity?
That is a great question.
Yeah, I think we will have iterations.
So in the initial versions of all the roll-ups,
we'll have to work a lot on upgrades and do changes.
It will be important to work with the source code
and not with bytecode.
I don't think we will have, like, an ossified bytecode
any time soon.
So eventually we'll have a new version,
and yes, you will need to redeploy your contracts there.
Okay, I see.
So you're basically going to enforce a requirement
that you have source code accessible
so that if you upgrade the underlying ZKVM,
that is not EVM, you can recompile and regenerate the bytecode for it.
That would not be a hard requirement.
So, like, whatever you launch, once we have a stable mainnet,
we will guarantee that this mainnet will run for a long time,
and then most likely you will have a, like, well, we'll need to see.
I don't know exactly how this process will work, but, like, yes, I believe that there will be progress in both the EVM world and zkVMs,
and eventually we will converge more and more with the world of generic computing.
We will reuse the tools like LLVM, static analysis, all the debuggers that the traditional systems can bring,
and I just think this process should be gradual and smooth.
It should not be like a breaking change where you have to stop immediately
and you cannot support your previous code.
You can always emulate.
Like, it's quite easy for us to add bytecode compatibility in zkSync.
Now, like, imagine: we have the LLVM compiler.
We can write code in Solidity and Rust,
and it compiles to this very, very efficient low-level virtual machine.
We can just write a contract that executes native EVM bytecodes.
We write it in Solidity, or maybe we just compile our Rust implementation and we put it on the zkVM.
That will give you the same overhead or maybe even slightly lower overhead than what Scroll
or Hermez are building.
But it's a gradual process.
You don't have to wait for this final step.
You can already start porting your contracts written in Solidity to zkSync, because you just
recompile them. The moment we have this contract live, you can just start porting your bytecode,
if you're willing to take this massive overhead into account. If you want full efficiency,
you just compile natively to the zkVM.
Fascinating.
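A toy illustration of the emulation path Alex is describing, not any team's actual code: a contract running on the fast VM could itself be an interpreter that walks EVM-style bytecode, and the loop below sketches that for a three-opcode subset. Interpreting every opcode this way, instead of compiling natively, is exactly where the extra overhead he mentions comes from.

```rust
// Minimal EVM-flavored interpreter: three opcodes only, no gas, memory,
// storage, or 256-bit words, so purely a sketch of the emulation idea.
const OP_STOP: u8 = 0x00;
const OP_ADD: u8 = 0x01;
const OP_PUSH1: u8 = 0x60;

fn interpret(code: &[u8]) -> Option<u64> {
    let mut stack: Vec<u64> = Vec::new();
    let mut pc = 0usize;
    while pc < code.len() {
        match code[pc] {
            OP_PUSH1 => {
                // Push the immediate byte that follows the opcode.
                stack.push(*code.get(pc + 1)? as u64);
                pc += 2;
            }
            OP_ADD => {
                let (a, b) = (stack.pop()?, stack.pop()?);
                stack.push(a.wrapping_add(b));
                pc += 1;
            }
            OP_STOP => break,
            _ => return None, // unsupported opcode in this toy
        }
    }
    stack.pop()
}

fn main() {
    // PUSH1 2, PUSH1 40, ADD, STOP  =>  42
    let code = [OP_PUSH1, 2, OP_PUSH1, 40, OP_ADD, OP_STOP];
    println!("result: {:?}", interpret(&code)); // Some(42)
}
```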
Yeah, can I add more comments on this? Like, you know, I think there are two points.
Firstly, I agree that EVM may not be the end goal in maybe 100 years.
Like, we won't always stick to the EVM model.
But the reason why we firstly go with this EVM-equivalent approach is that we know that
the urgency for Ethereum is scaling, right?
It's not, you know, finding some new computational model as an extension, but it's more
about migrating all the existing apps securely to a layer two so that the congestion problem
can be solved.
So that's why we want to provide the
EVM-equivalent environment in the first place, because, you know, you have to deal with those
problems first, and then you can think of the further problem of a new VM or new features.
So that's one thing, like, you know, that's just our starting point, and we will also definitely
consider more, like, developer-friendly, you know, VMs or something like that. And the second thing
is that, to build this, there are some arguments around, like, EVM versus a new VM, but
I think most likely either
you build a totally new virtual machine,
or you just reuse this EVM and keep true equivalence.
Because if you modify part of it, it's nearly a new VM,
but you can't fully benefit from a totally new design of the virtual machine.
So that's why we are thinking of two approaches.
One is adding more features to this EVM.
For example, we can upgrade according to our community's design,
like some new precompiles specific to our layer two,
while using the existing EVM structure securely.
And secondly, in parallel,
we can explore some more efficient VM to open this design space for more developers.
So that's something we are also exploring, but we believe that that's also driven by the
developers' needs, instead of, you know, just reusing the compiler and just reusing the same virtual
machine, because it's fundamentally different. I think the last point is that using
LLVM sounds very ideal, like, to support all the programming languages. But, you know, if you dive deeper
into LLVM for very high-level languages, like Rust, Python,
those Turing-complete languages with many features,
you will find that the LLVM IR in the middle layer
is very complicated.
It's even much more complicated than building the zkEVM,
like adding support for all those opcodes,
because they have very complicated types,
they have very advanced features.
If you want to support all of them,
you need to support a very complicated IR layer.
It's not just Solidity or anything like that, you know.
Yeah, so I think, you know, even if you have this LLVM IR support,
it still, like, you know, takes a very long way to go
to support all the features of Rust.
But if you only support a subset of Rust's semantics,
then I think it's still less useful for developers
because they still need to change a lot of code.
And so that's maybe three points from our perspective.
So we're also exploring, but that's why we went this way as our first step.
I'll take that challenge.
Let's see how fast we can support Rust.
Yeah, yeah, yeah.
Looking forward.
Awesome, guys.
Well, like I said earlier, I really like this metaphor.
I feel like a dog trying to drive
a car. So I'm going to zoom out this conversation and get to something I think the users can
understand a little bit better. And every single layer two team has their sort of, like,
vibe, if you will. It's their culture, it's their branding. And that's often really how
users ultimately come to like determine whether they feel comfortable with a particular
ecosystem or not. It's like what are the values or the ethos that each team appears to exude,
even though they can't really comprehend
some of the very technical words that are being said.
So I'd just like to get each of your perspectives
as to, like, do you guys think about values
or culture or vibes in your guys' internal,
like, communication as to how to build something?
And maybe you can share that perspective
with the broader world.
Like, what is the vibe of your particular project?
What's the, if you guys have, like, an ethos
that you stand on? And Ye, I'll start with you.
Sure, sure.
Yeah, that's actually a very
important aspect of Scroll.
And you can check out more where we have posted
articles talking about our vision,
our values, and the technical principles
we want to uphold when we are designing our whole architecture.
But from the ethos and, you know, like, the vision side,
we are more open.
We are open, we have been building the open-source way
from day one.
Like, you know, the totally open-source zkEVM circuits,
where anyone can run them at any time and just, you know,
even PR some code.
And most parts of the zkEVM circuits are
actually co-built in collaboration with the Privacy and Scaling Explorations team at the Ethereum Foundation
under a permissive license.
So we are actually co-building this together.
Like, a lot of the credit actually also comes from this community.
And because we have this permissive license,
which means anyone can use this repo and build things on top of that.
And also we encourage the broader community to do so, because we firmly believe that, you know,
building in this open-source way leads to more
secure and resilient code and helps us to foster a broader community of developers, and they can
check our progress in a very transparent way, because, you know, there are some overclaims out there,
right? They can just directly fact-check our claims of, for example, whether we have
proofs or not. And in that perspective, we hold ourselves, you know, to a very high standard
with all the claims we make. For example, we actually have live ZK proofs in our
testnet now that can be checked,
and we are focusing on building and shipping new features on our testnet.
And we are not doing endless PR, but instead we are writing articles explaining what we are building and what our architecture looks like,
and putting out more educational articles, which is beneficial for the whole space.
It's not directly, you know, like, pointing fingers at each other, but more about educating people why this is important and what we are building.
And I'm glad that, you know, even if we are still in a relatively stealth
mode, there is still a circle of Ethereum and ZK researchers who have recognized our work and
given us credit where it's due. And we are actually finally ready to welcome a much broader
community. And I'm happy to see that we have received over, like, 25K signups within two days
of our pre-alpha testnet announcement. And if you want to be in the first batch of users, just sign up at
signup.scroll.io. Yeah. Beautiful. Thanks, Ye. And Alex, I'll ask the same question to you.
What's the overall value or vibe of zkSync?
So zkSync is a deeply mission-driven project.
So this mission that I mentioned in the beginning
to accelerate the adoption of crypto for self-sovereignty
is hugely important.
We've written a lot about this.
We have a team handbook that walks the new team members
through the values.
We are extremely aligned with Ethereum
on the approach towards those goals.
And every technical decision we make
is matched, like, it is balanced against the principles of freedom, resilience, inclusivity,
like from escape hatches to the way we decentralize the prover, to the way we approach
the sequencer, to the way we approach the standardization of the code bases.
We firmly believe in open source.
Everything we do is going to be released under a permissive free software
license, just like we did for version one. With version one, we had some interesting experiences
that made us reconsider the approach of complete openness. We were leading
the protocol development completely in the open. And then there were people who just forked the
code, made modifications that they did not understand, like they copied parts,
they forked off some bugs, and they also changed things that led to more problems. And
they tried to front-run us with regard to the token.
So there are powerful incentives for people to just rush with some unready code and try to publish it.
That's why we're taking a more conservative approach now.
So we're opening everything to independent researchers.
Actually, next week we will announce some people we've opened it to who are highly credible in the space.
And then we're going to gradually open it to more and more and more until we have all the audits
and all the testimony from the white hats. Once we feel comfortable that the code is safe,
then we're going to open source.
Beautiful.
Nice.
Thank you too.
Yeah.
Kumbaya.
Crypto culture, baby.
Okay, Alex, you did just raise something that I really do want to get in before the end of this panel,
which is a question for both of you.
Okay.
All of the ZK, I mean, realistically, almost all of the, pretty much all of the layer two teams out there right now have upgrade keys.
There's some set of small number of people that own a multi-sig that can be used to upgrade the system.
And that includes upgrading it to something that is malicious and takes all of the money from the system.
This is obviously not ideal.
I don't want to rehash why that's necessary because I think we all agree that it is and we understand that it needs to happen.
But my question is, at what point can we cast those upgrade keys into the fires of Mount Doom?
From both of your perspectives, what is the point in time at which, like, obviously it makes sense that for you to launch something now, you can have some level of, you know, trust that it's at some level of productionization.
But it's a big shift to say, okay, we're throwing away our upgrade keys.
If someone comes to us with a bug in the future, we cannot solve that bug.
So I'm really curious, from your guys' perspective, like, what is the timeline and what, concretely, are the criteria that you think have to be fulfilled to
be able to turn off your upgrade keys.
And Alex, well, I'll start with you again.
Sure.
This is a huge problem.
We're thinking a lot about it. I published a tweet where we offered a bounty for the
best design solution that can help solve this problem.
And for broader context, the problem here is very different in the L2 space compared
to L1s.
Because if you have a problem with an L1, you can always fork away.
The decision to fork is ultimately with the people who run the nodes.
So you never depend on any single majority.
And that is the superpower of public blockchains, like truly decentralized blockchains,
like Bitcoin and Ethereum.
And I can't really think of any other ones that fall in this category.
But you don't get this at layer 2s.
Because at layer 2s, you have all the funds locked into this one contract on layer 1
that someone needs to control.
Someone needs to, like, this contract must objectively know
what code is now canonical.
Like, what shall we execute?
The solution we came up with was: we have a team multisig
that can initiate upgrades,
and those upgrades are subject to a time-lock period of multiple weeks,
and then all the users who disagree with those upgrades can exit,
and we have permissionless mechanisms for exit,
like escape hatches, like forced block proving, etc.
But if there is a bug and we really need to accelerate,
we need to act now and just fix an immediate problem,
we must go and reach out to an external group of people
who we call a security council.
Those are highly prominent people from the Ethereum community,
very rich and famous, so it's very unlikely that they will all collude
to try to steal these funds.
And they must approve an immediate upgrade.
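As a sketch only (zkSync's actual contracts are Solidity and are not shown here; the constants, threshold, and names below are invented for illustration), the scheme Alex outlines can be modeled as a small state machine: an upgrade proposed by the team multisig becomes executable either after a multi-week timelock, during which users can exit through the permissionless mechanisms, or immediately once a quorum of security-council members approves it. The exit and escape-hatch machinery itself is not modeled.

```rust
// Conceptual model of "timelocked upgrades with an emergency council path".
// All numbers are made up; the real logic would live in L1 contracts.
const TIMELOCK_SECS: u64 = 14 * 24 * 60 * 60; // e.g. a two-week exit window
const COUNCIL_THRESHOLD: usize = 7;           // hypothetical quorum size

struct PendingUpgrade {
    scheduled_at: u64,                    // when the team multisig proposed it
    council_approvals: Vec<&'static str>, // council members approving an emergency fix
}

impl PendingUpgrade {
    /// Normal path: executable once the timelock has fully elapsed.
    fn executable_after_timelock(&self, now: u64) -> bool {
        now >= self.scheduled_at + TIMELOCK_SECS
    }

    /// Emergency path: enough distinct council members signed off, so the delay is skipped.
    fn executable_by_council(&self) -> bool {
        let mut approvers = self.council_approvals.clone();
        approvers.sort();
        approvers.dedup();
        approvers.len() >= COUNCIL_THRESHOLD
    }

    fn can_execute(&self, now: u64) -> bool {
        self.executable_after_timelock(now) || self.executable_by_council()
    }
}

fn main() {
    let upgrade = PendingUpgrade {
        scheduled_at: 1_700_000_000,
        council_approvals: vec!["a", "b", "c"], // below the emergency quorum
    };
    // One day after scheduling: neither path is satisfied, so users still have time to exit.
    println!("{}", upgrade.can_execute(1_700_000_000 + 86_400)); // false
}
```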
Now, this is not a perfect world, because we don't want to expose those people to
political struggles, you know, to some, like, non-monetary incentives that might force
them to do things.
Ultimately, what we can do is have multiple layers of protection in our systems,
so that all of the checks must pass before something happens.
So the simple example would be if we just have a second factor:
we have a rollup running, like a ZK rollup,
and then we have a number of validators appointed by the users
who have to approve transactions.
And then you would have to both break the consensus of these validators,
or corrupt the stake, and find a problem in the ZK circuits, to try to exploit it.
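A minimal sketch of that "second factor" idea, with the proof and signature checks stubbed out as booleans rather than real SNARK or signature verification: a batch finalizes only when both independent checks pass, so an attacker would need a circuit bug and a corrupted validator quorum at the same time.

```rust
// Stub model of dual verification: validity proof AND validator quorum.
struct Batch {
    proof_ok: bool,              // stand-in for "ZK proof verified against the circuit"
    validator_signatures: usize, // stand-in for counted, deduplicated approvals
}

const VALIDATOR_QUORUM: usize = 5; // hypothetical threshold

fn finalize(batch: &Batch) -> Result<(), &'static str> {
    if !batch.proof_ok {
        return Err("validity proof failed");
    }
    if batch.validator_signatures < VALIDATOR_QUORUM {
        return Err("validator quorum not reached");
    }
    Ok(()) // both independent checks passed
}

fn main() {
    let batch = Batch { proof_ok: true, validator_signatures: 6 };
    println!("{:?}", finalize(&batch)); // Ok(())
}
```

The flip side, which Alex raises next, is liveness: if that validator set stalls or is compromised, even batches with valid proofs stop finalizing.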
But that could compromise the liveness of the system if the proof of stake, or like this
validator set, is compromised. So another option would be to have multiple implementations of
rollups, like maybe two ZK rollups, or maybe an optimistic plus a ZK rollup; we combine those
together and we put them on chain, and then we use governance only if they
are disagreeing. This is something that Vitalik posted last weekend on Twitter, which is a very
interesting idea. But ultimately, we just need to wait for those systems to become mature.
Like, if something is running for a few years with billions of dollars worth at stake,
and nothing has happened and those funds were not stolen, and you had honeypots running in
the open with a much lower barrier, a much lower threshold of
capability, and those are also safe, and those have millions of dollars staked in them that the attacker could just grab if they found them much, much easier
to penetrate, then I think we can say, okay, those systems are plausibly secure and we can
rely on them and we can remove the admin key. This is roughly how I think about it. Maybe there are some
ways where we could rely on the governance of layer one.
Maybe we could declare some rollups as, like, really important,
like systemically important for Ethereum.
And we can say, like, if something goes wrong with
those systems, then we would just rely on the votes of the general
Ethereum community.
And actually, we don't need to declare them anything special.
We just need a governance mechanism that can rely on this external voting power of
the broader Ethereum community,
something like this.
So all ideas are really appreciated.
Ye, you want to take the same question,
just the overall security of Scroll
and your guys' thoughts and plans around it?
Yeah, yeah, sure.
I think we do have, like, security plans,
and security is definitely, you know,
the first priority for Scroll
because, you know, stakes are locked in the smart contract.
And for us, we do have a security plan.
And there is no antidote like repeated
and more thorough auditing.
But besides that,
we are going to have an in-house security team.
And the team will keep an eye on our code all the time
and also collaborate with external auditors for safety as well.
And also, like, it's safe for now because all the transactions
will be executed using this, you know, existing client implementation.
We don't even implement a new ZK executor to execute our transactions.
So it's very hard for any attacker to attack our system,
since they can't run this sequencer themselves.
That's the first point.
Secondly, it is an existing implementation,
and also they have no chance to generate a fake proof for a fake trace.
That's one aspect.
And secondly, as I mentioned, like, there are definitely trade-offs between open sourcing
very early or very late.
Scroll is built on an entirely open-source foundation,
even including the proving stack of Halo 2, which is, you know,
like, many eyes are on that, including Zcash,
and, for example, like, community developers from their side,
and even Filecoin, who want to reuse the same, like, proving stack,
and even, like, you know, sometimes reuse the zkEVM codebase.
So more broadly, we believe that using these community standards
will be the most robust way to write secure code
and secure our whole codebase and
the security of, like, all our systems.
And for the upgrade case, I think we will implement a sufficiently
robust system before the sequencer is decentralized,
like using some time delay,
which makes sure that users will have enough time,
you know, before this smart contract upgrade happens.
And in the long run, we will progressively
add this decentralization and, like,
become more permissionless.
But it's in the long run, because, you know,
decentralization is at different levels:
like, first we decentralize the prover and then we consider the overall system.
Well, gentlemen, thank you so much for all of your time.
I know we've gone a little bit over.
Ben, do you feel like you've got all of your questions answered?
Oh, my goodness.
I had all the questions answered.
They spawn more as they always do, David.
But it feels like as good a place to pause as any.
Well, I think the story of the zkEVMs is ongoing; we're just at the very, very beginning.
So there will be plenty of zkEVM content as the story progresses.
So Alex, Ye, thank you so much for joining me.
And also Ben, my technical co-moderator here, for helping us unpack the very start
of this very long story of the zkEVM.
So thank you, too.
Thank you, David, Ben, and Ye.
Thanks for hosting us.
Thanks, Alex.
Of course.
And see...
Go for it, Alex.
Yeah.
Just want to say, see you guys on mainnet in 87 days.
Oh, yeah.
Absolutely, absolutely.
Well, actually, before I sign off,
can we do a speed run through the roadmap
for each of you?
Alex, mainnet in 87 days.
Is there anything else about your guys' roadmap
that you wanted to talk about?
We're now completely focused on launching mainnet.
Testnet is up.
If you want to be one of the first projects launching on mainnet,
and we're going to follow the fair launch policy,
you should get on our testnet now and start building.
And for the next features that are coming,
there are some really interesting things.
And I can't talk about them yet,
but I will just say that layer three is a lot closer than many people think.
Well, I can see that very slight smile on your face, so that's getting me excited there, Alex.
Ye, what about you and Scroll?
What's the Scroll team's high-level roadmap?
Can you speed run us through that?
Yeah, yeah, sure.
I think, for our release philosophy, we are progressively releasing more functionality to testnet
so we can fix any bugs and any UX difficulties early and often, towards a more robust,
like, infrastructure that stands the test of time.
So currently we are at the stage of a pre-alpha testnet.
It's running internally right now with real live ZK proofs.
And we pre-deploy some applications, like a swap, for users to interact with.
They can see their transactions being processed on layer two
and then finalized on layer one with a proof, through an explorer.
And if you want to be in the first batch of users,
again, like, sign up for our testnet at signup.scroll.io.
And the next step will be a more permissionless alpha testnet
where developers can deploy their smart contracts
and anyone can interact with applications on Scroll;
it doesn't need any sign-up.
You can directly use your MetaMask to interact with Scroll
using any interface you like and are familiar with.
And we are testing our functionalities for now,
and it will be released soon.
And also, like, in a further release,
anyone will be able to run the provers at home
to provide computation power for us.
And yeah, so that's roughly our plan.
Yeah, will there be a scroll token?
Yeah, that's a good question.
So we are focused on building, and we are thinking on a long-term scale, and want to be extremely thoughtful about, you know, how to foster a long-term community of users and developers.
And I think, you know, we can learn a lot of lessons from Optimism and Polygon, which are the only two layer twos which, you know, have already launched their tokens.
But currently, we are still focused on building the best solution technically.
Yeah.
Okay.
And then Alex, same question to you.
Is there going to be a zkSync token?
There might be an interesting token indeed.
I had an idea.
Awesome.
Thank you guys so much for joining me.
Risk and disclaimers, of course.
Crypto is risky.
Eth is risky.
Bitcoin is risky.
Layer 2 is risky.
We didn't get to the conversation of bridges,
but bridges are also risky.
But they're less risky if you go to a cryptographic bridge
rather than a cross-chain bridge.
But you can still lose what you put in.
We are heading west.
We're on the frontier.
It's not for everyone.
But we are glad you are with us.
On the bankless journey.
Thanks a lot.
Thank you.
