Epicenter - Learn about Crypto, Blockchain, Ethereum, Bitcoin and Distributed Technologies - Ye Zhang: Scroll - EVM-compatible ZK rollup
Episode Date: November 18, 2023

"The advancements of zero-knowledge proving technology, hardware acceleration and proof recursion have significantly increased the efficiency of EVM-compatible zk provers. As a result, the former trade-off in efficiency implied by EVM compatibility is gradually improving. Translating Ethereum's virtual machine bytecode ensures seamless scaling for applications, without the need for additional audits and security compromises. We were joined by Ye Zhang, co-founder of Scroll, to discuss the EVM-compatible zero-knowledge landscape and how Scroll aims to stay true to Ethereum's values and community."

Topics covered in this episode:
- Ye's background in zk research
- Hardware limitations for zk tech
- Founding Scroll
- Why Scroll went for EVM compatibility
- Efficiency of EVM- vs. non-EVM-compatible zk provers
- The timeline of ideating Scroll
- Scroll mainnet launch
- Building the Scroll community
- Scroll token
- Sequencer decentralisation
- Data availability, validiums and security tradeoffs
- L2 bridges and interoperability
- Roadmap

Episode links:
- Ye Zhang on Twitter
- Scroll on Twitter

Sponsors:
- dYdX Foundation: The recently launched dYdX chain features new governance and token economics that empower stakers and promote validator decentralisation. Bridge your DYDX tokens and contribute to the evolution of dYdX chain, fully permissionless and community driven. - https://bit.ly/47kqG59

This episode is hosted by Friederike Ernst. Show notes and listening options: epicenter.tv/522
Transcript
This is Epicenter, Episode 522 with Ye Zhang, co-founder of Scroll.
Introducing the next generation of DYDX and the next version of the DYDX token.
Welcome to the DYDX chain. New token mechanics mean you can stake to secure the network.
Staking is fully decentralized and controlled by DYDX token holders. All fees are distributed to stakers.
Earn rewards from using the dYdX protocol, with rewards planned for traders and early adopters.
new governance means you are in control.
Trading has been democratized.
You can vote on protocol improvements, token distributions and more.
Bridge your DYDX to seamlessly transition to the dYdX chain.
Bridge now at bridge.DX.
Trade and contribute to the evolution of the dYdX chain,
open source and community driven.
Run your own validator.
Validating is fully permissionless.
Join us on our mission to democratize access to financial opportunity today.
Welcome to Epicenter, the show which talks about the technologies,
projects, and people driving decentralization and the blockchain revolution.
I'm Friederike Ernst, and today I'm speaking with Ye Zhang from Scroll, the zkEVM rollup on top of Ethereum.
Ye, thank you so much for joining us.
Hi, thank you for hosting. Nice to meet again.
Cool. We first met at EDCON this year, so probably about six months ago,
and had a really good talk about the L2 and L1 landscapes and so on.
So we'll definitely get into that a bit later. But for everyone who doesn't know you,
tell us a bit about your background.
I'm also glad to meet you. Your background is very impressive as well.
I remember our long conversation debating layer 2 and layer 1.
Okay, sure.
So hi, everyone. My name is Ye. I'm the co-founder of Scroll.
My background is more academic. I worked on ZK and crypto before.
For three years before Scroll, I was working purely on zero-knowledge proof research.
I worked on hardware for zero-knowledge proofs, because five years ago that was the biggest bottleneck for using zero-knowledge
proofs in practice: proof generation was just so slow.
It took several minutes or several hours to generate a proof for any program.
So my first task was to use hardware, like GPUs and ASICs, to make proof generation faster, to solve this efficiency problem.
Later I worked more on the theoretical side, looking into how this kind of magic works: how you generate a proof, how you compress a large program into a very succinct proof.
More like theoretical constructions.
And later, because the most fundamental efficiency problem had been improved by orders of magnitude,
I moved more to the application side: how we can use this magic technology to scale, for privacy, and for use cases.
So my background is more on the ZK research side: hardware acceleration for ZK,
the theoretical constructions behind ZK, and ZK applications.
At Scroll, I mainly work on ZK research and some strategy work to bridge between non-tech and tech.
That's about me.
Tell us about the hardware limitations
that we currently face for ZK technology.
Why does it place so big a burden on regular CPUs,
and why did you resort to GPUs?
Yeah, that's a great question.
I think, just for high-level context:
the whole point of a proof is that you have a program, but it's too large for the verifier to execute.
For example, I'm on a phone, on a device where I can't execute the program myself.
So there is a prover which generates a proof that it executed the program correctly.
Here is the result,
here is the proof, and you only need to verify the proof, very efficiently.
So in this case the prover is a powerful machine which can execute the program and generate a proof,
to save the verifier the effort of re-executing: without re-executing, you know the result is correct.
But the magic comes at a trade-off, because you can't just magically remove the cost.
The cost is that the prover needs to generate the proof at a much larger cost.
Imagine the program execution takes, for example, one second;
then the proof generation might take hours,
which is a thousand-fold or even larger overhead to generate the proof.
So initially the prover only needs to run the program very quickly,
but generating the proof means paying a thousand-times overhead, just to save
the verifier's effort.
And the computation for generating proofs involves a lot of operations on an elliptic curve.
It's based on something called probabilistically checkable proofs,
and it maps into a lot of operations on an elliptic curve,
which involve large finite field operations, like modular arithmetic on 256-bit large integers,
and that's very, very expensive.
But luckily, it's very, very easy to parallelize, and that's why
using GPUs, FPGAs and ASICs can make this orders of magnitude faster.
We were among the first to tackle this problem from a more academic perspective
and decompose proving into several components.
Some components are elliptic curve operations,
and some are FFTs or finite field operations,
and we can make both parts significantly faster,
so the end-to-end proof generation can be
orders of magnitude faster.
Then, practically, you just run this proof generation on hardware,
and everything gets really fast.
And currently, the state of the art is that a lot of
companies are building different solutions.
Some are more ASIC-driven, like Cysic;
they lean towards ASIC solutions,
which are highly customized and super fast.
But a lot of open-source implementations,
from Supranational and from a lot of other teams,
are more GPU-based,
and those can also achieve a very, very significant speedup,
like 10 times or even 100 times,
compared to a CPU.
So the magic is in the parallelization.
Yes, yes, exactly.
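The structure Ye describes, many independent finite-field operations followed by a cheap combine, can be sketched in a few lines of Python. This is an illustrative toy, not Scroll's prover: the prime, the pairs, and the thread pool are stand-ins for real elliptic-curve arithmetic running on GPU cores.

```python
# Sketch: why ZK proving parallelizes well. The dominant cost is many
# independent modular operations over a large prime field; each chunk can
# run on its own core or GPU thread, and the combine step is cheap.
from multiprocessing.dummy import Pool  # thread pool, illustrative only

P = 2**255 - 19  # an illustrative large prime; real provers use curve-specific fields

def partial_sum(pairs):
    # Each worker accumulates scalar*element products independently.
    # (A real MSM multiplies curve points; plain field mults stand in here.)
    acc = 0
    for s, x in pairs:
        acc = (acc + s * x) % P
    return acc

def msm_like(pairs, workers=4):
    chunks = [pairs[i::workers] for i in range(workers)]
    with Pool(workers) as pool:
        partials = pool.map(partial_sum, chunks)
    return sum(partials) % P  # cheap sequential combine

pairs = [(i + 1, i * i + 7) for i in range(1000)]
assert msm_like(pairs) == partial_sum(pairs)  # parallel == sequential
```

The point of the sketch is only the shape of the computation: the expensive part splits into chunks with no dependencies between them, which is exactly what GPUs, FPGAs, and ASICs exploit.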
Perfect.
Cool.
So you guys started in 2021.
And when I say you guys, I mean you and your co-founders, Sandy and Haichen.
How did you three meet?
Yeah.
I think that's a very interesting story.
We actually first met online.
My background, as I mentioned: I was working in academia,
working on ZK, purely down that rabbit hole.
The other two co-founders are Sandy and Haichen.
We worked in three totally different, separate areas.
Sandy works more on the business side, the non-tech, ecosystem-building side,
and Haichen leads the engineering effort.
It's very interesting that we actually met through the Ethereum community.
Before Scroll, Sandy had been doing her own startup, doing regulation stuff,
and doing a lot of investment in crypto.
She is more like a builder in the whole ecosystem,
and she had been paying attention to what's happening in the
crypto world.
She noticed that there might be a huge opportunity in layer 2, because layer 2 will become
the entry point for billions of users to enter Ethereum, because Ethereum is just very expensive
for ordinary users to use.
So there is a huge opportunity to scale using layer 2.
And I was thinking about how to improve
prover efficiency, to scale
in a more general-purpose way.
And Haichen leads the engineering effort.
He has a PhD in systems from
the University of Washington.
He had been working on AI and building systems, and his systems are very
comprehensive.
He built systems spanning GPUs and compilers,
very complete systems, and turned them into products.
So it's like: one of us does research, has the
proof-of-concept ideas, the research and architecture ideas;
one can actually turn those ideas into an implementation,
into a product-level system;
and one knows how to scale, how to fit this technical component
into the whole
Ethereum scaling picture.
All three of us met through the Ethereum community.
I met Sandy through the Ethereum community;
we actually met through an Ethereum research forum,
where I posted some early ideas:
here is how you can scale Ethereum, here is how you can accelerate
the prover, and how you make that possible. And I met Haichen through
a common friend in the Ethereum community; he had also been doing
competitive programming and math computation. So that's how we connected.
It was very organic, and in
different ways: there's a research forum, there's community discussion.
And then we talked about this interesting idea and why it's possible now.
And then the three of us just met online and said, okay, let's work on this.
And then gradually it grew, totally unexpectedly.
But yeah, that's the story.
That's a super cool origin story, because it was purely online.
I think it's maybe also a COVID story, because it happened in 2021.
Yeah, that's definitely a big, yeah.
Yeah.
So real-life events were suspended for a while.
But to me it's really amazing, because co-founding with someone
is a very intimate relationship, right?
You basically have to work really well as a team.
And then finding that purely online, that is such a cool story.
How was it when you guys met for the first time?
I think we still shared some common friends,
but all the discussions happened in the research forum
or in community discussions around layer 2 scaling.
And we felt our values were pretty aligned.
We all wanted to scale Ethereum. I observed that there is a very vibrant
research community there,
and that's why I was attracted. There is a vibrant ecosystem
on Ethereum, and that's why Sandy was attracted. And Haichen thought:
this technology is so amazing,
I want to turn it into a system.
So there were still some common friends, but it was more a community
thing, and then people met and
got to know each other, yeah.
Yeah, but you guys have met in person, right?
Yes, yes.
It was only after. I think the earliest in-person meetup was at Devconnect in Amsterdam.
That's the first time we all met.
So that was 2022?
Yes, yes, I think so.
Cool.
What Scroll puts front and center is that it is EVM compatible.
What drove you to this decision?
Because, I mean, we recently spoke with a number of other ZK rollups,
and some of them also went this route, like Hermez
with Jordi, now Polygon zkEVM, but others haven't.
For instance, we
recently had on Aztec, and Eli from StarkWare,
and they were adamant that the efficiency losses
that you suffer from doing an opcode-by-opcode translation
are not worth it.
So how did you settle on this design choice?
Yeah, that's a great question.
So I think a big part of that is
thinking about your motivation from first principles:
why do you even want to build a layer 2 solution?
For me, we observed that Ethereum is congested
and very expensive, and applications on Ethereum want to find a place
that is very cheap and fast,
and secure, with the security from Ethereum.
So that's why we think
reducing the effort for applications to find such a solution
is the way to go.
So that's our first motivation:
why we want to build a very seamless
scaling solution where all the developers
and all the users can reuse everything they have
and migrate to Scroll in a seamless manner.
The users can reuse all their familiar
toolings, and developers can use Foundry and all those tools directly.
So that's the first intuition: we don't want anyone to change any of their
habits to get the benefits, which are faster and cheaper, with a faster
pre-confirmation time and high throughput. And the second thing is,
I think, the security perspective: the EVM not only has an ecosystem,
that I think it's actually from the security perspective where EVM has not only has an ecosystem,
but also has proven itself from years of time
where all the application deployed on this model
like there hasn't been any problem with that.
So that's why we think inheriting the security
from this kind of test of time model is very important.
Even like we are reusing to the code level
where we are trying to reuse the code,
the kind of node implementation from YSuan.
We are trying to use Go Isuan,
which is YSiam's client imitation to generate block
to generate our block to make sure that the behavior is exactly the same.
So using that, it's definitely increases the security
and also because developers don't need to change any line of their code.
So imagine like if every layer to require you to make a significant change
to our code, then like maybe largely jet application,
like Uniswap, Avey might worry that, you know,
why should I migrate to this chain with some risk in changing my code
and doing this re-audit?
So that's why we think this is very important.
And also, I think the EVM just has way more
tooling than any other VM has.
Even if you consider that
StarkWare has put a lot of effort into Cairo,
if you think about how many implementations,
how many tools you can use to deploy on Cairo,
it's very limited compared with the whole EVM ecosystem.
So that's another thing:
inherit the ecosystem, build your own network effect,
and only then can you think about
what things of actual value you can provide to the developers in your community.
And I think one last thing, which distinguishes us even
from several other language-compatible zk rollups,
is that we are bytecode-level compatible with the EVM.
That means that per opcode, we have a circuit mapping
to prove execution at the bytecode level,
and we reuse implementations like Geth to generate our blocks.
This guarantees that every time Ethereum does an upgrade,
it's very easy to apply it to our chain.
For example, there are EIPs, there are hard forks,
and it's easy for us to adopt those changes because we are EVM-equivalent.
But if you are working on a totally different path,
it's very hard to follow what the Ethereum layer 1 is doing
and adopt all the innovations from the Ethereum community.
There are so many innovations, like new precompiles, new discussions around Ethereum layer 1, and we can directly take all those innovations and apply them to layer 2.
Even infrastructure, like ERC-4337, PBS, all those great ideas around MEV, all that infrastructure to solve those problems, can be reused on Scroll directly.
So you can always stay up to date by being very Ethereum-aligned, and reuse all the innovations from the research community,
without too much fragmentation.
And eventually, I think
layer 2s,
and especially us,
might even go one step ahead of Ethereum
to test some EIPs
and adopt some innovations.
And then, if they prove to be successful,
maybe Ethereum has a larger possibility
of adopting and pushing those innovations.
So: always stay ahead,
always stay on top of these nice solutions,
and benefit the
whole Ethereum community.
So yeah, there are multiple reasons, but that's all I can think of.
The TL;DR is that the developer and user experiences are the same, the security model
is the same, and the ecosystem and tooling are more vibrant.
And the last thing is to always stay ahead on innovation and research, and
eventually benefit the whole Ethereum community.
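As a rough illustration of what "bytecode-level compatible" means, here is a toy interpreter over a few real EVM opcodes (PUSH1 = 0x60, ADD = 0x01, MUL = 0x02, STOP = 0x00). A zkEVM in the style Ye describes would pair each opcode with a circuit constraining the same state transition, so unmodified contract bytecode needs no recompilation. This is purely illustrative: it omits gas, memory, storage, and the 100+ other opcodes.

```python
# Sketch: interpreting raw EVM bytecode opcode by opcode. In a bytecode-level
# zkEVM, each opcode handled here would have a matching circuit proving the
# same stack transition, so deployed bytecode runs unchanged.
MOD = 2**256  # EVM words are 256-bit and wrap around

def run(code):
    stack, pc = [], 0
    while pc < len(code):
        op = code[pc]
        if op == 0x60:                 # PUSH1: push the next byte
            stack.append(code[pc + 1]); pc += 2
        elif op == 0x01:               # ADD (mod 2^256)
            a, b = stack.pop(), stack.pop(); stack.append((a + b) % MOD); pc += 1
        elif op == 0x02:               # MUL (mod 2^256)
            a, b = stack.pop(), stack.pop(); stack.append((a * b) % MOD); pc += 1
        elif op == 0x00:               # STOP
            break
        else:
            raise NotImplementedError(hex(op))
    return stack

# PUSH1 3, PUSH1 4, MUL, PUSH1 7, ADD  ->  3*4 + 7 = 19
print(run(bytes([0x60, 3, 0x60, 4, 0x02, 0x60, 7, 0x01])))  # [19]
```

Because the unit of compatibility is the opcode itself, a protocol upgrade that adds or reprices an opcode only requires adding or adjusting the corresponding handler (and circuit), which is the upgrade-tracking benefit Ye describes.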
And I hear that 100%, but can I just poke into this a bit
more? What would you say the efficiency gains you could get from
not using a zkEVM would be? And is there any way to make up the efficiency you're losing
with respect to other rollups that are using more optimized languages? Yeah, that's a great question.
So I think that's actually the biggest reason why
so many rollups chose other virtual machine models:
they used to think that
being bytecode-level compatible,
building such a zkEVM, would have a significantly larger overhead.
Maybe they imagined 10 times, maybe 100 times larger overhead
than their own ZK virtual machine.
But the reality is, I think they didn't expect,
and even we didn't expect, that the proving technology
would improve so much.
I think two or three years ago,
when we started Scroll,
compared with five years ago,
where the ZK technology was,
the efficiency of the prover had improved
by three orders of magnitude,
through advanced proving systems,
the underlying hardware acceleration,
and proof recursion.
So the efficiency had already improved
by a thousand times.
So that's why I think,
compared with a different ZK
machine, I don't think a bytecode-level zkEVM has that much overhead.
I think at most probably two or three times, and sometimes it can be even
lower. And that's only talking about the prover cost, which is not the dominant cost for a
layer 2 transaction. Imagine that for a layer 2 transaction,
putting the data on chain takes the majority of the cost, over 90%
of the cost. Then, among the remaining 10%, maybe using
some other zkVM can cut the proving cost by two or three times. And that's only
one portion of this 10%, which means it won't differ that much, even if you
are using another ZK virtual machine. But the biggest loss is that you lose compatibility, and
you have to rebuild your ecosystem from scratch. You will really suffer from that.
So that's why I don't see too much loss on the prover side.
And the technology keeps improving.
We predict that ZK technology will improve by another 10 times.
My prediction is that in six months or one year, we can make a zkEVM that is 10 times faster.
So it's really not as painful as people predicted five years ago.
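Ye's cost argument reduces to a couple of lines of arithmetic. The 90/10 split below is just the illustrative figure from the conversation, not measured data:

```python
# Sketch of the L2 cost argument: if ~90% of a transaction's cost is
# on-chain data, even a 3x faster prover barely moves the total.
def total_cost(data=90.0, proving=10.0, prover_speedup=1.0):
    # All figures are illustrative percentages of a baseline transaction cost.
    return data + proving / prover_speedup

base = total_cost()                    # 100.0
fast = total_cost(prover_speedup=3.0)  # 90 + 10/3, about 93.3
saving = 1 - fast / base
print(f"{saving:.1%}")                 # about 6.7% total saving from a 3x faster prover
```

So a two-to-three-times prover advantage for a non-EVM zkVM translates into single-digit-percent savings on the whole transaction, which is the trade Ye weighs against losing EVM compatibility.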
So is it fair to compare this to, say, no
longer building applications in C++,
just because C++ may be a little bit
more efficient at some things?
The convenience and the developer mindshare that you
get with more common developer languages
more than makes up for it.
So are you talking about using C++ to write
smart contracts versus Solidity, or...?
Oh no, no, no, not smart contracts, just general
code. I mean, basically, I went to university a while ago. And back then,
whenever we crunched large batches of data, obviously C++ was the go-to
thing to use, just because it was much more efficient than Python, for instance. But
you're saying that using Python and other languages that are more convenient for developers,
this is the parallel, right?
You're saying the technology
has made such tremendous advancements
that the factor of two or three
that you're possibly losing doesn't really matter much.
Yeah, in some sense, yes.
We want to keep the Python-level developer experience,
but magically reduce the overhead of the backend
to a degree where
it doesn't matter whether
you optimize in C++, because that doesn't really influence your overall cost.
In some sense, yes.
It's similar to how other VMs are trying to compile things;
I think it's very similar.
But one thing which is quite different is that if you run C++ versus Python,
the complexity on the underlying CPU still influences efficiency.
But here, especially if you look at the transaction cost, it's not just execution cost.
It's also data cost, which makes this difference even smaller,
because it's like 5% of all the costs.
You save on this 5% to some degree, but you give up the compatibility
and the existing users, while the overall transaction cost will be similar.
Yeah, perfect.
I want to talk about how the cost of an L2 transaction is made up later, and talk about
data availability then. So I would like to table this for now. Let's talk about the
creation of Scroll a little more. I remember the EthCC presentation where Jordi for the
first time talked about building an opcode-by-opcode translation into a zkEVM; that was in June
2021 and blew everyone's minds. How did that fit into your timeline? Because I know
Scroll was also founded in 2021.
Yeah, that's a really great question.
So I think a lot of people maybe don't know the story behind it.
When we started, in early 2021, in January I think, that's the earliest,
we had the idea that we wanted to build a layer 2.
But the first idea of Scroll, because we had this hardware idea in mind,
was to design a decentralized prover network where we could use a network of miners
to generate proofs for us, to provide enough computation power to generate proofs for large
applications. So that's our initial idea: first have this architecture, and then the zkEVM is just the
application. And initially we were thinking more about, even in our initial post, I think in April
2021, a ZK virtual machine: how a ZK CPU, how that cycle works.
But later, I talked with, I met Barry from the Ethereum Foundation,
who actually invented zk rollups,
the concept of zk rollups.
Barry Whitehat, right?
Yeah, Barry Whitehat, yeah.
And he had been thinking through
whether a zkEVM is possible,
whether a bytecode-level
zkEVM is possible or not.
And then
he read our documentation and said:
okay, so if you also want to build something like this,
we also had some ideas.
Why don't
we collaborate on this and share?
If you think this is the right approach
to build, why don't we just build it in the open,
everything open source, and just build that?
I think at that time,
that's where all the zkEVMs,
even including Hermez, the idea from Jordi,
all started from.
So I think Barry was discussing with someone else
this idea of how you use lookups
to solve the biggest problem: reads and writes to storage.
Because with zero-knowledge proofs,
you prove some static
program: for example, you have some sorting, and you prove this fixed program. But a VM needs
to read and write the state, the storage, and that mapping is very costly, because you need to
use a Merkle tree. Every time you read an element, you need to prove a Merkle branch.
And the idea is that you can use lookups to solve this problem entirely, because you
only need to build a lookup table, and then you can do efficient batched lookups into
that table. And that's where all the ideas come from, including Jordi's.
I think Barry had been talking with a lot of teams, including Jordi, us, as well as some
other layer 2 teams. And we were the first ones to come in wanting to build in the open,
and we wanted to build with their team as an exploration, because we believe
that building in public is one of the core spirits of the Ethereum community, and why it is so vibrant:
because people can contribute freely,
people can collaborate freely.
And then I think Jordi leaned more towards a STARK-based approach,
but the architecture is very, very similar.
It all originates from that very old doc
where you use lookups to solve this problem.
And then he specced it out;
he wanted to use STARKs,
he took some different design choices,
and then he chose to build with Polygon
and also interpreter to interpret by code into that instruction set.
But we are more firmly believe that directly build circuit for each upcode,
so that you can have per-up code mapping.
You don't need to build any interpreter.
You don't need to build a sequencer from scratch.
You can reuse everything that Nizum has.
So that's why, like, us and the PSC team, which is short for privacy and scaling exploration,
team led by Barry Whitehead and work towards the direction where it's upcode level compatible,
maximum the usability of layer one sequencer, and then Jordani leads towards a stock-based
backend and some kind of other instruction set with an interpreter, but still like with
this kind of by-code level compatibility. So that's where different approaches differ,
but it's all actually arranged from the breakthrough or the idea for, okay, use look up to do this.
use custom gates to express your opcodes.
So it's very similar.
I think it's exactly the same timeline, actually,
because I remember
Barry talking with many teams about
whether they were interested in building this together,
but there were different considerations between teams
around how this needed to be built, and different design choices.
And we were more aligned on the one side:
use KZG, use SNARKs,
maximize reusability, and build in the public.
So yeah, that's the difference.
But everyone started from the same point, with a similar architecture
for how you handle this memory problem.
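The lookup trick Ye credits to that early doc can be caricatured in Python. Counting hashes for the Merkle alternative and checking set membership for the lookup are gross simplifications of real lookup arguments, meant only to show why batching storage accesses against one shared table beats proving a Merkle branch per access:

```python
# Sketch: why lookups help a zkVM with storage. Proving each storage read
# via a Merkle branch costs ~log2(n) hashes per access inside the circuit,
# while a lookup argument amortizes all accesses against one shared table.
import math

def merkle_hash_count(n_leaves, n_accesses):
    # In-circuit hash constraints if every access proves its own branch.
    return n_accesses * math.ceil(math.log2(n_leaves))

def lookup_check(table, accesses):
    # Circuit-level analogue of a lookup argument: every witnessed
    # (key, value) access must appear in the committed table; batched
    # accesses all share that single table.
    t = set(table)
    return all(a in t for a in accesses)

table = [(k, k * k) for k in range(16)]
accesses = [(3, 9), (5, 25), (3, 9)]       # repeated reads are fine
print(lookup_check(table, accesses))        # True
print(merkle_hash_count(2**20, 1000))       # 20000 hashes for 1000 reads
```

In a real proving system the membership test is enforced with polynomial constraints rather than a Python set, but the economics are the shape shown: per-access Merkle cost grows with tree depth, while lookup cost is dominated by one table commitment shared across the batch.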
Yeah, it's super interesting to hear
that it all started with Barry Whitehat's idea
of how to do the lookups.
So you guys recently kicked off your main net.
If I remember correctly, it was mid-October.
So tell us about that.
Yeah, it has been a really, really long journey from start to mainnet.
I think we officially launched, the genesis happened, on October 10th, and we encoded a message, "the future is open," to express that we think the world needs to be connected, it's global, and the future is open.
And then I think the official opening was in mid-October.
It was really exciting.
The whole team was in Vietnam doing an off-site,
so we were actually celebrating in person with each other.
Because it's a joint effort: not only research, not only engineering,
but the whole team, BD, DevRel, ops, all those different teams joined to build this together.
It's not only a product, not only an engineering effort, but a community
effort. I think we did a lot, a lot of preparation to make this happen.
Our testnet had been running for over 15 months, from pre-alpha testnet to alpha testnet to
beta testnet, with a lot of upgrades, a lot of testing. I think before mainnet launch,
we were very nervous about the security, because you are really launching something
that handles users' assets: users can really deposit their money on your chain.
You really need to be responsible. So we paid huge attention to security, the monitoring
system, everything else, to make sure it was a successful launch.
For example, before we launched, in our testing environment we fetched
all the transactions from Polygon, from Binance, from different chains, to
replay them in our testing environment, to make sure we could generate proofs for those transactions,
as a stress test. We also spent quite an amount of money on auditing,
internal and external. Internally we have a red team that keeps looking for bugs,
attacking our testnet and the environment, trying to make fake proofs, trying to attack the system.
And externally, we contracted the best, world-class firms,
including OpenZeppelin and Zellic, for node and contract auditing, a lot of
companies like that. We also set up a one-million-dollar bug bounty for the
community to report any bugs in our system. So, a lot of preparation. And also,
our team has been talking with a lot of projects to deploy and test their
applications on Sepolia. Yeah, I think it's a very exciting moment
when you literally turn an open source project, an open source demo,
into a product, a 100% ready product, online.
So I think that's a very, very incredible journey for me.
And also, you see your research results go from paper to a product
used by a lot of people.
So it's quite exciting.
And we are really looking forward, now that it's launched, to
how we build the community and how we keep improving the tech.
Yeah, it's really a moment that we will remember. It's launched, and it's very exciting.
Yeah, congratulations on that.
Tell us about the Scroll community. Who is your community?
And do you have people who are building on Scroll exclusively?
Yeah, yeah, that makes sense.
So first, I think my definition of community is this:
there are engineers who can build their own products,
their own projects,
and do their own research,
but we communicate, we collaborate, we talk in the open, we work together on things like benchmarks.
Those, I think, are part of our community.
For example, we are hosting the ZK Symposium.
Around mainnet we recently reduced the frequency a little bit,
but we will catch up very soon. Previously it was a biweekly cadence:
we invite ZK researchers and application developers
to talk about their ideas for building with ZK,
what applications they are building using ZK.
Because if you look at Twitter
and all the places where people talk, it's mostly marketing
material: people talk about how great they are, how fast they are,
why they can achieve 100% privacy, all that stuff.
But no one really digs into the details of the architecture,
and we want to build a place where people can dive really, really deep into what they are building and the architecture.
Some of that happens in the ZK Study Club, where people dive into very academic papers, some mathematical constructions.
But very few people have a place to share very deeply about ZK application ideas: how they built the application, what technology they are using, what the architecture looks like.
And that's where the ZK Symposium sits: people have one hour to present in depth the application they are building and the technology, and then people can ask questions.
So I think those are like in that way, we are building a like research community.
And also because everything is building in the open.
Like as I mentioned, like our our liquevm reuse all the tooling from, for example, like the preying library is from the cache team.
and there are a lot of other teams
like also building on top of
this crypto library
that those are also
I consider as part of our community
like the key community to build.
That's one part.
And the second part is more the developer community, where people deploy applications. After mainnet we introduced something called the Scroll Origins NFT. Usually I'm not a big fan of this kind of NFT thing, but this one is actually a soulbound token: if you deploy applications early in our ecosystem, you can claim a really cool Scroll Origins NFT from us. I don't know if you've noticed, but its design is totally different from just images. Usually NFTs are just images stored somewhere else with a link put on chain, which is kind of a fake NFT.
What we do is something similar to Uniswap V3: it's a generative curve, a polynomial. When you deploy your contract, the time you deployed, meaning the sequence, how many contracts have been deployed before yours, determines a different polynomial for you. Then we can draw this polynomial on chain, so you get a different curve. I think if you deployed earlier, the degree will be four or five, I can't remember exactly. So you get a curve with a different number of turning points to show that you were early in our ecosystem, and we really appreciate your effort.
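The mechanics Ye describes, deriving a deterministic polynomial from a contract's deployment order and drawing it as the NFT's artwork, could be sketched roughly like this. This is a toy illustration, not Scroll's actual contract: the hashing scheme, the degree cutoff, and the coefficient ranges are all invented for the example.

```python
import hashlib

def origins_curve(deploy_index: int, early_cutoff: int = 1000):
    """Toy sketch: derive a polynomial from a contract's deployment order.

    Earlier deployers get a higher-degree curve (more turning points),
    echoing the 'degree four or five if you deployed earlier' idea.
    """
    degree = 5 if deploy_index < early_cutoff else 3
    # Hash the deployment index to get deterministic pseudo-random coefficients.
    seed = hashlib.sha256(str(deploy_index).encode()).digest()
    coeffs = [int.from_bytes(seed[2 * i:2 * i + 2], "big") / 65535.0 - 0.5
              for i in range(degree + 1)]
    return coeffs

def evaluate(coeffs, x: float) -> float:
    """Evaluate the polynomial at x via Horner's rule (to draw the curve)."""
    y = 0.0
    for c in reversed(coeffs):
        y = y * x + c
    return y

# The curve is fully determined by the deployment index, so the artwork
# can be redrawn on chain at any time without storing an image anywhere.
assert origins_curve(42) == origins_curve(42)   # deterministic
assert len(origins_curve(7)) == 6               # early deployer: degree 5
assert len(origins_curve(5000)) == 4            # later deployer: degree 3
assert origins_curve(7) != origins_curve(8)     # each deployer gets a distinct curve
```

The point of the design is that the NFT is derived from on-chain data rather than linking out to an image, which is the contrast Ye draws with "fake" image NFTs.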
Because being in an ecosystem early definitely introduces some risk, since things are less battle-tested, so we welcome the early builders who join our ecosystem. The NFT is only there to welcome you and prove that you were there, and there early. And everything we do here is based on the fundamentals that make us possible: zero-knowledge proofs and polynomials. You deploy an application, you get a polynomial, you get an NFT that draws this really cool polynomial. Of course it encouraged some people to just deploy a bunch of contracts, maybe twenty, but that was within our prediction. Our motivation and objective is to encourage more people to try deploying their first contract, and in a neutral way. It's not that we select people and give you this or that in an unfair way; we want to hold to the blockchain principle of being credibly neutral: anyone who deploys gets something, and also experiences how seamless it is. So some community comes from there. And there are also more native applications that share the same values: they believe in our way of being open, being community driven, being credibly neutral, and they come to us. We don't provide grants to specific applications, because we think that turns your ecosystem into a zero-sum game. There will always be new chains, new layer 1s, new layer 2s; they launch their token, raise a lot of money, and can give a lot to new developers: I give you this money, you come deploy on our chain. But then it becomes a bidding game: I give you this amount, the other chain gives you that amount, and you go to the other chain. It's not making the whole ecosystem grow; we're not making the small pie of crypto larger, it's just competing with each other.
So what we want to do is focus on building stuff and attract a more organic community. People are organically attracted to us because they see there's a huge opportunity on Scroll by being early, and it's one of the most legit, general-purpose, top layer 2s, so there will be a huge opportunity there. What we focus on is getting everything ready for developers. For example, we have Etherscan support, so users and developers can use the infrastructure they're most familiar with. We have Chainlink as oracle support, which means all the DeFi can use the oracle they're most familiar with. We have all the indexers and RPC providers ready. That's what we focus on: building an environment where it's easy for developers to build. But we don't force you to build what we want; we just create the foundation. We keep improving documentation, tutorials, workshops, and a lot of people are actually attracted by this. For the more commercialized applications, there's for example one wallet, Versa, and some other wallets native to Scroll, and some new DeFi, like Cog Finance and other financial applications on Scroll. And a lot of small AI games are happening on Scroll. A few days ago, and I don't know who built this, there was a game called Chat NPC where you chat with an AI character and try to negotiate through the conversation about how much you can get; depending on how you end up, you get a score. It's a very simple game, but it's very fun. I think being fun is really one important way to attract builders. And I know a lot of applications are still building, because the more serious, more commercialized, more production-ready a project is, the longer it takes to build and be ready. But for everyone listening: pay attention to our ecosystem account and our main account, and you will notice a lot of good opportunities, good projects building on top of us, which will come out in the next few months.
We also support a lot of grassroots hackathons. I think we've probably participated in over 20 hackathons around the world, almost everywhere. At a very recent one, ETHGlobal in New York, it was the first time we surpassed all other chains in terms of deployments, and with a small prize. Usually it works the other way: people offer a higher prize, so people deploy on you. But if you're a real zkEVM and you provide something cool, you can get the most deployments, and that's very welcome in the hacker crowd. It's very exciting because this was even before our mainnet launch, when a lot of other chains had even better infrastructure support, but we could still get so many hackers interested in building on top of us. That's definitely something we're celebrating, because these are very organic, grassroots hackathon projects, and I do believe that in this way you can grow your grassroots community to a large extent. So those are the developer communities we have, including, as I mentioned, some wallets, some DeFi applications, and a lot of hackathon groups. And the next step, I think,
looking forward, is more regional communities. We will start seriously building community in several places, including Turkey, Nigeria, Malaysia, and Argentina. That's our plan for where we seriously want to put in effort to grow the community. Because our mission is really to scale Ethereum: we want to codify trust, empower ownership for individuals, and achieve financial inclusion. And you see that blockchains today are not scalable enough for a billion users; most projects don't even know where that billion users would come from. We have a very clear idea of how to reach users, because the users who actually need crypto are where the billion people will really come from. In the U.S., even in China, in Europe, in a lot of places, people don't really need crypto because they have a well-established financial system; maybe they use it for cross-border payments. But in a lot of places like Africa, and I was there for ten days on a trip to really understand how people use crypto in practice, you find that a lot of people really suffer from currency inflation, and they have a lot of problems that can be tackled by blockchain. We really want to help those users and onboard them into the Ethereum ecosystem. That's why we chose places that suffer from these problems and have a real need for crypto, and we want to double down on building regional communities there. Eventually, in a few months or a few years, I imagine Scroll will be thought of as the layer 2 with the most real users coming from developing countries, from parts of Asia, all those places, so that when an application comes to deploy, they know there are real users there. It's not just the same group of people migrating from chain to chain because it's a new chain. That's our mission, and I consider that part of our community, but we are still building it. We just started with Turkey. If you look at this year, a lot of projects put on events around DevConnect, and we are building a very large, probably the largest ever, event called L2 Day together with L2Beat. We are co-hosting it, with over 2,000 people capacity, to talk about layer 2s. And we are seriously building community, starting from Turkey, but other places too. If you're in those regions and you really want to change people's lives there, talk to us; we want to build community there. I can really see where our community can come from.
Okay. So basically, what you're saying is you're trying to grow the community organically through culture, rather than through monetary incentives, which is often the way it goes in crypto.
Yes.
Speaking of incentives, what's the token situation for Scroll?
Yeah, for legal reasons we haven't really said anything publicly. I think about it more as a mechanism: we're asking why you would even need one. Maybe there are some reasons around sequencer design or prover design; there might be other models for that. So we have to think through what the best mechanism and incentive model is for our system to work best, instead of just doing the token thing. We have been working on some mechanism design aligned with our system, to make the system more stable and more usable. But for various other concerns, we're not there yet.
But currently, gas on Scroll is paid in ETH.
Yes, yes.
You already alluded to it in your last answer: you currently still have centralized sequencers. It's been normalized over the last year or two, but in principle, what it means is that you have one or several permissioned sequencers, people or entities who can actually build blocks, which is very much not what blockchains should strive for. In terms of decentralization, your least decentralized component determines how decentralized the entire system is, right? So if you have one centralized sequencer, essentially the entire chain is built by a single entity. What are your plans towards decentralization?
Yeah, that's a great question. I definitely agree that, in some sense, it's different from how a layer 1 blockchain normally works, where it's not the case that only a few parties can produce blocks. But when I think about this decentralization problem, I think again from first principles about why you need decentralization. In the layer 2 context specifically, there are two components. One is the sequencer, who produces a block for you and passes it to the prover. The prover generates a ZK proof for each block, proving that all the transactions inside the block are valid, and then finally the block and the proof are submitted on chain, where an on-chain verifier verifies them. So there are two parties: the sequencer produces blocks, and the prover generates proofs. Both definitely need to be decentralized, but for different reasons.
So for the sequencer: one thing to add first is that even though we are using a centralized sequencer today, it doesn't affect the security of users' funds. Think about what bad things a sequencer could actually do. From a user's perspective, you send a transaction, the sequencer includes it in a block, and the prover generates a proof for that block. If we, operating the sequencer, wanted to do something bad and inserted an invalid transaction, we couldn't generate a proof for it, because a ZK proof can only attest to valid transactions. So we can't insert any bad transaction. That's number one: because of the ZK proof, the sequencer can't do anything invalid.
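That "can't do anything invalid" property can be illustrated with a toy model. Everything here is hypothetical: a real zk rollup uses a succinct SNARK/STARK over the EVM state transition, not a hash. The point is only that the verifier accepts a block solely when a proof over a valid transition checks out, so a sequencer that sneaks in an invalid transaction simply cannot produce an accepting proof.

```python
import hashlib
import json

def apply_tx(balances: dict, tx: dict) -> dict:
    """State transition rule: a transfer is valid only if the sender can cover it."""
    if balances.get(tx["from"], 0) < tx["amount"]:
        raise ValueError("invalid transaction: insufficient balance")
    new = dict(balances)
    new[tx["from"]] -= tx["amount"]
    new[tx["to"]] = new.get(tx["to"], 0) + tx["amount"]
    return new

def prove_block(pre: dict, txs: list):
    """Stand-in 'prover': it can only produce a proof by actually executing
    the valid transition (a real prover outputs a succinct ZK proof)."""
    post = pre
    for tx in txs:
        post = apply_tx(post, tx)  # raises on any invalid tx -> no proof exists
    digest = hashlib.sha256(json.dumps([pre, txs, post], sort_keys=True).encode())
    return post, digest.hexdigest()

def verify_block(pre: dict, txs: list, post: dict, proof: str) -> bool:
    """Stand-in 'on-chain verifier': checks the proof against the claimed transition."""
    digest = hashlib.sha256(json.dumps([pre, txs, post], sort_keys=True).encode())
    return proof == digest.hexdigest()

genesis = {"alice": 100, "bob": 0}
post, proof = prove_block(genesis, [{"from": "alice", "to": "bob", "amount": 30}])
assert verify_block(genesis, [{"from": "alice", "to": "bob", "amount": 30}], post, proof)

# A malicious sequencer inserting an overdraft cannot obtain a proof at all:
try:
    prove_block(genesis, [{"from": "bob", "to": "alice", "amount": 999}])
except ValueError:
    pass  # no proof -> the on-chain verifier will never accept this block
```

Note that this toy "proof" is neither succinct nor zero-knowledge; it only models the soundness argument Ye is making.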
The second thing a sequencer can do is censor transactions: you send a transaction to a centralized sequencer, and it says no, I won't include you. But that can be solved. If users find that the layer 2 won't accept their transaction, they can send it on chain to layer 1, and the layer 1 contract enforces that if the sequencer doesn't include this transaction within some time period, anyone can force it through. In that way you avoid the censorship problem, and users never suffer from being unable to withdraw their funds, because you can always do that on layer 1, even if the sequencer rejects you.
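The escape hatch Ye describes, posting the censored transaction on L1 so that after a deadline anyone may force it into the rollup, can be sketched as a simple queue. This is a toy model: the function names, the block-based deadline, and the data structures are invented for illustration and are not Scroll's bridge contract.

```python
from dataclasses import dataclass, field

FORCE_DELAY = 100  # deadline measured in L1 blocks (illustrative number)

@dataclass
class ForcedQueue:
    """Toy L1 inbox: censored L2 transactions wait here with a deadline."""
    current_block: int = 0
    queue: list = field(default_factory=list)  # entries of (tx, enqueue_block)

    def submit_on_l1(self, tx: str):
        """A censored user posts the transaction directly on L1."""
        self.queue.append((tx, self.current_block))

    def sequencer_includes(self, tx: str) -> bool:
        """The sequencer can voluntarily include a queued tx at any time."""
        for i, (queued, _) in enumerate(self.queue):
            if queued == tx:
                self.queue.pop(i)
                return True
        return False

    def force_include(self, tx: str) -> bool:
        """After the deadline, ANYONE may force the tx into the rollup;
        the L1 contract would reject state roots that skip it."""
        for i, (queued, since) in enumerate(self.queue):
            if queued == tx and self.current_block - since >= FORCE_DELAY:
                self.queue.pop(i)
                return True
        return False

inbox = ForcedQueue()
inbox.submit_on_l1("withdraw-my-funds")
assert not inbox.force_include("withdraw-my-funds")  # too early: deadline not reached
inbox.current_block += FORCE_DELAY
assert inbox.force_include("withdraw-my-funds")      # deadline passed: anyone can force it
```

This is why funds stay safe even under a censoring sequencer: eventual inclusion is guaranteed by L1, which is exactly the gap between fund security and the real-time censorship problem Ye turns to next.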
So to come back to it: a centralized sequencer doesn't affect the system's security in that sense. What it really affects is real-time censorship, which is a problem. Imagine you will be liquidated in the next minute. You send a transaction to deposit some collateral, but we reject it, and you get liquidated. Even if your transaction is guaranteed to be included within the next few blocks, it doesn't matter, because you've already been liquidated. That's a real problem, and that's where a decentralized sequencer can help. But there may still be other ways to solve it. Maybe you can use an encrypted mempool, where you encrypt the transaction so the sequencer can't distinguish which transaction is which and has to include it, and only after decryption do you know what was included. That can also solve the problem, right? So a decentralized sequencer is a tool; it's not that we must have full decentralization for its own sake, it's for solving specific problems. And we do think it's a promising way to solve this real-time censorship-resistance problem. There may also be legal considerations: if multiple entities run sequencers in different regions, the censorship risk can be reduced.
That's for the sequencer. For the prover, there's a different set of problems. What decentralization solves there is scalability: imagine a network of miners, or provers, running the proving algorithm. It's very different from proof-of-work, even though you also have high hardware requirements. Proof-of-work is basically 10,000 people computing over the same randomness, and whoever gets the answer first can submit. But ZK proving is a fixed, deterministic algorithm that anyone can execute and get the result. That's it. So it's quite different.
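The contrast Ye draws, a probabilistic race over a shared nonce versus a deterministic computation anyone can reproduce, can be made concrete. This is a toy sketch with a made-up difficulty target, and a hash stands in for actual proof generation.

```python
import hashlib

def pow_mine(block: bytes, difficulty: str = "000"):
    """Proof-of-work: everyone races over the SAME search problem;
    only the winner's nonce is useful, all other work is wasted."""
    nonce = 0
    while True:
        h = hashlib.sha256(block + str(nonce).encode()).hexdigest()
        if h.startswith(difficulty):
            return nonce, h
        nonce += 1

def zk_prove(block: bytes) -> str:
    """Stand-in for ZK proving: a fixed deterministic computation.
    Any prover who runs it gets the same useful result; it is
    outsourced computation, not a lottery."""
    return hashlib.sha256(b"proof-of:" + block).hexdigest()

block = b"block-42"
# Two independent 'provers' compute the same proof: the work is deterministic.
assert zk_prove(block) == zk_prove(block)
# PoW, by contrast, is a search: the nonce is only known after the race.
nonce, digest = pow_mine(block)
assert digest.startswith("000")
```

Because the proving work is deterministic and useful, it can be handed to whichever machine is available, which is the property the next part of the answer builds on.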
So it's more like outsourced computation: you outsource this useful computation to the provers. Having a decentralized prover network also gives you a backup: if one prover party goes down, other provers can still generate proofs for you, so you get a stronger liveness guarantee. And you don't need to buy all the machines yourself; it's more scalable, since people can use their own machines to generate proofs. Yeah, it's more resilient.
So, to actually gain the forced inclusion that you talked
about earlier, if you're being censored on the L2, you need to make all transaction data available on the L1. This is basically why we're now getting danksharding with blobs, where temporary data will become much cheaper. But data availability is still the main cost driver for L2s, on the order of 95% or more, compared to the checkpointing you need to do to prove the state, right? So a couple of L2s have gone the Validium route recently, where they basically decided not to post all transaction data to L1, because it drives up prices on the L2. Currently prices aren't crazy, but they have been crazy in the past, and they'll probably become crazy again at some point, which will mean lots of applications that are currently viable, like the AI game where you negotiate with an NPC, will no longer be viable on layer 2. So what's your strategy there? Because even when danksharding comes, call data will become maybe 10 times cheaper or so, but that's still pretty expensive for lots of applications. What are your thoughts on going the Validium route?
Yeah, that's a great question. Validium definitely trades away some security to be cheaper. But for us, currently we are sticking to what I said: we will stick to posting data on Ethereum and strictly inherit Ethereum's security. Different rollups can have different value propositions, and for us, security first drives most of our design decisions. That's why we do auditing the way we do, open-source the way we do, run the bug bounty the way we do. It drives almost all our decisions, because we do feel there will be a crucial set of applications that need this level of guarantee, and we make that our first priority. So for quite a long time we will stick to this approach. Vitalik recently published a blog post about different layer 2s, and there's actually a spectrum of what applications need. On the left-hand side there's the keystore, the foundation of smart contract wallets, storing key-value mappings of keys, which needs extremely high security. Then there's ENS, which stores your identity information. The further left you go, the higher the security you want: DeFi, institutional money, governments issuing something important. On the right side, it might be gaming, NFTs, things like that. Our value proposition is that the Scroll main chain will sit on the left-hand side, where we attract the more security-driven applications. And if danksharding isn't there yet, we might spend some effort helping the Ethereum ecosystem build a solution for everyone. If we really reach the point where even danksharding is not enough, I can see other teams building layer 3s on top of us. They can be validiums, they can make other trade-offs on top of us, they can even use state diffs and post data in a different way. That's how I see this going: we will maintain a very high standard for security to attract the most legit, large, fundamental applications, and if games really require extremely high throughput, they can build a layer 3 as a validium on top of us, or some other side solution. At the same security level, we're also trying to reduce costs. For example, we're working on data compression: how you compress the data you post on chain. There are ways to make the data you post on chain about three times smaller, so we're looking into compression, and into how to reduce the bridging cost. But security should always be the baseline and never something we compromise.
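As a rough illustration of the compression idea (this is generic zlib over mock calldata, not Scroll's actual scheme), rollup batches compress well because transactions share a lot of byte-level structure:

```python
import zlib

# Mock batch: rollup calldata is highly repetitive (shared function selectors,
# zero-padded 32-byte words, recurring addresses), which is why it compresses well.
tx = (bytes.fromhex("a9059cbb")        # a transfer-like 4-byte selector
      + b"\x00" * 12 + b"\xab" * 20    # a zero-padded 20-byte address
      + (1000).to_bytes(32, "big"))    # a 32-byte amount word
batch = tx * 200                       # 200 similar payloads in one batch

compressed = zlib.compress(batch, level=9)
ratio = len(batch) / len(compressed)

assert len(compressed) < len(batch)
assert ratio > 3  # a repetitive batch easily clears the ~3x figure mentioned
assert zlib.decompress(compressed) == batch  # lossless: data fully recoverable
```

The design constraint Ye implies is that compression must be lossless, since the posted data is what lets anyone reconstruct the L2 state, so the security guarantee is unchanged while the bytes paid for on L1 shrink.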
Do you think we'll be able to do data availability optimistically at some point?
Because that would solve a lot of problems, right?
I think it still depends on the security assumptions. For Scroll's main chain, first of all, I don't see any very well-established optimistic data availability solution yet. There are teams, like EigenLayer, building DA solutions, but I think it will still take some time to test whether they work, and whether there are other issues. Because once you've been running this infrastructure on layer 1, you know how many things depend on it: oracles, RPC providers. All of those might require direct access to the data, ideally on the same chain. So you don't know, if you switch to another solution, what subtle changes you'd need to make. I don't think most of these solutions are mature enough for a main chain to migrate to. But we remain optimistic, and we'll keep paying attention to the progress from other companies and other solutions. Right now we think it's not there yet, so we will stick to this security principle, and we'd encourage layer 3s to consider it as an option, if you're willing to take on some trust assumptions.
Yeah.
You posited earlier that there will be a whole ecosystem of L2s with different trust assumptions and different associated costs. I agree that's the goal, but we're currently very far from it. Just before we started recording this podcast, a tweet from Joseph Delong came out, and it literally read: L2s don't actually scale Ethereum, they just fragment it into a bunch of unrelated chains. He's obviously alluding to the fact that inter-L2 bridges currently aren't operative, so basically the way to go between L2s is to bridge via Ethereum, which is obviously very costly. In principle it seems like bridging across different L2s should be possible, because they have the same security assumptions and are built on top of Ethereum. When do you think we'll get there, and how do you see it happening?
Yeah, that's a great question. In terms of interop between layer 2s, I think there are two directions people are going. They're orthogonal, not contradictory. One approach is, as you describe, that all the layer 2s are based on Ethereum, so they post their state roots to the same layer 1. What you can do, especially for ZK rollups with faster, shorter finality, is this: imagine the Ethereum layer 1 with two state roots posted to it, say from Scroll and from another chain, call them A and B. To access B's state from A, you can read the state root of the other layer 2 from the Ethereum layer 1, because your own root is also there. Then you provide a proof, a Merkle path to that root, proving the storage slot, the element you want to read from the other layer 2. That way you can trustlessly read, and operate over, the other chain's state. So that's how you can do this in a trustless way.
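The trustless read Ye outlines, proving a storage slot against another L2's posted state root via a Merkle path, can be sketched as follows. For simplicity this uses a plain binary Merkle tree; Ethereum-style chains actually use Merkle-Patricia tries, and the function names here are invented.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list) -> bytes:
    """Build a binary Merkle tree bottom-up and return the root."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate the last node on odd levels
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_path(leaves: list, index: int) -> list:
    """Collect (sibling, sibling_is_right) pairs from leaf to root."""
    level = [h(leaf) for leaf in leaves]
    path = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sib = index ^ 1
        path.append((level[sib], sib > index))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return path

def verify(root: bytes, leaf: bytes, path: list) -> bool:
    """What chain A's contract would run: check a claimed storage slot of
    chain B against B's state root as posted on the shared L1."""
    node = h(leaf)
    for sibling, sib_is_right in path:
        node = h(node + sibling) if sib_is_right else h(sibling + node)
    return node == root

# Chain B's state, whose root is posted on the shared L1:
slots = [b"slot0=7", b"slot1=42", b"slot2=0", b"slot3=9"]
root_on_l1 = merkle_root(slots)
proof = merkle_path(slots, 1)
assert verify(root_on_l1, b"slot1=42", proof)       # honest read succeeds
assert not verify(root_on_l1, b"slot1=999", proof)  # forged value is rejected
```

The shared L1 is what makes this trustless: both chains' roots live in the same place, so neither side has to trust a multisig bridge, which is the point Ye makes next about layer 1 to layer 1 bridges.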
Think about two different layer 1s: because they have totally different validator sets, the bridge has to be a multisig or some other validator network, where you don't have shared security. But between different layer 2s, especially ZK-based layer 2s, you can read the other chain's state through this proof, a Merkle path from the other chain. There's still some latency, because you need time to generate the proof, and that latency might break composability. So I do feel it's possible, in some sense, to have a standard for messaging between different layer 2s, but it's still not super practical or user-experience friendly to do it fully trustlessly today; people still use bridges regardless. But as proving technology improves, and as more layer 2s adopt similar standards for how this communication happens, it might be solved. So that's one direction.
The other direction people keep talking about is shared sequencers. A shared sequencer is where the same party sequences transactions from two different chains. Since the same sequencer knows the ordering, that's how you might get some level of composability between different chains, if you want to interact with the other layer 2. But there are still a lot of problems with this, because most shared sequencers only order the transactions; they don't guarantee that if one transaction succeeds, the other executes too. It doesn't have that level of atomicity. So both directions are still very early.
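The limitation Ye points out, that a shared sequencer gives you a joint ordering but not atomic execution, can be shown with a toy model. All the names here are invented, and real shared-sequencer designs differ in the details.

```python
from itertools import count

seq = count()  # the shared sequencer's single, global ordering counter

def sequence(chain: str, tx: str) -> dict:
    """The shared sequencer assigns one global position across BOTH chains."""
    return {"pos": next(seq), "chain": chain, "tx": tx}

def execute(ordered_tx: dict, will_succeed: bool) -> dict:
    """Each chain executes its own slice independently; the sequencer has
    no power to make execution on chain A depend on the outcome on chain B."""
    return {**ordered_tx, "status": "ok" if will_succeed else "reverted"}

# A user wants a cross-chain swap: leg1 on chain A, leg2 on chain B.
leg1 = sequence("A", "lock 10 tokens")
leg2 = sequence("B", "release 10 tokens")

# Ordering IS guaranteed: leg1 comes before leg2 in the global sequence.
assert leg1["pos"] < leg2["pos"]

# Atomicity is NOT: leg1 can succeed while leg2 reverts on its own chain,
# leaving tokens locked on A with nothing released on B.
r1 = execute(leg1, will_succeed=True)
r2 = execute(leg2, will_succeed=False)
assert r1["status"] == "ok" and r2["status"] == "reverted"
```

This ordering-without-atomicity gap is why Ye calls both interop directions early: ordering alone does not make a cross-chain transaction behave like a single transaction.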
We're watching those research directions closely, but currently we're just focused on building one chain. I think eventually different layer 2s will have different communities, and it's good to have a standard to talk to each other. Because at least compared with fragmentation across different layer 1s, it's theoretically possible to do this in a trustless way. But it will still take some time.
I mean, when you say different L2s will have different communities, that's somewhat at odds with what you said earlier, that different applications will choose different L2s based on the security assumptions they need. So basically, I may want to play some game that doesn't need a lot of security assumptions, but I still want my smart contract wallet living on a different chain, right?
Yes, exactly. Yes.
Yes, I think because of these different properties and assumptions, eventually different layer 2s will end up different, because they have different value propositions, totally different missions and visions. Some want to get users through marketing, and that may be part of their brand. Some might want to build a totally new community separate from the EVM. We're trying to stay aligned with Ethereum. And right now you don't really see the same users using different chains for different applications, because most chains don't have that many exciting applications; it's the same suite of DeFi everywhere. But you can feel that different chains do have different brands: for some chains the first impression is one thing, for others something else. For Scroll, the first impression for most people is super high Ethereum alignment, and being very rigorous about the tech and security assumptions.
So if you go into the field in Turkey or Malaysia or Nigeria or Argentina, any of the countries you mentioned earlier, will users who actually use this to solve real-world problems care which chain they're on? In my view, the dapp developer will decide what the best, or the needed, security model for their dapp is, and then ideally, down the line, users won't even know which chain it's on, right? They don't have to care about the vibe. Just like you and I browse the internet every day and don't really think about TCP/IP; it just happens in the background.
Yes, yes.
That's the most ideal situation.
So by developing communities there, I'm more saying like, you know, doing more like develop education.
and growing like a grassroots developer community.
Because what's different like in developing community
versus developing a global community is there
is that local community can spot a lot of like local problems.
Because if I'm here,
if I haven't traveled to those places,
I don't know, you know, what people are building there,
what problems people are facing.
So getting developers there,
attracting valuable local developers,
growing the community there, can help get more local developers
to build applications on top of you,
and then potentially get more users.
So that's how you grow.
It's not like we are acquiring users
and then users have to choose their chain.
It's more like, at the application level,
we can help more grassroots developers
build to solve local problems.
And then users will use those applications,
with Scroll as the default chain.
So what does the roadmap look like for Scroll?
I think on the very short-term
technical roadmap,
we will use data compression
to massively reduce the transaction cost.
So anyone can expect
that transactions on Scroll
will become way cheaper in a few months.
And why it takes a few months is because
we are trying to minimize the frequency
of upgrades, because doing an upgrade is very,
very scary: you are modifying
the most important part of your protocol.
So becoming cheaper through data compression
is a very important thing we will do.
And the second thing is that we keep focusing on improving security.
So, as you mentioned, censorship resistance,
we are trying to have a solution for that.
And this kind of proposer or sequencer failure:
if, you know, we are not producing blocks,
then what will happen?
We are working on solutions for that.
That will also be solved in around three months or so;
after that, a large upgrade will happen.
At that time, you will see on L2BEAT, which is the reference for checking
Layer 2 security, a lot of red areas become green. And we are also exploring how to
add even more security, more than just what L2BEAT is talking about. Like,
people worry that the zkEVM has bugs. So we are exploring a multi-prover
system, where we can add an additional prover. If one prover turns out to have a bug,
the other prover can still be a backup to make sure
it's secure.
So security is very, very important for us.
If applications care about security, they can come to us.
That's the shorter-term goal.
And for research, we have been focusing on decentralization.
We will have some proposals published for broad discussion.
And we keep working on the next-generation
zkEVM, to make the zkEVM many times faster.
There are already some designs there.
We are still benchmarking which proof system we want to use
and how to architect the next-generation zkEVM.
So that's there as a long-term thing.
And in terms of ecosystem and community building,
I think we will get all the necessary infra,
as I mentioned, ready.
Like Chainlink just integrated a while ago,
and Uniswap has passed their governance vote,
and a lot of basic building blocks will be there
in a few weeks or so. So, you know,
come and build.
And then we will have some community initiatives for the community to interact and engage, and more calls on Discord to really understand what our community really needs.
Yeah, I think a lot of initiatives will come out. We will also start community building in the regions we select and then double down on helping builders there to build.
So those are the short-term things.
Yeah, and also in terms of security,
definitely the security council: we
will move upgrade control to some really credible
security council members, so that we are not
controlling Scroll's upgrades ourselves.
So those are the things that will happen in the short term:
becoming cheaper, much higher security,
a lot of community initiatives,
and continued research.
That's what you can expect
in the next three months or so. And
I know a lot of projects
are coming to the Scroll ecosystem,
so follow the ecosystem account
and the Scroll account. Our
main official account will be mainly for protocol
updates, announcements,
events, hackathons, all that stuff.
The ecosystem account talks about ecosystem projects,
with weekly updates
on what will happen. So,
you know, feel free to interact
and understand what's happening there.
And what's the best way to get
in touch? Are all the links on your website?
The best way should be our Twitter: Scroll_ZKP is our official account,
and then there is another ecosystem account, which is Build with Scroll or Build on Scroll,
you can check. We have that ecosystem account for following recent
ecosystem updates. If you follow it closely, there are weekly updates on
everything happening on Scroll. And we will support multi-language communities. So if you are Chinese-speaking,
Spanish-speaking, or Turkish-speaking, there will be a corresponding account
you can follow to learn more about Scroll.
And join our Discord, definitely.
Yeah.
Perfect.
Thank you so much for joining us today.
Thank you for joining us on this week's episode.
We release new episodes every week.
You can find and subscribe to the show on iTunes, Spotify, YouTube, SoundCloud, or wherever
you listen to podcasts.
And if you have a Google Home or Alexa device,
you can tell it to listen to the latest episode of the Epicenter podcast.
Go to epicenter.tv slash subscribe for a full list of places where you can watch and listen.
And while you're there, be sure to sign up for the newsletter, so you get new episodes in your inbox as they're released.
If you want to interact with us, guests or other podcast listeners, you can follow us on Twitter.
And please leave us a review on iTunes.
It helps people find the show, and we're always happy to read them.
So thanks so much, and we look forward to being back next week.
