Epicenter - Learn about Crypto, Blockchain, Ethereum, Bitcoin and Distributed Technologies - Succinct: Every Rollup Will Be a ZK Rollup! - Uma Roy
Episode Date: December 11, 2024

Polynomials are quintessential in machine learning for establishing relationships between outputs and inputs. However, there is also a field in cryptography which could not be made possible without polynomials: zero-knowledge technology. In zero-knowledge proof systems, computations are often represented as arithmetic circuits, and these circuits are translated into polynomials. This process is crucial for generating proofs that can demonstrate the correctness of computations without revealing the underlying data. The involved complexity explains the massive adoption hurdle for ZK rollups compared to optimistic ones. Succinct aims to simplify the use of zero-knowledge proofs by providing a ZKVM (SP1) that allows code written in languages like Rust to be proven in a privacy-preserving way. By doing so, it aims to lower the barrier to implementing ZK rollups and increase their adoption.

Topics covered in this episode:
Uma's background and her interest in zero knowledge tech
How Succinct's story began
ZK light clients
ZK circuits
SP1 and the RISC-V instruction set
The prover network
Use cases
ZK rollups and commoditizing ZKPs
Incentivizing provers
Succinct's business model
Supported blockchain applications
Bottlenecks in ZK adoption
Succinct metrics
SP1's competitive advantage and future roadmap
The real world impact of verifiability

Episode links:
Uma Roy on X
Succinct on X

Sponsors:
Gnosis: Gnosis builds decentralized infrastructure for the Ethereum ecosystem, since 2015. This year marks the launch of Gnosis Pay, the world's first Decentralized Payment Network. Get started today at gnosis.io
Chorus One: Chorus One is one of the largest node operators worldwide, supporting more than 100,000 delegators across 45 networks. The recently launched OPUS allows staking up to 8,000 ETH in a single transaction. Enjoy the highest yields and institutional grade security at chorus.one

This episode is hosted by Friederike Ernst.
Transcript
My hot take is that all ZK roll-up teams are actually just roll-up teams,
and they should think of themselves as roll-up teams, not ZK teams.
With the technology we put out there in SP1, ZK has become a commodity.
It is accessible to every single roll-up team out there.
And actually recently, we did an integration with OP Stack, which is traditionally an
optimistic roll-up stack, and we made OP Stack into a ZK roll-up.
The integration is called OP Succinct, and it's like live and running, and people can use it.
So if you want to prove some computation, you just write it in Rust, and then you can generate proofs with SP1.
Welcome to Epicenter, the show which talks about the technologies, projects, and people driving decentralization and the blockchain revolution.
I'm Friederike Ernst, and today I'm speaking with Uma Roy, the co-founder and CEO of Succinct.
Succinct is building a decentralized ZKVM and Prover Network.
And before we talk with Uma, let me tell you about our sponsors this week.
This episode is proudly brought to you by Gnosis, a collective dedicated to advancing a decentralized future.
Gnosis leads innovation with Circles, Gnosis Pay, and Metri, reshaping open banking and money.
With Hashi and Gnosis VPN, they're building a more resilient, privacy-focused internet.
If you're looking for an L1 to launch your project, Gnosis Chain offers the same development environment as Ethereum with lower transaction fees.
It's supported by over 200,000 validators, making Gnosis Chain a reliable and credibly neutral
foundation for your applications.
GnosisDAO drives Gnosis governance, where every voice matters.
Join the Gnosis community in the GnosisDAO forum today.
Deploy on the EVM-compatible Gnosis Chain or secure the network with just one GNO and affordable
hardware.
Start your decentralization journey today at gnosis.io.
If you're looking to stake your crypto with confidence, look no further than Chorus One.
More than 150,000 delegators, including institutions like BitGo, Pantera Capital and Ledger, trust
Chorus One with their assets.
They support over 50 blockchains and are leaders in governance on networks like Cosmos,
ensuring your stake is responsibly managed.
Thanks to their advanced MEV research, you can also enjoy the highest staking rewards.
You can stake directly from your preferred wallet, set up a white-label node,
restake your assets on EigenLayer or Symbiotic, or use the SDK for multi-chain staking in your app.
Learn more at chorus.one and start staking today.
Uma, thank you so much for coming on.
Thanks for having me.
Cool, Uma.
Can you tell us a bit about your professional and educational background before starting Succinct?
So I've always done a lot of math.
In high school and college, I did a lot of math research. During school, I studied math and
computer science at MIT, and I also got my master's there, actually in machine learning.
After school, I was working in machine learning research at Google Brain, and then after was
at a small startup doing machine learning stuff. But my interest in math is what originally
hooked me into ZK, because obviously ZK has a lot of math,
and it's a very math-heavy field.
And I found ZK through a friend,
who I met at MIT.
And, yeah, that's kind of like how my background ties into ZK.
Your foray into kind of like blockchain was mostly prompted
by your interest in zero knowledge stuff?
Yeah, that's roughly correct.
I'd always been interested in kind of like the ideas
and concepts of blockchain broadly, not just ZK.
But when I finally learned about ZK, that piqued my interest enough that I realized
I wanted to build in the space.
What was it about zero knowledge technology that kind of drew you in specifically?
Probably like a lot of other people.
I was just really fascinated by the actual technology.
I think ZK is a very magical concept.
like you can prove to someone else that a certain computation is true
without revealing what the inputs are.
That seems almost like it should be impossible.
And so it was just a puzzle to me, figuring out how that's possible,
understanding the math, understanding the cryptography.
And yeah, that's like what really drew me in.
Cool.
So you came for the tech.
Tell us about how that kind of transformed into a vision for Succinct.
Yeah, so we were building in the ZK space for a while.
So I met my co-founder at this summer residency hosted by 0xPARC, which is this, I guess, sister organization to the EF, the Ethereum Foundation.
And they were hosting this like kind of co-working over the summer together.
So I met my co-founder there.
We were working on a ZK project.
Actually, it was related to Gnosis.
And I think maybe that's one of the first times we interacted.
was through that.
But we were building a ZK bridge from Ethereum to Gnosis Chain,
and in particular, building an Ethereum ZK light client.
So that was our first foray into the ZK world, building this project.
How did the product offering evolve over time?
So Succinct was started out of that project.
So originally when we started Succinct, we were building a ZK bridge,
kind of with this endgame thesis that the future of bridging was all going to
be ZK light clients. And I still think that's actually true. So we were really focused on building
this bridge and making all bridges ZK. We worked on that for maybe like six months and actually
shipped an Ethereum ZK light client, and it got integrated into the Gnosis validator set as a ZK validator.
We shipped a few other ZK light clients. But then at some point, we realized that,
you know, ZK was too hard. Building each of those light clients took,
you know, several months and many engineers. And it was very, very complicated. And we realized that,
you know, ZK has a lot of potential, but if it was always going to be so hard to use,
it wasn't going to go anywhere. And that's what inspired us to build our ZKVM, SP1, which makes ZK
really easy. So instead of spending months making a light client and writing these
things called circuits, you just write normal code in Rust and you still can use ZK. And yeah,
that was the evolution from our early days in ZK as ZK builders to building underlying
ZK infra that makes ZK accessible to everyone.
Cool. Before we kind of dive into the ZKVM, because I have so many questions as to this,
let's briefly clarify what a ZK light client does.
So it's basically a client that can verify the state of
another blockchain on-chain, right?
So can you talk about this a little bit and why you think ZK light client bridges are kind
of the end game of bridges?
Yeah, so a light client is just a protocol to verify the consensus of a particular chain.
And when you verify the consensus of a particular chain, you can then, you know,
understand if someone has, like, burned a token on that chain, and then you can mint them a
token on the other side. So kind of like classical bridging. So a ZK light client is just
implementing that light client in a zero-knowledge proof and then verifying that proof on-chain
instead of running the whole light client functionality on-chain. And the reason you'd want to do this is,
if you want to run a light client on Ethereum, it'll be really, really expensive, because you'll have
to verify a bunch of signatures. You'll have to do a bunch of computation. Whereas if you have a ZK light
client, you just verify a proof of that computation, and that is just a lot more gas-efficient.
So basically, TLDR, a ZK light client is a way to generate a zero-knowledge proof of a
light client that makes it really gas-efficient to use in on-chain contexts for bridging.
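The cost argument above can be sketched in a few lines of Rust. This is a toy model with made-up types and a fake "signature check", not real cryptography or Succinct's actual API; it only shows where the work moves.

```rust
// Toy illustration of why a ZK light client is cheaper on-chain. The naive
// light client checks every validator signature; the ZK version pushes that
// loop into the prover and verifies one constant-size proof.

struct SignedHeader {
    signatures: Vec<u64>, // stand-ins for validator signatures
}

// Naive on-chain light client: cost grows with the validator set size.
fn verify_naive(header: &SignedHeader) -> bool {
    header.signatures.iter().all(|sig| *sig % 2 == 0) // pretend "check signature"
}

// ZK light client: the expensive loop runs off-chain inside the prover...
struct Proof {
    consensus_valid: bool, // a real proof would be a succinct cryptographic object
}

fn prove_off_chain(header: &SignedHeader) -> Proof {
    Proof { consensus_valid: verify_naive(header) }
}

// ...and on-chain you only verify the proof, at constant cost.
fn verify_on_chain(proof: &Proof) -> bool {
    proof.consensus_valid
}
```

The point is the asymmetry: the naive check touches every signature on the expensive chain, while the ZK path does that work off-chain once and leaves the chain a single constant-size check.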
I think that's a fantastic summary. So let's move on to the ZKVM part.
So you said that building those ZK light clients was really, really hard.
As someone who's never actually written a circuit before,
how do I go about it? So I know what logic I want in that circuit.
What do I do? Yeah. So to take a computation that you want to generate a ZK proof for,
you have to write it in a form that like you can actually do a ZK proof in.
And historically, that meant taking your computation and breaking it down into additions and multiplications.
So it was very primitive.
You'd basically use a specialized language or a DSL.
There's a bunch of other options out there, including tools like Circom, which maybe people have heard of,
where you would make a computational graph out of adds and multiplies that expresses your computation.
Obviously, that's, you know, not very fun, because you're
working with these very basic primitives and very basic building blocks.
And it's similar to like writing assembly or writing a very, very low-level language
instead of writing something like Python.
Why is it that in order for it to be incorporated into a ZK circuit,
you can only use additions and multiplications?
Yeah, that's a good question.
So the way ZK protocols generally work is you take your computation and then you encode it into polynomials and then you prove relationships between the polynomials.
And that's kind of where the additions and multiplications come from.
Like in polynomial world, you can do additions and multiplications and you can prove that polynomials are related to each other by like those operations, I guess.
And so it has to do with all the underlying math and cryptography of how ZK actually works.
And kind of turning your computation into something that's polynomial-friendly
is the reason you have to break it down into these very primitive relationships, if that makes sense.
I mean, obviously there's a lot more math there that I could dive into,
but maybe that's like a simplified explanation.
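The gate decomposition described above can be illustrated with a toy circuit evaluator. Everything here (the `Gate` type, the wire layout, the example computation x*x*x + x + 5) is a hypothetical sketch for intuition, not a real constraint system or any particular proving library.

```rust
// Toy arithmetization: a computation expressed only as add and multiply
// gates over "wires", the kind of primitive form that ZK protocols then
// encode into polynomials.

#[derive(Clone, Copy)]
enum Gate {
    Mul(usize, usize), // multiply two earlier wires
    Add(usize, usize), // add two earlier wires
    Const(u64),        // introduce a constant wire
}

// Evaluate the circuit: each gate reads earlier wires and appends one wire.
fn evaluate(gates: &[Gate], x: u64) -> u64 {
    let mut wires = vec![x]; // wire 0 is the input x
    for gate in gates {
        let value = match *gate {
            Gate::Mul(a, b) => wires[a] * wires[b],
            Gate::Add(a, b) => wires[a] + wires[b],
            Gate::Const(c) => c,
        };
        wires.push(value);
    }
    *wires.last().unwrap()
}

// y = x*x*x + x + 5, broken into gates:
// x*x -> w1, w1*x -> w2 (x cubed), w2+x -> w3, 5 -> w4, w3+w4 -> w5 (output)
fn cubic_circuit() -> Vec<Gate> {
    vec![
        Gate::Mul(0, 0),
        Gate::Mul(1, 0),
        Gate::Add(2, 0),
        Gate::Const(5),
        Gate::Add(3, 4),
    ]
}
```

A proof system would then show that every wire value is consistent with its gate, without revealing the wires themselves; this is exactly the "very primitive building blocks" feel Uma describes.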
I think that's fair.
Maybe we'll link to some resources in the show notes.
So the ZKVM. Is my understanding correct
that I can put whatever Rust code or Python code
or whatever into it, and it will kind of automatically turn it
into a ZK circuit?
So it is correct that you can put any Rust code into it.
So now if you want to prove some computation,
you just write it in Rust.
And then you can generate proofs with SP1.
You can't do Python because, like, you know,
Python's an interpreted language.
But, yeah, you can do Rust, which is, I guess,
one of the most common languages used in blockchain.
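For a sense of what "just write it in Rust" means, here is a sketch of the kind of program one would prove. The computation is plain Rust; the SP1-specific wrapper (entrypoint macro, io reads and commits) is shown only in comments so the snippet stands alone, and those names reflect my reading of SP1's documented API rather than anything stated in this episode.

```rust
// Ordinary Rust computation that a zkVM like SP1 can prove. In an actual
// SP1 program this would be wrapped roughly like so:
//
//   #![no_main]
//   sp1_zkvm::entrypoint!(main);
//
//   fn main() {
//       let n = sp1_zkvm::io::read::<u32>();  // program input
//       let result = fibonacci(n);            // the computation being proven
//       sp1_zkvm::io::commit(&result);        // public output bound to the proof
//   }

fn fibonacci(n: u32) -> u64 {
    let (mut a, mut b) = (0u64, 1u64);
    for _ in 0..n {
        let next = a + b;
        a = b;
        b = next;
    }
    a
}
```

The key contrast with hand-written circuits: the function body is the same code you would write anywhere else, and the proving machinery is a thin wrapper rather than a rewrite into adds and multiplies.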
It doesn't quite turn the code into a circuit.
What it does is it compiles the code to an instruction set called RISC-V.
So that's, like, similar to the EVM.
You know, you have a bunch of opcodes,
and then we prove the execution of all those RISC-V instructions.
So we basically prove that when you execute the program
and all the instructions in the program correctly,
it gives a certain output and a certain result.
So we have a RISC-V circuit that is responsible for proving the execution of this
RISC-V bytecode.
And then your program gets run through that circuit,
if that makes sense.
So we're not turning your computation into a circuit;
we're instead going through this RISC-V layer.
What's a RISC-V layer?
RISC-V is an instruction set.
So your program gets compiled to a bunch of instructions in a row.
So it might be, like, do an add, do a multiply, do a subtract, you know, then do a jump or do a branch or whatever.
So those are the underlying instructions of your program.
And then we prove that the instructions of your program are
executed correctly, like in the correct order, and, you know, end up touching memory
correctly and stuff like that, and having this final output.
For those that are familiar with the EVM, it's very similar.
Like, you take Solidity, it gets compiled to EVM bytecode, which is a bunch of
EVM opcodes.
And then when you run a smart contract, you're just executing a bunch of EVM opcodes
in a row.
So it's pretty similar mental model.
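The opcode mental model can be made concrete with a tiny interpreter. This is a hypothetical four-register toy ISA, not real RISC-V; it just shows what "execute the instructions in order and prove the result" refers to.

```rust
// Minimal sketch of the ZKVM mental model: a program is a flat list of
// instructions, and "proving execution" means proving that each step read
// its operands, updated the registers, and advanced the program counter
// correctly. An interpreter like this defines the "correct execution" that
// the RISC-V circuit enforces step by step.

#[derive(Clone, Copy)]
enum Inst {
    Add(usize, usize, usize), // regs[dst] = regs[a] + regs[b]
    Mul(usize, usize, usize), // regs[dst] = regs[a] * regs[b]
    Halt,
}

fn run(program: &[Inst], mut regs: [u64; 4]) -> [u64; 4] {
    let mut pc = 0; // program counter
    while let Some(inst) = program.get(pc) {
        match *inst {
            Inst::Add(d, a, b) => regs[d] = regs[a] + regs[b],
            Inst::Mul(d, a, b) => regs[d] = regs[a] * regs[b],
            Inst::Halt => break,
        }
        pc += 1;
    }
    regs
}
```

A real zkVM proves a trace of this loop: one constraint per step, for whatever program the Rust compiler emitted, which is why the same circuit can cover arbitrary programs.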
When you say we prove, who is we in this situation?
Yeah, like our ZKVM, SP1, is proving the execution of the RISC-V code.
Okay, cool.
So it's a fully trustless setup, right?
Kind of like it's not you guys proving that.
Okay, why hasn't this been done before?
It seems like it's pretty well understood what kinds of operations you can feed into a ZK proof,
and I assume there are pretty good cookbooks on how to turn whatever you have into that.
Why didn't what you built exist before?
Yeah, so good question.
So I think for a long time, people were really fixated on proving the EVM.
So there was a lot of ZK EVM teams.
And I mean, that makes sense because validity and all the Ethereum stuff runs an EVM.
So people are really fixated on that particular objective.
And then I think also it wasn't super clear that proving Risk 5 would be performance or doable.
Like people thought it would be really slow.
It's not an instruction set that's really designed for ZK.
It's actually a very common instruction set that's used by, you know, in many other contexts.
So people just didn't think it'd be performant.
They'd be like, sure, you could do this, but it would just be really slow.
And I think one of SP1's major innovations was that we actually showed that it could be fast for workloads,
like clients or roll-ups that people care about.
And now there's, I mean, there's a lot of people trying to prove this Risk 5 instruction set these days.
And yeah, so these days there are more people trying to do that.
I think it's worth mentioning two teams.
One is the StarkWare team.
They didn't try to prove RISC-V, but they wrote a ZKVM for Cairo, which is their instruction set,
an instruction set that's optimized for ZK, or, like, designed for ZK.
So they kind of pioneered ZKVMs in many ways way back, you know, five years ago.
And then the RISC Zero team was working on proving RISC-V execution.
And then I think our major contribution was taking RISC-V
ZKVMs and making them really, really performant, and showing that it's actually
practical to use for a lot of workloads people care about.
Cool.
So I think we've covered the basics of SP1.
The other key part of your offering is the Prover Network, right?
So tell us about that and how these two interact.
Yeah, so SP1 is, you know, an open source project.
you can run SP1 and generate proofs on your laptop.
But I like to tell people that's not really a fun time
because proof generation is very intensive.
And so to actually make it very performant and really cheap,
we have a GPU implementation of SP1
where if you generate proofs on a GPU,
it's much faster and much cheaper.
But obviously, I think most people don't want to like set up that infrastructure themselves.
And so our Prover Network is a way for, you know,
anyone to outsource proof generation.
So instead of generating proofs on your laptop, you generate them through the
Prover Network.
So you send it like, hey, I have a program.
Here's my input.
I want you to generate a proof.
And then the Prover Network provides a protocol for anyone in the world to plug in their GPU
and contribute capacity.
So it's kind of similar to mining where like in the past, anyone in the world could
you know, mine Bitcoin blocks through the proof of work mechanism.
We have a similar kind of concept or architecture where anyone in the world can generate proofs
and like earn transaction fees from the Prover Network.
How does the consensus mechanism for the Prover Network work?
Yeah, that's, like, work that's coming out soon.
So right now the Prover Network doesn't exist.
It's just, like, a concept we've put
out there, but it's not actually live or running yet.
There's no consensus mechanism for the Prover Network.
There's more like an allocation mechanism for like if there's a bunch of proofs coming in,
who gets the right to generate the proof and earn the fees for that proof.
So the allocation mechanism that we have is this like auction style mechanism that we're
going to publish some more material on soon.
and that's responsible for deciding
who gets to generate the proof
and at what clearing price
the fees get paid.
What do you envisage as the use cases
for SP1
and kind of the proofs generated
by the Prover Network?
I think I'm really excited about ZK roll-ups
in general.
I think right now there's a lot of talk
about the Ethereum
fragmentation and all these problems
that the roll-ups can't interoperate and talk to each other.
And I think the biggest flaw of the optimistic roll-up design is that withdrawals and bridging
takes seven days.
And fundamentally, I think that leads to a lot of this fragmentation and interoperability
issues.
So I think ZK is the best way we're actually going to solve that in the Ethereum ecosystem.
Every roll-up, I think, will be a ZK roll-up, and they will all be able to talk to each other
via, you know, verifying ZK proofs of each other's state.
And then there will be, like, seamless bridging and interop.
So in the near term, I'm most excited about ZK roll-ups using SP1 and turning every roll-up into a
ZK roll-up.
In the longer term, I think there's a lot of other really exciting applications, even outside
of blockchain.
So, for example, there are teams that are doing ZK email or ZK passport, where they basically
prove, you know, passport attestation information or email information in a zero-knowledge proof.
And then you can use that to bridge off-chain data on-chain.
Or you can even use it in non-blockchain contexts to prove that, like, you have
certain credentials or you're a certain nationality.
And I think that's really interesting.
Like, basically ZK attestations, ZK identity, is pretty exciting.
And then there's a bunch of other types of software broadly that can be proven in ZK,
that could benefit from verifiability, that I'm also excited about.
So talking about ZK roll-ups, I mean, one of the core tenets of their existence is that they
produce these ZK proofs.
I earlier kind of got the impression that this is infrastructure for customers
that are not specialized in ZK or kind of don't want to put in the effort to build their own
ZK infrastructure. This can't be said for ZK roll-ups, right? So if you look at the ZK
roll-ups that are currently out there, are they interested in using a commoditized
version of a ZK proof generator? Surprisingly, yes. My hot take
is that all ZK roll-up teams are actually just roll-up teams,
and they should think of themselves as roll-up teams, not ZK teams.
I think with the technology we put out there in SP1,
ZK has become a commodity.
It is accessible to every single roll-up team out there.
And actually recently, we did an integration with OP Stack,
which is traditionally an optimistic roll-up stack.
We took their state transition function,
we stuck it in SP1,
and we made OP Stack into a ZK roll-up.
The integration is called OP Succinct,
and it's like live and running,
and people can use it.
So already we've taken existing stacks
that are non-ZK and integrated ZK into them.
And so it's just inevitable that every single roll-up will be ZK.
So if you're a ZK roll-up team,
I think there's a lot to building a roll-up
that is very, very difficult and very important
that's not on the technology side.
You know, it's like BD, you know, exchange integrations, making your ecosystem great,
making your app developers' lives great, you know, attracting liquidity and stuff like that,
attracting users. And I think now that all the roll-ups can have ZK, all the roll-ups should
focus on basically that more user-facing layer of the stack and just use whatever the
easiest, best tool is for ZK, which in this case, like sticking your roll-up's
state transition function in SP1,
seems like a pretty good option.
If you look at the proofs that ZK roll-ups use,
are they in any way geared towards the use case?
Because SP1 is very general purpose, right?
So I would have assumed that
if I were to build a ZK roll-up,
I would actually build it in such a way that it's geared exactly towards my purpose,
and that would make it more performant than a general purpose machine.
That's also a really great question and observation.
So I think historically that is what ZK roll-up teams have done.
They like hand-coded this specialized custom circuit for the EVM and for their roll-up.
There's two interesting things, though.
One is actually, SP1 is very, very performant.
So we've actually seen it be sometimes even more performant than custom solutions that teams have created.
And that's because a lot of these ZK roll-up teams started a few years ago,
and the ZK tech has improved a lot in the meantime.
And also, SP1 is really fast, and we have a GPU implementation,
and we have this system of, like, pre-compiles that make it very, very efficient.
So one surprising fact is that actually sometimes SP1 can be more performant than a
custom stack for the EVM.
And I think this was even surprising to us and surprising to other people.
So that's like one interesting thing.
The other thing is, even if SP1 is a bit slower, say it's even, you know,
50% or 25% more expensive or slower.
Because you can just take normal code and run it in SP1, life in all other dimensions gets a lot better.
So because you can just reuse existing Rust tooling, like Reth and REVM and Ethereum node software,
because your developers can update their state transition function whenever Ethereum upgrades,
because you kind of have all this flexibility and you have really great tooling and it's so easy to build,
I think the costs basically, or sorry, the benefits basically vastly outweigh the costs.
and so I think all teams are basically moving towards a ZKVM-based model.
Okay.
Let's talk about the Prover Network in a little more detail.
So in order to become a prover, I need specialized hardware.
I get that, because you're gearing this toward GPUs.
What else do I need to fulfill to become part of the Prover Network?
And how do you make sure that the Prover Network remains decentralized and incentivized effectively?
Yeah, so for the Prover Network, as you mentioned today, people need like GPUs to join and run proofs.
And again, our Prover Network, like, we've written up a design for it and we're going to be publishing that soon.
And we'll probably have like a TestNet in the upcoming months for it.
and so we'll be able to test out a lot of these ideas and make sure that they actually work.
The mechanism we designed for the Prover Network in terms of allocating who gets what proof and stuff like that
was specially designed to keep decentralization in mind.
So normally, to decide who gets a proof, you could just allocate it to, you know,
the person who offers the proof the cheapest.
That's kind of a strawman mechanism.
Our mechanism is specially designed so that it still maintains decentralization and prevents
prover concentration.
So more concretely, there's this parameter that you can tune up or down.
And depending on the value of that parameter, sometimes, with some probability,
provers who are a little more expensive than the cheapest prover will still get selected to generate a proof.
And that's important for making sure that a lot of people can participate, and
it just doesn't concentrate towards, like, one really big, dominant prover.
So yeah, we kept that pretty explicitly in mind when designing the mechanism for the
Prover Network.
Yeah, that's probably the most important thing for it.
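One way to picture the tunable parameter is a weighted lottery over bids. The sketch below is my own toy model (a softmax over negative prices with a "temperature" knob), not the mechanism Succinct plans to publish; it only illustrates how one parameter can trade cost against concentration.

```rust
// Toy allocation model: each prover bids a price, and selection probability
// decays with price. With temperature near zero the cheapest bid always
// wins (pure cost optimization, maximal concentration); higher temperatures
// give slightly pricier provers a real chance, spreading work out.

fn selection_probabilities(bid_prices: &[f64], temperature: f64) -> Vec<f64> {
    // Cheaper bids get exponentially larger weight.
    let weights: Vec<f64> = bid_prices
        .iter()
        .map(|price| (-price / temperature).exp())
        .collect();
    let total: f64 = weights.iter().sum();
    weights.iter().map(|w| w / total).collect()
}
```

With bids of 1.0, 2.0, and 3.0 at temperature 1.0, the cheapest prover is most likely to win but the others retain nonzero probability, which is exactly the anti-concentration property described above.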
This is a very noob question, but is the output of the ZKVM deterministic?
So kind of like if I put in some sort of rust code, will I always get the same outcome?
Yes.
Or rather, people should only prove programs that are deterministic.
So yes, that's true.
Okay.
So I can easily verify that the proof that I'm getting back from the Prover Network is valid and correct, right?
Yeah.
The whole point of ZK is that you can verify that the proof and the computation was done correctly in much less time than it takes to redo the computation.
So I think that uniquely enables the Prover Network.
There's like no additional trust assumptions.
There's no, yeah, there's no additional like trust vectors.
You send it your request for a proof.
Someone generates a proof.
And then you can verify that the proof is valid.
And then the money gets like sent out to them.
after the proof gets verified.
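The verify-cheaply-then-pay flow rests on an asymmetry that a classic toy example captures: factoring versus checking a factorization. This is only an analogy for the cost gap, not a zero-knowledge protocol; real ZK proofs additionally hide the witness.

```rust
// Toy analogy for why the network needs no extra trust: finding the factors
// of n takes a search, but checking a claimed factorization is a single
// multiplication. Real ZK proofs give a similar "verify much cheaper than
// re-execute" property for arbitrary programs.

// The prover's side: expensive trial-division search for a nontrivial factor.
fn find_factorization(n: u64) -> Option<(u64, u64)> {
    (2..n)
        .take_while(|d| d * d <= n)
        .find(|d| n % d == 0)
        .map(|d| (d, n / d))
}

// The verifier's side: one multiplication, regardless of how hard the search
// was. Payment can be released only after this check passes.
fn verify_factorization(n: u64, p: u64, q: u64) -> bool {
    p > 1 && q > 1 && p * q == n
}
```

Because verification is cheap and objective, the requester can pay the prover only on a passing check, which is exactly why no additional trust vectors appear.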
So I assume Succinct's business model is also somewhere in this incentive layer, right?
Yeah.
Yeah, that's exactly right.
I think our thesis is that people will want to use the Prover Network
because it'll be much cheaper, much more efficient,
and much easier than setting up their own infrastructure.
I think in practice, people that use SP1 today use our cloud proving service
instead of running it themselves.
We basically offer,
we like to call it our Prover Network beta,
but really it's just us generating proofs
and sending them back to people.
And most people use SP1 through that,
because setting up your own infrastructure really sucks.
So we imagine that, you know,
people will want to generate proofs and pay for that.
And, yeah, you know,
those fees will be directed to the Prover Network.
I kind of imagine it actually similar to DA,
where today, if you're a roll-up,
you pay Ethereum for settlement.
You pay Ethereum for DA because you're posting your transactions there.
And I imagine in the future that part of each transaction will pay for proving.
And that will go towards our Prover Network.
When you say that goes towards the Prover Network,
are you yourselves a privileged party in that Prover Network?
Do you get some basis points of all transactions that are
settled on that Prover Network, or are you just one of the provers yourselves?
Oh, like our company?
Yeah.
No, no, I don't think we'll be a privileged party.
Like, it's going to be a decentralized, neutral protocol, and we will just participate in it,
probably as, like, a prover of last resort, basically.
How does it work for you in business terms, though?
So if you're only a proving participant in your network, and you get the same fees
that everyone else who hasn't developed either SP1 or the Prover Network also gets,
how is that worthwhile for you guys?
Yeah, well, I think the network will also have some, like, marketplace fee
for, you know, facilitating this interaction.
Okay.
So that way your company somehow is a privileged party,
in that you're offering the marketplace.
And if you're working under the assumption that these
ZK proofs will be commoditized completely, then obviously the fees will be
raced to the bottom.
So you need to be able to participate in kind
of the entire volume to make it worth your while, right?
Yeah, I think, yeah, exactly.
I think it's like any other protocol.
For example, Lido facilitates this interaction between people who want to stake
and, like, the professional node operators, and they have a 5% fee to the node operators,
5% fee to their protocol.
I imagine it being pretty similar.
Okay.
And so what blockchain applications are currently supported
by your offering? You just talked about the OP Succinct offering. Is there anything else, or anything
you're particularly excited about that's launching soon? Yeah. So two bridges actually run on SP1.
The Celestia Bridge and the Avail Bridge are, like, SP1 programs that run through
SP1, and are ZK light clients actually on Ethereum that, you know, secure a pretty
big ecosystem. Our original dream of every bridge being a ZK bridge came true, but it is through
SP1 instead of, you know, our original approach of handwriting every single program.
So that's pretty cool to see. I'm very excited about the ZK roll-ups and OP Succinct.
OP Stack is a very popular roll-up stack. A lot of people use it, like Base,
Unichain, World Chain as well.
So it's kind of exciting to one day have a path towards all those ecosystems becoming ZK
roll-ups.
There's also a lot of other people building very cool stuff.
Some in the Bitcoin ecosystem.
They're building, like, Bitcoin roll-ups or Bitcoin L2s, some Bitcoin bridging.
There's even some people on Solana that are building with ZK with their new network
extensions. That's also quite interesting. There's all these application-specific blockchains also
for like trading that I think are quite interesting. Basically, I have this thesis that in the long term
for very, very popular apps like trading, it doesn't make sense to have the overhead of a general
purpose execution layer. You're going to have these like specialized roll-ups that are only built
for one application and are hyper, hyper optimized for it.
And some people are building basically these decentralized exchanges
that are hyper optimized for trading with SP1.
So that's also another cool kind of category.
And then there's stuff outside of blockchain like ZK email.
Someone made this proof of driver's license application with SP1 that was cool.
Like you verify your driver's license and you can reveal information about that with a ZK proof.
So the ZK identity stuff is also really interesting to me.
Yeah, super interesting.
What do you see as kind of the main bottlenecks in ZK adoption?
I think right now with SP1, ZK is actually finally really easy.
Like in a lot of the protocols we talk to and work with, for them using SP1 is often the simplest part.
It's really building the rest of their protocol, that's hard.
You know, building the on-chain smart contracts, even designing your protocol logic,
building out the front end, stuff like that.
So I think there's this new generation of like ZK native protocols that are building
with ZK from day one.
And actually ZK is the easiest part.
And getting their entire protocol to main net is kind of like the biggest blocker to
them wanting a lot of proofs.
I also think there's some kind of technical debt and emotional attachment to the old way of doing things in the ecosystems as well.
So for example, you know, I think tomorrow every rollup should be a ZK rollup, and all the OP Stack chains should immediately use ZK, because it's cheap, it's really fast, it works. But, you know, I think a lot of people are used to the optimistic technology, or in many cases they don't even run fault proofs at all, so they're just using a multisig.
And, you know, that's kind of easy enough, you know, while things are good.
Obviously, it's really bad when something breaks, but that happens rarely.
So I think ZK is kind of similar to decentralization in a way. It's like eating your vegetables. Like, you kind of have to do it. Decentralization is really important when something goes wrong, or there's, you know, some breakdown in trust of the centralized actors. And that's when people are happy they built a decentralized system, right?
So I think ZK is similar where it's like, oh, maybe using the multi-sig is fine, 99% of the time,
but then there's this catastrophic tail risk.
And you really wish you actually did have full validity proofs in those situations.
And so I think as an ecosystem, one of the biggest things today is like we're all just, you know,
kind of living in the happy path and living in this world where we don't worry about those scenarios.
And I think in general there needs to be more of a push towards actually having decentralized, verifiable systems that fulfill the promise of all the stuff that we're doing.
Yeah, no, I agree.
I think people don't like thinking about the failure modes
kind of as long as things are good, right?
Yeah, they don't think about the failure modes
until it's like too late.
Yeah, absolutely.
What kind of metrics do you look at for Succinct to gauge your success?
So kind of like what are you optimizing towards?
I think one really concrete metric is just the number of proofs, and also, you know, the amount of computation. So we call it cycles, like how many RISC-V instructions we're proving. So that's a really fun metric to look at.
I think recently, like in the past month, we proved trillions of cycles, and that's been going up and to the right very quickly, especially as we get more and more rollups using it, because they use a lot of cycles.
The other metric I think I personally care a lot about is like kind of like the quality of the
usage. So it's like are we meaningfully moving the ecosystem towards more verifiable systems
and like, you know, more verifiable applications in general?
So for me, OP Succinct was really exciting, because it was an opportunity where you can now finally see a path for every rollup on Ethereum to be a ZK rollup. And even though that doesn't happen today, it's kind of the first step, and then you can connect the dots.
So yeah, that's like another metric. It's like, oh, can this thing be generalized to, like, N more instances? So SP1's open source, right? Yeah.
Yeah.
So as a prover network, what's your moat? Kind of like, what stops me from just setting up a second prover network, which would kind of just ensure there's a fee race to the bottom?
Yeah, that's a good question.
I think integrating with the prover network, there's a developer integration cost and a developer switching cost. And then also, like, all the provers will be connected to our network and generating proofs with our network. And so if you want to access all the provers in our network, then even if another competing network comes along, you're not going to, like, want to switch over.
So I think the network itself kind of aggregates both the supply side, so like all the provers, and then also all the demand side, which is like all the proof requests.
And then with that, there becomes this flywheel where, you know, the more demand there is, the more supply gets built out. And then the more supply gets built out, proofs get cheaper, so hopefully there's more demand.
And so I think it would be really difficult for another prover network to come in and compete, and, like, make sure all the provers are integrated and all the demand also gets routed there, and stuff like that. Which way do you see Succinct developing? So kind of like, if you think about Succinct in the next, say, three to five years or so, something inordinately long in blockchain terms, where are you going? I think for us, we just want ZK to be as widely adopted as possible, and to make, you know, as many things verifiable as possible.
I think verifiability is this, like, very fundamental new primitive.
And obviously in the context of blockchains,
it's very powerful and needed because that's, like, one of the fundamental premises
is that you're building these, like, verifiable, trustless systems.
And for systems to be trustless, they have to be verifiable.
So I think it's very fundamental to blockchains.
So I really hope it gets adopted in all the blockchain contexts that it should be.
So that's like roll-ups, bridges, stuff like that.
But then, yeah, beyond that, I think I'm also really excited about seeing these verifiable systems more in the real world as well, beyond blockchains.
And I think that will be very, very interesting.
And I think that's uniquely kind of enabled by being able to write normal code.
Like, I think it's hard to imagine normal people adopting this ZK stuff if they had to go through all the pain of writing custom circuits and doing all this cryptography and stuff like that. But now that you can just take normal Rust code and generate proofs of it, hopefully that enables a much larger class of applications to become verifiable.
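To make that concrete, here's a toy sketch of the kind of ordinary Rust a zkVM guest program might contain — a Fibonacci loop is a common "hello world" workload in zkVM demos. This is just an illustration of the point being made, that the program is plain Rust rather than a hand-written circuit; the actual SP1 proving harness and SDK calls are deliberately omitted.

```rust
// Ordinary Rust logic of the kind a zkVM like SP1 can prove.
// (The SP1 guest/host harness is omitted here; this only shows
// that the program itself is plain Rust, not a custom circuit.)

/// Compute the n-th Fibonacci number with wrapping arithmetic.
fn fibonacci(n: u32) -> u64 {
    let (mut a, mut b): (u64, u64) = (0, 1);
    for _ in 0..n {
        let next = a.wrapping_add(b);
        a = b;
        b = next;
    }
    a
}

fn main() {
    // In a zkVM setting, this computation would run inside the VM,
    // and the proof would attest that the output was computed
    // correctly without the verifier re-running the loop.
    let out = fibonacci(10);
    println!("fib(10) = {out}");
}
```

The point is that nothing here is ZK-specific: the same function could run natively or inside the zkVM, which is what lowers the barrier compared with hand-writing circuits.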
What new features or improvements can we expect?
Or, I mean, I assume the stack's not ossified, right? Kind of like, it'll keep developing.
Yeah.
We're always trying to make it faster and cheaper.
So our main series of improvements is on the proof system side,
kind of innovating and figuring out how to make the underlying cryptography better.
Also on the engineering side, there's a lot more optimizations that we're always working on
to make it faster and cheaper.
So yeah, on the SP1 side, that's like our North Star: to make it as fast and as cheap as possible.
And then I think a really big upgrade will be finally having an actual Prover Network,
where anyone in the world can participate in generating proofs.
And hopefully that also enables SP1 to get better and better as well, because now it's not
just us generating proofs.
It's like harnessing humanity's collective intelligence and optimizing the system.
So I kind of view it as similar to mining, where in Bitcoin you had this proof-of-work game, or proof-of-work puzzle, and there became this global-scale hardware buildout centered around it. Right. You had so many mining companies that were developing, you know, their custom ASICs and then also deploying them in data centers across the world. And it became this global movement to optimize computing a SHA hash function.
And our hope is kind of that by having this prover network and also setting up a very similar game and objective, you know, it'll be like this collective global race to also improve SP1.
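The proof-of-work puzzle mentioned in the mining analogy can be sketched in a few lines: grind through nonces until a hash clears a difficulty target. Real Bitcoin mining grinds double SHA-256; the sketch below substitutes Rust's standard-library hasher purely to stay dependency-free, so it illustrates the shape of the game, not the actual protocol.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Toy proof-of-work loop, illustrating the "optimize one hash
// function" race described for Bitcoin mining. std's DefaultHasher
// stands in for double SHA-256 to keep the sketch self-contained.
fn hash_attempt(data: &str, nonce: u64) -> u64 {
    let mut h = DefaultHasher::new();
    data.hash(&mut h);
    nonce.hash(&mut h);
    h.finish()
}

/// Search for a nonce whose hash falls below a difficulty target.
/// Lower targets mean more attempts on average, so the race is won
/// by whoever can evaluate the hash function fastest.
fn mine(data: &str, target: u64) -> u64 {
    (0u64..)
        .find(|&nonce| hash_attempt(data, nonce) < target)
        .expect("some nonce clears the target")
}

fn main() {
    // Low difficulty so the search terminates quickly.
    let target = u64::MAX / 1000;
    let nonce = mine("block header", target);
    println!("found nonce {nonce}");
}
```

The hope described here is that proving SP1 cycles becomes the analogous objective function: an open game where competing on speed and cost improves the whole system.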
If SP1 and ZK tech get integrated into all the tech that you're hoping they will get integrated into, how will the internet be different in five or ten years from how it is today?
Yeah, I think today, like, if you want verifiability on the internet, there's two options.
One is you transact with, like, a decentralized system like Ethereum or something like this, right?
You have this very expensive, like, decentralized consensus.
Or you have regulation.
So for example, like, if I want to get a credit score, there's a credit scoring agency that is responsible for, you know, computing my credit score based on a bunch of statistics.
And then they're regulated by the government to make sure that they're like not doing anything racist or bad.
And then, yeah, we just have trust and verifiability through basically regulation or trusting centralized intermediaries.
So I think in a world where verifiability is very easy and very ubiquitous, you could imagine instead that the same credit scoring company has to post a ZK proof that they updated everyone's scores in a correct way, according to a fair model that we all agree is the correct model.
And instead of having regulation, you basically verify a cryptographic computation, and you can make sure that people are doing the correct things and, like, you know, respecting the properties you might want of their systems.
So I think that's pretty exciting.
You can just have verifiability in a lot more places
and you can have it enforced with code
instead of with like regulations.
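The credit-scoring example boils down to a simple statement: the published score equals an agreed-upon model applied to the scorer's inputs. Here's a toy sketch of that statement — the struct fields and model weights are entirely hypothetical, and a real deployment would wrap this check in a ZK proof so the raw inputs stay private; the code only shows what the proof would attest to.

```rust
// Toy sketch of the credit-scoring example from the discussion.
// In practice the agency would publish a ZK proof of this check
// without revealing the inputs; the model and field names below
// are purely illustrative.

struct CreditInputs {
    on_time_payments: u32,
    missed_payments: u32,
    utilization_pct: u32, // 0..=100
}

/// The publicly agreed scoring model (hypothetical weights).
fn agreed_model(inp: &CreditInputs) -> i64 {
    300 + 5 * inp.on_time_payments as i64
        - 40 * inp.missed_payments as i64
        - 2 * inp.utilization_pct as i64
}

/// The statement a verifier checks: the published score equals the
/// agreed model applied to the (in reality, private) inputs.
fn score_is_valid(inp: &CreditInputs, published_score: i64) -> bool {
    agreed_model(inp) == published_score
}

fn main() {
    let inp = CreditInputs {
        on_time_payments: 24,
        missed_payments: 1,
        utilization_pct: 30,
    };
    let score = agreed_model(&inp);
    println!("score = {score}, valid = {}", score_is_valid(&inp, score));
}
```

With a zkVM, `score_is_valid` is exactly the kind of check that could run inside the proof, replacing after-the-fact regulatory auditing with a verification anyone can perform.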
So kind of like if you look at metrics that pertain to the average user,
how would they notice that?
Yeah, that's a good question.
I guess in this credit scoring example,
I think they would probably notice systems becoming, like, more transparent. So you have just more transparency on what's actually going on in all these currently regulated entities.
And right now you're, like, trusting that the government is doing a reasonable job regulating.
But in the future, you can check yourself.
So I think you just get a lot more transparency. And then I think if you have more verifiability, then, you know, it should also be a lot more efficient, right? And I think that's also one of the theses of crypto broadly, right,
is like, okay, now we can have this like trustless exchange. And now we can have like trustless exchange of
like any asset. And that's kind of what lets us have like all the millions of assets we trade today on
chain like all the meme coins and like NFTs and stuff like that.
And so you can imagine that, like, oh, now that verifiability is really easy,
maybe we have more expressive conditions that we can use in our day-to-day lives,
if that makes sense.
I mean, I guess this is a bit abstract.
But I think you just have more expressivity and then that leads to more efficiency
because you can just like do more complex stuff.
So tell us about the timeline for the prover network. When is the testnet going to come live? How can people find out more about it? How can they be prepared to take part as provers?
I think the kind of architecture and design will be out in, like, the next few weeks. And then after that, the testnet will probably be out in, like, the next few months. We already have an internal version working that we're testing out, so basically we have, like, a devnet internally that we use. But yeah, the testnet, where people can participate in full form, will be out in, like, the next few months.
And how do people stay abreast of developments?
Where should they follow you on Twitter or where do we send them?
Yeah, Twitter's great.
Just the Succinct Labs account, or my personal account is also a good resource.
Cool.
Fantastic.
Thank you so much for coming on today and telling us about the Prover Network and SP1.
It's been a pleasure.
Yeah, thanks for having me.
