Bankless - Ethereum's Last Big Upgrade: The zkEVM | Ansgar Dietrichs
Episode Date: February 23, 2026

Ethereum’s next big leap might not look like a single “flip the switch” moment—but it could change how the chain verifies everything. In this episode, Ansgar Dietrichs comes back to unpack the... ZK EVM: why “re-executing every block” has been Ethereum’s hidden scaling tax, how real-time proofs finally make a different verification model viable, and what it would take to transition safely without sacrificing the verifiability that keeps Ethereum credibly neutral. They explore the three true bottlenecks of blockchain scaling (compute, IO, bandwidth), the roadmap from optional proofs to mandatory proofs, and why client diversity could look radically different in a ZK-native future.

---

📣 SPOTIFY PREMIUM RSS FEED | USE CODE: SPOTIFY24
https://bankless.cc/spotify-premium

---

BANKLESS SPONSOR TOOLS:
🔮 POLYMARKET | #1 PREDICTION MARKET
https://bankless.cc/polymarket-podcast
🪐 GALAXY | INSTITUTIONAL DIGITAL FINANCE
https://bankless.cc/galaxy-podcast
⚡ EUPHORIA | REAL-TIME ONE-TAP TRADING
https://bankless.cc/euphoria
🌐 BRIX | EMERGING MARKET YIELD
https://bankless.cc/brix
🏅 BITGET TRADFI | TRADE GOLD WITH USDT
https://bankless.cc/bitget
🎯 THE DEFI REPORT | ONCHAIN INSIGHTS
https://bankless.cc/TDRpro

---

TIMESTAMPS
0:00 Intro
0:43 Ethereum’s Biggest Upgrade
4:35 The Core Idea: Verify Blocks Without Re-Executing
10:40 From Bitcoin’s “Verify Cheaply” to Verifying Full Execution
16:22 Cryptography 2.0: Proving Arbitrary Computation
22:56 Scaling All 3 Constraints: Compute, IO, Bandwidth
32:33 Why Ethereum Has Been “Slow” by Design
38:29 3× Per Year: Scaling Now, Not Someday
41:28 ZK Isn’t About Faster Blocks (But Speed Still Improves)
46:33 Rollout Plan: Optional Proofs → Mandatory Proofs
49:59 Dependencies: Block-in-Blobs, Repricing, New State Trees
54:14 Security Reality Check: Performance → Security → Production
1:01:24 Client Diversity in a ZK World
1:10:51 Timeline: ZK Ethereum, Around 2030
1:16:31 Second-Order Wins: L2 Bridging and Beyond Crypto
1:20:49 Closing: Build the Boring Infrastructure, Enable the Apps

---

RESOURCES
Ansgar Dietrichs
https://x.com/adietrichs

---

Not financial or tax advice. See our investment disclosures here: https://www.bankless.com/disclosures
Transcript
ZKVM is this fundamental insight that what you can do is you can basically allow nodes to verify that a block followed all the rules without having to re-execute the block.
It's a very non-intuitive thing, right?
A blockchain by its nature is a very symmetrical thing.
Every node basically does the same thing.
Of course, you have block producers, but then every node kind of has to download, re-execute.
You're duplicating the effort across the network.
And now you're jumping, through this very fancy cryptography, into this world where you still have the same effort to build a block, but then verification in a way is effortless. It has this magical compression element to it.
Bankless Nation, I'm here with Ansgar Dietrichs. He's a researcher at the Ethereum Foundation.
We're going to talk about the zkEVM today on the show. Ansgar, welcome to Bankless.
Hey, great to be here again.
Pretty ambitious subject, Ansgar.
Ethereum has had this history of very big forks, hard forks that have upgraded Ethereum
from this early primitive proof of concept where it started in 2015 to what it is today,
which is fundamental infrastructure, the backbone of internet money and internet finance.
We had the merge, which took Ethereum from proof of work to proof of stake.
We had EIP 1559 that upgraded ether economics and transaction user experience.
There's also 4844, which just enabled Ethereum's roll-up environment to become its best self.
With each of these forks, they all represented this rallying cry for the Ethereum community. They were this kind of grand unifying force of attention by the Ethereum
community, and it allowed Ethereum itself to command attention from the rest of the world.
The rest of the world paid attention to Ethereum when Ethereum had these forks, these incoming
forks. Ethereum was just loud. And I think these kind of represent some of Ethereum's best moments, when Ethereum has these kind of cultural Schelling points for technological upgrades to what we consider in the Ethereum community to be critical social infrastructure. Now, I think, Ansgar, and I want to suss this topic out with you, that there is
another fork on the horizon.
It's not soon.
It's not this year.
It's likely not next year either.
But nonetheless, it is there on the horizon.
And I think it deserves attention.
I think it deserves the treatment that the Ethereum community has given previous forks.
And I think, in addition to all of the valuable things that we got from the three forks that I just mentioned, this one is actually the biggest upgrade that Ethereum will ever experience, because it relates to users more than any of the three forks in the past. And that is the fork that introduces the zkEVM to Ethereum. Now, Ansgar, these are the sentiments that I
want to start this podcast off with. Before we get into what is the ZKEVM and all the technical
details about it, I just want to give those sentiments to you and have you reflect upon them before we
kind of dive into the technicals. I personally share your excitement on this topic. I really think
that it's one of those changes that is really Ethereum at its best. It's one of those really ambitious
technical projects that I think Ethereum is in a unique position to deliver. It will have a huge
impact primarily through scaling, but in many ways, I'm sure we will talk about all of this.
And I really think it's something we can look forward to, something we can be proud of. And yeah, I'm excited to talk about the details.
I will say, by the way, you said hard fork, and the interesting thing here is, similar to
if you think back at the merge, right?
We had first the launch of the beacon chain, which was one moment in time and then we
later on had the merge itself, like two separate moments in time.
I think similarly, maybe even to a larger degree with the zkEVM, as we'll discuss, it actually
has this nature of, it's an ongoing transition that is basically about to start.
Then we will have the main hard fork, and then it will continue after.
so it's much more like an ongoing transition.
But yeah, let's dive in.
So it is the introduction of an era of Ethereum
rather than an acute hard fork.
And I think the zkEVM era has the potential to be Ethereum's best era,
because of what the zkEVM does for Ethereum.
So let's stop hyping it up
and start to get into the technical details.
What do we need to know about what a zkEVM is?
What is it?
And then we can talk about why it's so significant to Ethereum.
Yeah, so I think, you know, to understand this, you really have to start from the problem statement, right? So the zkEVM really arose in the context of scaling, and basically
the fundamental point is that a blockchain, if you run a blockchain, you have these three
primary constraints. You have the data, right? You have to first, like any new block you create,
it has to get to the user. Then you have the IO: you have to go to disk, you have to get all the data you need to actually verify the block. And then you have the actual verification, the execution, the compute, right?
So those are like the three main constraints,
the bandwidth, the IO,
and the compute.
That's any blockchain; no matter the design,
those are the main constraints.
And so if you want to scale this,
you can just do the thing where you take that
and you just scale it up.
And we'll talk about this in a bit.
That's actually, to some degree,
what we're doing in the short term.
And that's what many other chains have been doing.
That's a very natural thing.
But you do run into limits.
You do run into tight limits.
And so the zkEVM, it comes from the cryptography side, these SNARKs, these zero knowledge proofs, and it is this fundamental insight that what you can do is basically allow nodes to verify that a block followed all the rules without having to re-execute the block. And that's, again, a very non-intuitive thing, right? A blockchain by its nature is a very symmetrical thing. Every node basically does the same thing. Of course,
you have block producers, but then every node kind of has to download, re-execute,
you're duplicating the effort across the network.
And now, through this very fancy cryptography,
you're jumping to this world where you still have the same effort to build a block,
but then verification in a way is effortless.
It has this magical compression element to it.
And then specifically what's so important in the L1 context is the real-time element to it.
So a zkEVM just allows for this compression. And, for example, many listeners, I think, will already be familiar with the concept of ZK rollups, right?
So those have been around for a while, and that actually was a huge first jump in this technology,
which just allowed for this compressed ZK verification in the first place.
But so far, this is done in an asynchronous way.
So meaning you have your L2 blockchain that, you know, it's its own chain, basically,
and it keeps progressing.
And then afterwards, with some delay, you know, up to several hours, you come and you basically compute these proofs over a long time, and then you bring them to the chain.
And what now is the second huge jump here is to go from this very asynchronous, delayed process to a proving and verification loop, from block creation to proving to verification, that all happens at the same speed of the blockchain,
synchronously. So like within a single Ethereum slot
right now that's 12 seconds, we'll bring that even further down.
You have this entire loop,
closed loop within that short amount of time.
And so basically that's many orders
of magnitude of performance improvement.
And that really is what unlocks all of these huge gains for the L1.
Galaxy operates where digital assets and next generation infrastructure come together,
serving institutions end to end.
On the market side, Galaxy is a leading institutional platform,
providing access to spot, derivatives, structured products, DeFi lending, investment banking,
and financing.
With more than 1,600 trading counterparties,
Galaxy helps institutions navigate every phase of the market cycle.
The platform also supports long-term allocators through actively managed strategies
and institutional grade staking and blockchain infrastructure.
That scale is real.
Galaxy has over $12 billion in assets on the platform
and averaged a $1.8 billion loan book in late 2025,
reflecting deep trust across the ecosystem.
Beyond digital assets, Galaxy is also building infrastructure
for an AI-powered future.
Its Helios Data Center campus is purpose-built for AI and high-performance computing,
with more than 1.6 gigawatts of approved power capacity,
making it one of the largest sites of its kind.
From global markets to AI-ready data centers,
Galaxy is serving the digital asset ecosystem end to end.
Explore Galaxy at galaxy.com slash bankless or click the link in the show notes.
Euphoria brings one tap trading to the palm of your hand.
Built on MegaETH, Euphoria takes real-time price charts and projects them over a grid of squares.
You tap the squares that you think the price will enter in just five to 30 seconds in the future.
If the price goes into that quadrant, you can pocket anywhere between 2 and 100x your trade.
No other application helps you trade faster and with more leverage on market driving events like
FOMC meetings, presidential speeches, or global macro events.
Thanks to MegaETH's real-time blockchain, Euphoria is the way to get real-time
price interactions with the market.
On Euphoria, you'll be able to compete with friends using Euphoria's real-time social
trading experience, allowing you to go head-to-head with your friends.
A great party trick if you project the app on a TV.
It'll be like the Mario Party of derivatives.
To trade on Euphoria, people can deposit stable coins from any chain or do direct fiat transfers, and everything gets converted into MegaETH's native stable coin, USDM, in the background. Check it out at euphoria.finance and download the app or find it in Telegram as a mini app.
In 2024, emerging markets generated over $115 billion in annual yield for investors,
with yields ranging between 10 to 40%. These are some of the highest, most persistent yields on
earth. The problem: DeFi can't access them. Brix changes this. Built on MegaETH, Brix takes emerging market money markets and sovereign carry and turns them into composable primitives you can access straight from your wallet.
DeFi investors earn 3 to 6% on stable coins and T-bills, while institutions have been harvesting 10 to 50% yields backed by sovereign monetary policy.
Brix connects these worlds with institutional-grade tokenization, local banking rails, compliance across jurisdictions, and real-time stable coin settlement.
Brix does the heavy lifting so DeFi can finally access real collateral and structured products on top of real-world yield.
Even the best carry trades can be within reach.
Brix brings DeFi's promise to the emerging world and brings the emerging market yield to your wallet.
Let the yield flow with Brix.
Maybe going back to just like what makes a blockchain a blockchain,
Bitcoin had this fundamental insight of the way that we get rid of a leader in a blockchain
is that everyone checks the legitimacy, the authenticity, the correctness of everyone else.
And so when some Bitcoin miner mines a block, when it finds the correct hash and it proposes that block,
everyone else in a network doesn't trust that leader.
They re-execute all of the same work to verify it for themselves.
And that's how Bitcoin discovered the way to have a decentralized network: everyone's checking everyone else.
And that re-execution model has just been the status quo for all blockchains.
Everyone re-does all of the work.
And the way that that impacts blockchains, all blockchains to this day, is that each one is kind of hamstrung by the slowest node in the network. Or at least there is some computational requirement that every blockchain has: if you aren't at least this fast, you can't keep up with the network, because you can't keep up with executing everyone else's work. And now, you know, some blockchains have
different opinions as to how high that requirement should be. Bitcoin's is very low. Ethereum's has also been very low, because we want to be decentralized. You know, as you said,
like some chains like Solana or other very fast chains have had a higher opinion as to the
computational requirements it takes to do the re-execution. But nonetheless, all blockchains to
this day are re-executing all of the same work and it's redundant. It seems unnecessary.
It seems like is there a way where we can not do all of that extra work and still have a
blockchain? And parallel to that, as you said with the Ethereum layer 2s, what we understand is that there is a way to not do this, and that is with ZK proofs. So in addition to the technological progress of blockchains as a whole, we can make them more efficient, we can, you know, juice some of the throughput. But on a parallel path,
there are cryptographic algorithms that instead of allowing or forcing everyone to do the
re-execution, you can simply verify a cryptographic hash, a cryptographic proof.
And that part is trivial.
It's easy to verify.
It's hard to produce in the same way a block in a blockchain is hard to produce,
but it's trivial to verify the correctness of a cryptographic proof.
And that's kind of the trick.
That's where we remove the re-execution.
A great Elon Musk quote here is,
the best part is no part at all.
And what a cryptographic proof does is it removes the whole part of re-execution.
So blocks in a blockchain get executed once, and then no one has to actually re-execute them.
They can just trivially verify it, which allows for a lot of redundant work to get removed
from the system, and that allows for just work being constrained down to one block producer.
And then everyone else is just like, thumbs up, that is correct.
And we really take the brakes off of the blockchain system.
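One pre-ZK way to see this produce-once, verify-cheaply asymmetry is a Merkle inclusion proof, which Bitcoin and Ethereum already use for transaction data: recomputing the root touches every leaf, while checking a proof touches only a logarithmic number of hashes. This is an illustrative Python sketch (function names made up for the example), not a ZK proof:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Rebuild the whole tree: touches every single leaf (the 're-execution' analogue)."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                    # duplicate the last node on odd-sized levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves: list[bytes], index: int) -> list[tuple[bytes, bool]]:
    """Collect the sibling hashes on the path from leaf `index` up to the root."""
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        proof.append((level[index ^ 1], index % 2 == 0))  # (sibling hash, sibling-on-right?)
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(leaf: bytes, proof: list[tuple[bytes, bool]], root: bytes) -> bool:
    """Check inclusion with only O(log n) hashes; no need to see the other leaves."""
    node = h(leaf)
    for sibling, sibling_on_right in proof:
        node = h(node + sibling) if sibling_on_right else h(sibling + node)
    return node == root
```

With a million transactions, `merkle_root` hashes all of them, while `verify` needs about 20 sibling hashes. ZK proofs push the same asymmetry much further: constant-size verification of arbitrary execution, not just data inclusion.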
Now, the reason why Bitcoin wasn't built like this in the first place, the reason why Ethereum
wasn't built or any other blockchain wasn't built like this in the first place,
was, you know, that technological progress along cryptographic proofs also needed to mature.
Maybe you could like take everything that I just said and run with it, but also talk about
just like the technological parallel path of cryptographic proofs as they've been progressing
alongside blockchains.
Yeah, absolutely.
So actually, just to start with where you started, with the Bitcoin example, because some listeners might have heard of this and might have been like, hey, actually, isn't there this asymmetry as well, where a miner does all this very expensive work, but then not every other node has to redo the same mining, right? Indeed, in the mining process, there's the same kind of efficiency asymmetry. And that's actually a very common trick in cryptography: with mining, you try all these different hashes until you find one hash that has enough leading zeros. That's how the difficulty in Bitcoin works. And then you can just show people, and it's very cheap to verify.
So Bitcoin on the consensus mechanism side already uses a similar trick, right?
But on the actual content of the block, right?
So like what is in a block, in a Bitcoin block, it's all the transactions.
Each transaction comes with a signature.
So you have to actually verify the signatures.
You have to say, okay, balance was moved from this account to that account.
All of the actual operations of the blockchain, that's the re-execution part, right?
So Bitcoin does have this, again, very typical trick in cryptography, this asymmetry of generation versus verification. It uses that for mining, because that's easy to do with proof of work. But it's very, very hard to do this for the actual operations within a block.
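That leading-zeros trick fits in a few lines. A toy proof-of-work sketch in Python, with a deliberately tiny, made-up difficulty and header format: mining grinds through nonces, while verification is a single hash.

```python
import hashlib

DIFFICULTY = 2  # leading zero hex characters required; real Bitcoin needs far more

def block_hash(header: str, nonce: int) -> str:
    # Toy header serialization, made up for the example.
    return hashlib.sha256(f"{header}:{nonce}".encode()).hexdigest()

def mine(header: str) -> int:
    """Expensive side of the asymmetry: grind nonces until the hash qualifies."""
    nonce = 0
    while not block_hash(header, nonce).startswith("0" * DIFFICULTY):
        nonce += 1
    return nonce

def verify_pow(header: str, nonce: int) -> bool:
    """Cheap side: a single hash checks all of the miner's work."""
    return block_hash(header, nonce).startswith("0" * DIFFICULTY)
```

What the zkEVM adds is this same shape of asymmetry for the transactions inside the block, which proof of work alone cannot give you.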
And so this is basically the main unlock here: we're bringing the same efficiency that people are used to, from this one miner mines and everyone can verify easily, to the entire contents of a block.
And of course, on Bitcoin, the actual Bitcoin block is very small, it's very simple operations. On Ethereum, because you can run smart contracts and we are massively scaling the throughput, it's much more complex.
Like, the vast majority of the overhead in processing and following the chain is not the consensus part, not the proof of stake part, but it is the actual contents of a block.
So what has changed on cryptography?
Actually, my friends from the Xerox PARC team, they are like one of those cryptography research labs, they always talk about, I think they call it, maybe I'm getting this slightly wrong, the first generation of cryptography and the second generation of cryptography.
What was the first generation of cryptography? It was basically handcrafted algorithms for very specific use cases.
So a signature algorithm or a hash function, anything that fulfills a very specific purpose, and you can use it in a very specific context.
And those are amazing, right?
And like that's been like the story of cryptography for the last 50 years.
It's basically like more sophisticated special purpose mechanisms.
And those were already very mature when, say, Bitcoin started, right? This is why they were able to just take the concept of hash functions off the shelf, and you can do amazing things, signature mechanisms, all that kind of stuff. What is very new, it basically started, I don't know, a decade ago or something like this, probably academically a little bit earlier. I'm not actually a cryptography expert myself, so I don't know the exact early story there. But that's basically cryptography 2.0 in a sense. It's general purpose cryptography.
It is basically now the ability to make cryptographic statements about arbitrary computation.
Instead of having to handcraft it for a specific use case, you're now going to this general purpose world.
And this is like a huge leap because it means that instead of like just, say, signing a message,
you can prove whatever you want.
Anything Turing-complete, any execution whatsoever, you can now compress, you can make a cryptographic statement over.
And that was a giant leap.
It was, I think, only really pulled from academic theory to feasibility through a lot of funding that came from the blockchain space, of course.
And it's really incredible progress.
And that progress, I would think of it as several stages.
So one was what we saw with ZK rollups.
And then of course, already prior to that special purpose chains like Zcash, right,
was just the ability at all.
You have a protocol and you can make a proof of it.
You can basically you can prove that a block of a blockchain is valid.
What we've seen since is like this progression of the tech stack.
So for example, all of these earlier stages, again, Zcash, early ZK rollups, what they all did is they handcrafted the rules of the chain that they were trying to verify into very low level, it's called circuits. You basically express it in very low level constraints that you then make these zero knowledge proofs about.
And where we've been going from there really parallels the early progression of computers as a whole. We went from having to manually specify every individual system you want to prove, as this set of constraints, of circuits, to introducing, and it's such an elegant idea, it's crazy that it works, this intermediate instruction set. It's called an ISA, an instruction set architecture.
And you can think of it like how a processor in a computer has an instruction set, so x86 for example, right? Like an Intel one. Basically, it's: what instructions does your processor understand? And the way these modern systems are now built is you pick one of those instruction sets. The one that is actually becoming the standard in Ethereum right now is RISC-V. RISC-V is similar in principle. It's just a list of operations that your processor could do, right? It's often run in a virtualized way, so it's not actually run on real RISC-V hardware, it's mostly run in a virtualized kind of way. But basically, it's just a list of instructions. And then you write zero knowledge provers that can just prove arbitrary RISC-V code. So you're just saying, look, give me any RISC-V code.
And I just have this machinery that can make statements, cryptographic statements about it. And what
that now unlocks is this: no more handcrafting like the early ZK EVMs, which were literally handcrafted EVMs inside of ZK systems.
Now, you can just literally compile. You can basically take an Ethereum client, and instead of compiling it to whatever your local machine has as an instruction set, instead of compiling it to x86 or something, you're now just compiling it to RISC-V, and then you just get the ZK proving for free. And RISC-V is just a typical kind of endpoint for compilers, right?
So basically, you're modularizing the tooling and tool chain. Of course, that's only possible now with all the efficiency gains,
because you're losing some benefits of handcrafting all the optimizations. But it's a phase change in how feasible it is to do this for big, complex projects.
And so really, the way Ethereum does its zkEVM, again, of course, the real world is a bit more complex, but in principle you can really think of it like this: we take the existing Ethereum clients and we just compile them to RISC-V, and then we just have provers that specialize in making proofs over RISC-V.
And it's really amazing how far the industry has gone to make that feasible.
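To give a rough feel for why a fixed instruction set modularizes the problem, here is a toy register machine in Python with three made-up instructions. This is not RISC-V and there is no prover here; the `trace` it records is the kind of step-by-step execution transcript a zkVM prover makes cryptographic statements about.

```python
from typing import List, Tuple

def run(program: List[Tuple], step_limit: int = 10_000) -> Tuple[dict, list]:
    """Interpret a program over a tiny fixed instruction set: li, add, bne."""
    regs = {f"r{i}": 0 for i in range(4)}
    trace = []                                # execution transcript: (pc, opcode, register file)
    pc = 0
    while pc < len(program) and len(trace) < step_limit:
        op, *args = program[pc]
        trace.append((pc, op, dict(regs)))    # snapshot before executing the step
        if op == "li":                        # li rd, imm: load immediate
            regs[args[0]] = args[1]
        elif op == "add":                     # add rd, rs1, rs2
            regs[args[0]] = regs[args[1]] + regs[args[2]]
        elif op == "bne":                     # bne rs1, rs2, target: branch if not equal
            if regs[args[0]] != regs[args[1]]:
                pc = args[2]
                continue
        pc += 1
    return regs, trace

# Count r0 down from 5 to 0: any higher-level program lowers to the same handful of ops,
# so a prover that handles these ops automatically covers every program compiled to them.
program = [
    ("li", "r0", 5),            # r0 = 5 (loop counter)
    ("li", "r1", -1),           # r1 = -1 (decrement)
    ("li", "r2", 0),            # r2 = 0 (loop exit value)
    ("add", "r0", "r0", "r1"),  # r0 = r0 - 1
    ("bne", "r0", "r2", 3),     # loop back to the add while r0 != 0
]
regs, trace = run(program)
```

A real zkVM prover does not re-run such a trace on every node; it emits one proof that every row of the trace follows the ISA's transition rules, and verifying that proof is cheap.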
And then the last big conceptual jump, from there to this becoming feasible for us, is the real-time element. So basically, we arrived at a world where you could do that within, you know, an hour. And sometimes, if the block is actually convenient to prove, maybe you could get it down to a few minutes. That's the world that we used to be in. And then we have had this massive
industry collaboration effort that started like a year and a half ago with Justin Drake really
pushing super hard on this.
And these teams, this is really mostly driven by teams outside of the Ethereum Foundation,
these teams have done an absolutely amazing job.
And I would say the last year was really the year of performance, of real-time performance.
Throughout the last year, teams just kept pushing this down orders of magnitude.
And now we're at the point where we are starting to achieve the target zone. So we are actually able to consistently, reliably prove a full Ethereum block within five seconds, something like that. And that's basically the promised land.
Because now we have all the technological building blocks and now we can talk about the rollout and all these things.
We have all the pieces from the cryptography side. Now, finally, for the very first time ever, we have all the elements we need to run a general purpose blockchain at real-time proving speeds.
And that's something that has never been possible before.
I really like the idea of there has been this, you know, three parallel paths of computing.
First, starting with computers where they were first narrow, and then we were able to make them
generalized. And then we were able to make them generalized and fast, which is where, you know,
modern computers are now to this day. And then we created blockchains, you know,
virtualized ledger-based computers in the, you know, in the sky, decentralized systems.
They started narrow with Bitcoin. And then we learned to generalize them with Ethereum, and then we learned to make them generalized and fast with many other smart contract chains.
And now we are doing the same thing with cryptography. It started narrow, we learned to make it generalized, and now we are making it generalized and fast.
And that generalized and fast unlock on the computing tech tree of cryptography is now being
able to be taken and bestowed into Ethereum, which is what we're going to talk about for the rest
of this episode.
So now that we have the ZKEVM, and it's in the Ethereum blockchain, and it's up and running,
what does that actually change with Ethereum?
When we get to this point, how does Ethereum actually change?
Right.
So of course, we're not there yet, but that's kind of, that's where we're going.
And so why is this useful?
So coming back to scaling, right, I said that there's basically these three main elements
of scaling.
There's the bandwidth, the I.O, and then the actual compute.
Now, the amazing thing about real-time zkEVM is that it is the core of a broader change. The way I would say it is: it helps us scale all three of these, but not just on its own. It's basically the unlocking piece that enables a broader transition that addresses all of these elements of scaling.
And so that's why, when we talk about the zkEVM, to me it's the most exciting element of this broader change.
And that's why, when you said at the top of the podcast that this might be the biggest change ever, I would agree, but it's not just the zkEVM itself. We'll talk in a second about statelessness, about data availability sampling; all these things come together to unlock this.
And so let's take it step by step.
So of those three constraints, the one immediate impact you get is on the compute side, right?
Because that's the nature of ZK proofs, right? You're able, with very little compute effort on the verification side, to verify arbitrary-length execution.
So it doesn't matter how much you fill the block. Now, of course, we can talk about constraints. There's still block building; some node somewhere needs to do that. So it doesn't give you literally infinite throughput. But basically, right?
Like you can have whatever length of computation you have.
You compress it down into a constant size proof.
And then you can verify that with just very little compute.
So compute scaling, that's in a way the easiest one.
That's the one that you get very easily.
Now you look at the other two and you're saying, okay, how does it impact IO?
Right. So historically, traditionally, when you execute an Ethereum block, what you do is you start executing, you do some compute, and at some point you want to load some state. Actually, already at the beginning of a transaction, you need to load your account, you need to load the account that you're calling into, that you're sending ETH to. So you immediately need to go to disk, right? So you have this intermixing: sometimes you go to disk and load values, sometimes you do some compute, then you go to disk again. One actual change to Ethereum that we're already doing before the zkEVM is called block-level access lists. It basically adds some annotations to a block, of like,
this is the data you'll need. So actually what happens now is that you actually go to disk at the
very beginning. You bring all the data and then you can do the execution. But you still have this element
of having to go to disk both before the block and then again after the block, because, okay, we have to update all the values, and then we also have to compute what the new state root is. So how does it look with the zkEVM? Well,
there's a few things that are fundamentally improved by the zkEVM. So the important part is that the zkEVM basically already takes this in as part of the claim.
It's like, hey, assuming the blockchain was in this state
and I apply these transactions,
now then the next state is this.
So basically, like, you no longer need to go
and load the values from disk.
So basically you're saving this IO
on the load side naturally.
And then the thing that you normally still have to do is go and write the updates.
So if you still keep the state of Ethereum, after you verify the block, you still have to go and say, okay, these values changed.
Right?
So you have to go and apply that change.
One, that's no longer in the critical path. So you can do that after you've already finished verification. So if you have a valid proof, you can already vote. You can say, ah, this block was valid. And then afterwards I go and actually apply the update.
So in terms of like, what is the kind of price of this uniswap pool?
Or what's the balance of this account, right?
Like, I might only go update this on disk after I already know that the block is valid.
So that's a natural benefit you get.
But if you want to push it further, and this is what I'm saying, this is one of those changes that is enabled by the zkEVM but is its own change: that's stateless Ethereum, or partially stateless Ethereum. So what does that mean? Well, today any node in the Ethereum network basically has to have the full state. And with re-execution, that is unavoidable, right? Because if you want to verify a block, you have to load all the data again. You have to have it all locally.
Once you have the zkEVM, that becomes optional, because you don't actually need the data locally to double-check the validity of the block, right? So in principle, you could throw away the entire state. You could keep only the root commitment, just always updating that commitment, and that's it.
In practice, though, Ethereum nodes have multiple functions. They also operate the Ethereum mempool; they have to understand the validity of transactions in flight, all these kinds of things. So you don't want to run fully stateless. You want to run in what we're calling partial statelessness. For example, there's this proposal called VOPS, validity-only partial statelessness. It means you keep a specific subset of the state, and that subset can be defined by several different rules. It can be, say, the balances of all the accounts, or, if you're specifically interested in some state that belongs to you as a user, you can define what state you're interested in.
But basically, now you can keep just a subset of the Ethereum state, and that's totally safe because of the zkEVM, right? And you only have to apply the diff. You only have to go to disk, only have the IO overhead of updating, for that subset. So that's the second piece, basically. You have the zkEVM for compute; now you have partial statelessness for more optimized IO. And also, by the way, for keeping your disk size contained. We can talk about state growth maybe towards the end, but basically, you don't have to have a huge disk.
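A minimal sketch of how such a node could behave, hypothetical structure only, loosely modeled on the VOPS idea described above: validity comes from the proof alone, only a chosen subset of state is kept, and only the matching part of each block's state diff is applied. The class name, key naming scheme, and `zk_verify` callback are all invented for illustration.

```python
class PartiallyStatelessNode:
    """Toy partially stateless node: trusts ZK proofs for validity and
    stores only a user-chosen subset of the state (e.g. balances)."""

    def __init__(self, state_root, subset_filter, initial_subset):
        self.state_root = state_root        # always tracked
        self.keep = subset_filter           # predicate over state keys
        self.state = dict(initial_subset)   # only this subset hits disk

    def process_block(self, pre_root, post_root, proof, state_diff, zk_verify):
        # 1. Validity needs no local state, just the proof and the root.
        if pre_root != self.state_root or not zk_verify(pre_root, post_root, proof):
            return False
        self.state_root = post_root
        # 2. Write IO is proportional to the tracked subset, not the
        #    full Ethereum state, and happens off the critical path.
        for key, value in state_diff.items():
            if self.keep(key):
                self.state[key] = value
        return True

# Track only balance entries (hypothetical key naming):
node = PartiallyStatelessNode("root_0", lambda k: k.startswith("balance/"),
                              {"balance/alice": 10})
node.process_block("root_0", "root_1", b"...",
                   {"balance/alice": 7, "storage/pool": 99},
                   zk_verify=lambda a, b, p: True)
```

Note how the untracked `storage/pool` entry is simply dropped: the node stays consistent with the chain via the root commitment while paying IO only for the subset it cares about.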
And that leaves the third one, which is bandwidth. How do you actually keep scaling the chain with the ZK system while keeping bandwidth requirements the same, or even reducing them? Well, that's yet another separate trick, again enabled by the zkEVM, but separate. You no longer actually need to download the full block. And that makes sense, right? Because you get the ZK proof, you have to download the proof, and the proof tells you: hey, assuming there is a block with this hash, once I apply that block, this is the result. And that's proven. So the only thing you need to know about the block is that it exists.
And that's a bit of a nuanced thing. Why do you even need that? Someone clearly must have created the block, otherwise they could not have created the ZK proof. So why do you have to verify that it exists? Well, for the nuanced reason that the data could otherwise be withheld. That's the same reason why we even have blobs in the first place; for L2s it's the same story. You have to basically prove that the block was published, so anyone can get access to the transactions that were applied.
But what you can do, and this is again where the synergy with the L2s is just a beautiful story: we have already built out specialized functionality for verifying the existence of data very efficiently without downloading it at all. It's called data availability sampling; it's called blobs, right? So what we will do is take the Ethereum blocks and basically become our own rollup, in a sense. We're putting the data into the blobs. It's called block-in-blobs, BiB. And with that, all an Ethereum node has to do is sample, sample the data. We're in the process of making that more and more efficient, because we want to provide more data for our L2 partners. That now naturally also benefits ourselves, because you can have bigger blocks while keeping the bandwidth footprint very consistent.
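The security intuition behind sampling instead of downloading can be shown with a toy calculation. The numbers here are illustrative, not Ethereum's actual parameters: with 2x erasure coding, data is recoverable unless more than half the chunks are withheld, so an attacker who hides enough to matter fails each uniformly random sample with probability at least one half.

```python
def false_pass_probability(withheld_fraction: float, num_samples: int) -> float:
    """Chance that every random sample succeeds even though enough
    chunks were withheld to make the block unrecoverable (toy model:
    independent uniform samples over the chunk set)."""
    return (1.0 - withheld_fraction) ** num_samples

# An attacker hiding half the chunks, against a node taking 30 samples:
p = false_pass_probability(0.5, 30)
# p == 0.5 ** 30, i.e. under one in a billion
```

So a node doing a few dozen tiny random reads gets near-certainty that the full block data exists, without ever downloading it, which is why bandwidth stays flat as blocks grow.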
So now, coming back: we have the zkEVM, we have partial statelessness, and we have block-in-blobs with data availability sampling. Together they scale compute, IO, and bandwidth, and that is how you use all of these elements to scale the blockchain. And then there are some nuances; you don't get everything for free. You have state growth, which we have to address separately, and we can talk about that. You have things like being able to efficiently sync an Ethereum client, or efficiently run an RPC node, what Infura is doing, these kinds of things. So there's more to scaling than this. But the core story is that you have these three constraints, and the zkEVM directly and indirectly addresses all three.
You zoomed in on each one of those three. And as you just said, you put those three together, that's how a blockchain becomes a blockchain, and we improve all three of those things. I want to zoom out and really focus at that level of advantage. When we reconstruct how a blockchain becomes a blockchain on all three fronts comprehensively, you really said it when you said Ethereum uses its own data availability to be a ZK rollup.
As I understand it, when the zkEVM is up and running, operational, fully fleshed out and forked into Ethereum, the Ethereum layer one has the performance of a blockchain that is a ZK rollup. In fact, maybe it even is a ZK rollup; it just also is the layer one itself. So we get all the performance benefits of rollups. We get to ZK everything, which takes the brakes off the Ethereum layer one. And we already have the infrastructure needed, with data availability sampling, for this to get done. So from a performance perspective, the Ethereum layer one, known as a slow, antiquated, expensive blockchain to do computation on, upgrades itself to have the performance properties of a ZK rollup. Is that a true statement that I just said?
Yeah, I think that's right. And I think it's important to understand why Ethereum even is so slow, right? If we ask that provocative question, one really important element is that core to Ethereum's design philosophy is a guarantee Ethereum never wants to compromise on: easy verifiability and auditability. The world Ethereum always wants to be in is that any user of Ethereum can easily, if they want to, verify or audit that the protocol is following the rules.
And why is this so important? People always say: well, in practice, many users don't do it. And look at other chains. For example, if you're trying to join one of those high-performance chains that scale just by rapidly increasing hardware requirements, it's really, really hard to run a full node. Not only do you need a heavy machine, but often you're not even allowed to join the peer-to-peer network, because it's so performance-sensitive that they have to keep whitelists of which nodes are even allowed into the network; otherwise it's just too brittle and immediately collapses. So why does it matter? Because people always think about proof of stake as: well, there are validators and they vote on the canonical state of the chain. In Ethereum, validators basically get handed the canonical rules of the chain by the community, right? Any hard fork is a social decision, a social governance act. The Ethereum community decides that there are now new rules for the chain, and the validators only vote on: okay, given those rules, which blocks did I see, which blocks follow them. There's no individual decision that an attester in Ethereum makes, right? They just watch the chain and attest to what they see.
In other proof-of-stake chains, while in principle it should be the same thing, what happens in practice is that, because any non-validator user of the chain is just a light client (you can't just participate in the chain), basically every user just trusts the majority of validators. So in practice, those validators determine what the rules of the chain are, right? In a chain that does not center verifiability, validators de facto control the rules. If the majority of validators want to run a different set of rules, they can do that. In Ethereum, that's not the case. Validators can't accept or reject a fork; at most they can make a fork of their own. They just get handed the rules by the community, and the ultimate power always lies with the community.
So that's why verifiability and auditability are so core to Ethereum, and that's why we have historically been slow to embrace scaling: it would endanger that property. And now, with the zkEVM, we have this magical way of getting the best of both worlds, full verifiability and full performance.
Although I will say, all of this is a bit too black and white compared to what's actually been happening. For example, while I'm personally involved with our zkEVM work, we have experts: we have Justin, who's been on the podcast many times before, and we have Kev, who's doing absolutely amazing work there. We have many people working on this full-time. I'm actually focused much more on short-term scaling.
And so, while it's true that with traditional scaling there's a limit you can reach, beyond which you face a fundamental trade-off you can't escape, Ethereum historically has been very much in the mode of: well, we're working towards this eventual end state. We know we want to eventually do ZK, so we'll focus on that. And as of, say, a year, year and a half, two years ago, I think the mindset on Ethereum shifted a lot towards saying: look, we're now in a moment in time where real-world adoption is here, right? It's no longer this future thing we're building towards. And it's actually a very non-trivial thing: we have to find the right balance between still working on these Manhattan Project-style jumps, like the real-time zkEVM, which, like you said, I think is the biggest thing Ethereum will probably ever have done, while not just waiting another three years for it to arrive. We have to do things now.
And so this is why scaling is this perfect example: we now have a really good hybrid approach. We started last summer saying the zkEVM is three years out, and in a second, I think, we'll talk more about the exact sequencing of the rollout, but we don't want to wait three more years, right? That's what the old Ethereum would have done. What we're actually doing is scaling the throughput of the chain by roughly 3x every year. This is more of a goal, an ambitious statement; it's not clear that every single year we'll be able to hit it, but we think we see a path at least. It's a possible outcome. And in practice, the first three years of that scaling come from traditional means, and from that point on we have a smooth handover into the zkEVM paradigm. So it's not all black and white, in terms of only doing the zkEVM. I actually think we have the best of both worlds now: over the next two, three years, we're doing the zkEVM work in parallel while still doing traditional scaling, and then we jump into the zkEVM paradigm.
And that means that if you're a builder considering building on the Ethereum L1, instead of having to think, okay, when is this hard fork and what exactly does it change, you can just say 3x every year. You look at the throughput today, and you can very simply calculate: what throughput needs do I have? Is the L1 a good fit or not? It's a very simple story. But under the hood, it has these two synergistic elements to it. Sorry, that was a long answer.
Yeah.
Well, the idea is that we're pressing the gas on scaling on multiple fronts, not waiting for the Manhattan Project of the zkEVM, which has been in the Ethereum roadmap since genesis, I think. We understood the theoretical possibility of turning the EVM into a ZK system back in 2015. Now we're in 2026 and it's like: oh, this is now just an engineering challenge, and we're in the last mile of it. It's basically almost here. And in the meantime, we're scaling on the more traditional front as well.
I want to get into the qualitative nature of the scaling the zkEVM provides. Block times and block sizes are the two levers of throughput: how big is your block, and how frequently do blocks come. So can we talk about what the nature of scaling with the zkEVM does? Does it help lower block times? Does it just increase block size? I want, Ansgar, both fast and big blocks. I like my blocks big and fast. It would be great if we could increase the size of blocks, but there's also a very important element in that block times are critically important for trading and finance. So how does the zkEVM impact both of these variables?
Right.
So to answer that question directly: the zkEVM is indeed not a panacea; it specifically addresses throughput. It gives us much, much, much bigger blocks within the same time constraints. To be fully transparent, it's even a small extra strain on timing, just because you have one extra step, right? This proving step sits in between block creation and block verification. But that's a minor constraint.
But it in itself does not give us lower latency. And this is why, when you said at the top that it's the biggest change ever, I was actually tempted to say: to me, that's true on the execution side of the blockchain. Like with Bitcoin, how we said there's the consensus mechanism, proof of work in that case, proof of stake in ours, and then there's the actual processing of the blocks, Bitcoin transactions, Ethereum transactions, that kind of thing. For the execution side, for the transaction bits, the zkEVM and the related changes really are the major story for the next five years. In parallel, we're now putting together a really, really exciting roadmap on the consensus layer side. And latency is all a consensus layer story, right? Because that's where the heartbeat of the blockchain is determined.
And so we have this separate process. This is maybe setting us up for a separate podcast episode: you should bring on someone who's specifically focused on that type of work at the EF or in the broader ecosystem, because I think we have a really exciting roadmap there that gets us to much faster finality. Right now, finality in Ethereum takes two epochs, that's 64 slots; on average two and a half epochs, actually. So it's a long amount of time. We're bringing this down all the way to basically single-slot or two-slot finality. It's going to come down by orders of magnitude. So that's super exciting. And then, even within the single slot, instead of 12 seconds, we have a story that's going to gradually get us down from 12 seconds to, I don't know, 8, 6, 4, much, much faster.
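The timing numbers quoted here fit together like this. Slot and epoch sizes are current Ethereum mainnet parameters; the roadmap values are the goals discussed above, not commitments.

```python
SLOT_SECONDS = 12      # current mainnet slot time
SLOTS_PER_EPOCH = 32   # so two epochs = 64 slots

# Today: a transaction reaches finality after ~2.5 epochs on average.
finality_today = 2.5 * SLOTS_PER_EPOCH * SLOT_SECONDS  # 960 s, i.e. 16 minutes

# Roadmap direction discussed above (goals, not commitments):
single_slot_finality = 1 * SLOT_SECONDS  # finality in one slot: 12 s
target_slot_time = 4                     # eventual faster heartbeat: ~4 s
```

That's the "orders of magnitude" claim in concrete terms: roughly 16 minutes down to seconds.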
And then there are separate work streams around whether you can get even faster inclusion guarantees. The slot time is the heartbeat at which the chain actually progresses and you get guarantees about the result of your transaction. But can you maybe get, in principle, speed-of-light, just round-trip-time, confirmation that your transaction will be included? Ideally, I want to click a button, and within the 100 milliseconds it even takes me to realize something happened, boom, I have the confirmation that my trade will be included. And then within, say, four seconds, I know at which price. I think that's the world we ideally want to be in. And we have a really, really exciting roadmap there as well, but it is a separate roadmap from the zkEVM.
Okay, understood, understood. So the zkEVM massively increases block sizes, I don't know if you can put numbers around that, with a marginal increase in block times. Can block times come down in the future? What does it take for block times to get faster, and is that something we're aspiring to in the roadmap?
Yeah, that's what I was just talking about. We are aspiring to that, and not just aspiring; that sounds so indeterminate, like indeterminate optimism. We actually have a plan for block times to come down, possibly as early as towards the end of this year. That's not quite certain yet, but we're starting to make this a priority as well, and it will rapidly become a major priority.
So maybe the part I wasn't sure of: maybe block times don't necessarily come down, but transaction assurances come down very, very fast, and you're kind of saying that's what people want anyway. Is that correct? Well, basically, you have three things: the time to an inclusion confirmation, the actual time to the next block, and the time to finality. All three will come down. The heartbeat of the chain, the time to the next block, will actually be the one that only comes down maybe by a factor of three, from 12 seconds to maybe four seconds eventually. Maybe we can go lower, but I don't necessarily want to promise that. I think the other two are actually the more exciting ones. Finality will come down massively, and time to inclusion, which is still a bit more of an exploratory process, will also come down massively. So basically, yes, block times will come down as well, but none of this will be through the zkEVM, although of course it will all be part of an integrated system.
Right. Okay. Understood. So you're saying there's a variety of ways in which Ethereum speeds up broadly; zooming into what speeding up means has nuances, which you just went into. And at least from a user experience perspective, we have ways of providing essentially instant speeds from the perspective of a user.
Right.
Okay.
Let's talk about the rollout plan for the zkEVM. We're in a phase of Ethereum where there is no zkEVM. In the future, we'll be in a phase of Ethereum where it is all zkEVM, but it's not one acute moment, as I understand it. How do we go from A to B? What does that roadmap look like? Of course, because this is a multi-year process, as is typical there are very concrete steps for, say, the next 12 months, and then as you go further into the future, I can point out the current plan, the open questions, the directions, right? That's how these things always work.
The interesting thing, as I said at the top of the podcast, is that it's not just a one-time hard fork. There will be a one-time hard fork, and that's the eventual switch from what comes first, which is optional proofs, for those validator nodes in the Ethereum network that want to consume proofs instead of re-executing. Then at some point there will be this moment in time where we say: okay, now Ethereum just runs on proofs. Of course, you can still optionally run a node in re-execution mode if you want to, but by default the network now guarantees that there will always be proofs. And that point, the switch to mandatory proofs, is when you really get the scaling gains. Because before then, you're not yet mandating it for anyone, right? You're still allowed to run a full re-execution node. And after that point, it's like...
Exactly. And the network will hear you.
Exactly. And after that, it's like: okay, if you want to be a re-execution node, that's a special-purpose role now, and it requires special-purpose hardware. Of course, internally it is a big project: how do we make sure that, if we run at much faster speeds, you can still run an RPC node in a performant way? That's a separate work stream we're working on. But the typical validator, and the typical full node out there that isn't even a validator, those people will basically all, by default, switch over to ZK at that point.
proofs. So that has not started yet.
Like, right now we're in the proof of concept phase.
So like I think Justin presented at in Buenos Aires, there's this proof of concept of,
hey, see, my validator canon principle already run on ZK.
But that's not yet like if you're validated, like you can't use this yet today.
Right.
But the idea is that very soon, so meaning within say the next 12 months or so, we are starting
to put this out there in a early production ready state where the idea is that we will
of course, we will give very click guidance of like,
these are, this is the specific, nuanced level of confidence we have yet in the,
the security of the system, all these kind of things, right?
And for example, at that point, we could not yet have the majority of the network run
on this yet, right?
Because like if there is some bugle with it or something where you very much still want
to have the backbone of all the major validators run on this.
But if you're just a full note, just for hobby purposes, or maybe you're a validator
on a very weak machine, you might be tempted to just, at that point, transition over.
so that will be the first step.
And then one thing we haven't really touched on yet, well, I guess a little bit, is that there are actually quite a few technical requirements we need to hit before we can move the bulk of validators over. I can briefly go over those. One we already touched on is block-in-blobs, which will come at some point: we basically say, look, we now put the block into the data availability layer. There's also the sampling aspect to it. If you're a re-execution node, you still download all of it, but if you're a ZK node, you can start only sampling it, right? But this will come after the initial optional-proofs rollout. So before then, a validator basically has to download the proof, but also still has to download the full block. That means they don't yet gain any bandwidth benefits; they only get the IO and compute benefits. So we have block-in-blobs, which will have to come.
We have to have, in general, networking improvements, which are in the works. We have repricings, meaning we have to make sure that the parts of the Ethereum chain that are especially hard to ZK-verify become a bit more expensive; we basically rebalance the costs.
And then the most important technical dependency for mandatory proofs, the full transition basically, is related to the statelessness element. Specifically, we need to transition the Ethereum state tree over to a new format. Long-term listeners might be familiar with the elusive Verkle tree, right? Verkle trees were this early Ethereum idea. Today, every account in Ethereum is part of this huge Merkle tree structure, and every block the entire tree is updated; at the leaves you have your balance and all these individual elements about your account. The original idea was to transition this over to a more efficient form called Verkle trees. But the unfortunate fate Verkle trees had is that they were just never really necessary. They were always one of those nice-to-have features. Back when we were not quite sure how aggressively we wanted to scale the chain, or how quickly state growth would become a problem, there were some worlds in which it would have been a more urgent topic. But because we never went down those routes, it was always just beyond the edge of urgent enough to actually do. So we never ended up shipping Verkle trees. But the nice thing is that we now already have a lot of prior work, and we can go directly to the next generation of cryptographic structures here.
And so, instead of a Verkle tree, we're going to something that's basically called a unified binary tree. It's somewhat similar; the structural difference is that where a Verkle tree is a very wide tree, a binary tree is a very narrow tree. And the main difference, put simply, is that the binary tree uses a post-quantum-secure hash function that is also very efficient to prove. So it already fits into this future world Ethereum is heading towards, whereas Verkle trees were a standalone piece that doesn't quite fit.
But the nice thing is that we have a lot of prior expertise. We have Guillaume, who has been the champion of Verkle trees; he's been frustrated to no end that we never ended up shipping them, and now his time has come. He's very excited. He's already working towards this binary tree upgrade behind the scenes, and he's doing an amazing job there with his team. Actually, over the next two years, I would say the biggest individual story in Ethereum will be this upgrade to binary trees. Over the coming months it will probably start to become a bigger and bigger topic, and people will start hearing about it. And that will then enable very efficient stateless, or partially stateless, operation for nodes.
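The structural idea of a binary hash tree can be sketched in a few lines. This is illustrative only: Ethereum's planned unified binary tree uses a proof-friendly, post-quantum hash function and a more involved key layout, whereas SHA-256 here is just a stand-in, and `binary_root` is an invented helper name.

```python
import hashlib

def _hash_pair(left: bytes, right: bytes) -> bytes:
    # Stand-in hash; the real design uses a proving-friendly function.
    return hashlib.sha256(left + right).digest()

def binary_root(leaves: list) -> bytes:
    """Compute the root of a toy binary Merkle tree over `leaves`."""
    # Pad the leaf layer to a power of two with zero leaves.
    n = 1
    while n < len(leaves):
        n *= 2
    level = list(leaves) + [b"\x00" * 32] * (n - len(leaves))
    # Hash pairs upward until one root remains. Because each node has
    # exactly two children, a membership proof needs one sibling per
    # level, i.e. log2(n) hashes -- the "narrow tree" trade-off.
    while len(level) > 1:
        level = [_hash_pair(level[i], level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]
```

The wide-versus-narrow contrast in the discussion is visible here: a narrow (binary) tree means more levels but tiny, uniform hashing steps at each one, which is what makes the structure cheap to prove inside a ZK circuit.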
So to recap: starting a year or so from now, we will roll out optional proofs. Those optional proofs will initially only be immediately effective for compressing computation and helping somewhat with IO load; you still have to run in stateful mode. Then, bit by bit, we start bringing into the protocol the pieces that unlock the full potential of the zkEVM, and in parallel we keep hardening the zkEVM's security properties, so that by the time we're running out of conventional scaling means (and that's why all of this is so beautiful: we have roughly two and a half to three more years of traditional scaling ahead of us), we will be ready to seamlessly move over to the zkEVM. So: one year from now, optional proofs; two and a half years from now, plus or minus, the full transition to mandatory proofs. And then we'll have all the pieces ready to immediately keep scaling based on the zkEVM after that. So that's the full rollout.
Right.
So as I understand it, the way it happens is that in a year we'll introduce optional proofs. The Ethereum enthusiasts of the world, who just love Ethereum, tinker with Ethereum, run nodes out of pure passion, will start to run these optional ZK proofs. They will be the pioneers of Ethereum's transition from a classical blockchain into a ZK blockchain. And that will give Ethereum researchers like you and the EF a lot of data on what it looks like in production, because of these enthusiasts running it optionally just because they love Ethereum so much. That will give you the information you need to do the prerequisite upgrades for a full mandatory zkEVM fork. And as you alluded to, it will also give us insight into in-production use of the zkEVM. Maybe there are bugs; if there are bugs, we need to find them before we make proofs mandatory. All the different clients will have their own version of the zkEVM, and we'll be stress-testing all of those by putting them into production. Basically, there's a whole era of demoing the Ethereum zkEVM. And that will take, I think you said, somewhere around two to three years. As we run out of classical scaling, we will have the hardened data and the information, and we'll have done the prerequisite work to unlock the mandatory zkEVM. Around two and a half to three years from now, the mandatory zkEVM hard fork will happen, and Ethereum will make the transition to being a zkEVM blockchain.
The story doesn't end there, though. What happens after the mandatory zkEVM fork? How does the story continue beyond that point?
And just to clarify a little bit, by the way, for people who might think: oh, we're now gung-ho releasing optional proofs for anyone who wants to be an experimental guinea pig here. When we're ready to start releasing this, there will be very explicit guidance around what it's for, what kind of production-grade readiness it has for which use cases. You can think of it more as a question of how many nines of reliability, right? Ethereum mainnet must never go down. We have 100% uptime and we're not willing to risk that, so we're willing to take extra precautions there. But importantly, if you're at some point running a ZK validator and you actually hit a bug or something, the worst that happens is mild: no one gets slashed, right? You're just briefly kicked off the chain, and then you automatically flip back over to normal re-execution mode. Worst case, if we're already in this partial-statelessness world, you might first have to re-sync some of the state. So worst case, you're offline for a couple of hours, and then you're back on the chain. So we'll do all of this very responsibly; I just wanted to clarify that.
But yeah, basically, the way these (again, absolutely amazing) ecosystem ZK teams talk about this: last year was the year of performance, getting to real-time zkEVM. This year is the year of security, getting to absolutely hardened systems. There's also this "bits of security" measure, right? It's about getting to a level where we're very confident in the security level. Then next year, I think, will be the year of productionizing the zkEVMs, and the year after will be the year of the transition to mandatory proofs. So that's basically performance, security, production, and then full transition. That's how we think about it: one year at a time.
In terms of what comes after the transition, well, it's just I think and that's why I was
saying earlier like with the further you go out, the more unknown unknowns there are,
it's just about saying at that point we will have all of the ingredients.
Like, you know, we have the partial statelessness.
We have the block-in-blobs, and we have the ZKVM to take advantage of for scaling.
But we don't expect that once we get closer, that it's like a one-time switch and now
we can run it a thousand times faster.
Instead, we basically like right now conservatively quote unquote are projecting this three times
per year because we expect that there will be individual remaining challenges we have to address,
right?
Maybe we have to restructure the way nodes sync,
or maybe you have to restructure the way RPC nodes operate,
so you're confident that the chain is still usable at higher rates, right?
So this is just expressing that while we have the main architectural ingredients,
there will still be a lot of detail work.
And so we expect, instead of making use of it all at once,
it's going to be this continuous process.
And again, the nice thing about this rough 3x number is if you just say,
look, every two years you get a rough 10x (9x, call it 10x).
So basically we're thinking we have like a path for maybe five or six years
of this, and six years at 10x every two years means 1,000x.
So basically, the first three years of that we get traditionally, then the next three years
through the ZKEVM, so in six years, roughly 1,000x of where we started last year.
That's, I think, the plan. Again, is this guaranteed yet?
No. We just, we think we see a path.
We think we see a path.
That's our goal.
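The compounding arithmetic behind this can be sketched in a few lines; the only input is the 3x-per-year figure from the conversation:

```python
# Back-of-the-envelope check of the scaling numbers discussed above:
# 3x per year compounds to ~10x every two years and ~1,000x over six years.

def total_multiplier(factor_per_year: float, years: int) -> float:
    """Overall capacity multiplier after compounding yearly growth."""
    return factor_per_year ** years

print(total_multiplier(3, 2))  # 9   -> "a rough 10x" every two years
print(total_multiplier(3, 6))  # 729 -> "roughly 1,000x" over six years
```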
And then, of course, beyond that, you could, if you want to be more in sci-fi world,
like now you can think about native rollups. So maybe the way we then keep scaling beyond that
is not through just the single chain; you know, maybe then we're back to this kind of sharding-type
setup of multiple chains synchronously composed. Yeah, we'll have to see, but that's the plan.
What if you could trade gold, FX, and global markets with the same tools and speed
that you use for crypto? That's exactly what Bitget TradFi unlocks. After strong beta demand,
including over a hundred million dollars in single-day gold trading volume, Bitget
TradFi is now live for all users. Inside of your existing Bitget account, you can trade 79
instruments across FX, precious metals, indices, and commodities, all settled directly in USDT. No platform
switching and no fiat conversions. This is Bitget's universal exchange vision in action: crypto
and traditional finance side by side. You get deep liquidity, low slippage, and leverage up to
500x, letting you apply crypto strategies to macro markets. New to TradFi? Start with gold. The gold-USD
pair is liquid, macro-driven, and a familiar, natural bridge between crypto and traditional
markets. Try trading gold on Bitget now at bitget.com. Click the link in the show notes for more
information. This is not financial advice. Few people in crypto put real skin in the game when they
make public top or bottom calls. The DeFi Report is one of them. The week before the October 10th flash
crash, Michael from the DeFi Report emailed his entire newsletter, saying he's going aggressively
risk off and sold the majority of his book from crypto into cash. This is when ETH was about $4,000
and Bitcoin was 110.
Michael runs the DeFi Report,
an industry-leading research platform
built on data,
cycle awareness, risk management,
transparency, and most importantly,
skin in the game.
We like Michael at Bankless.
We like his analysis,
and that's why you hear him
on the Bankless podcast
about once a month.
And the DeFi Report
is giving Bankless listeners
one free month
of access to the DeFi Report.
So if you're looking for some sharp
data-driven analysis
to make better informed decisions
around your portfolio,
you can learn why and how Michael called the top
and what he's doing next,
all in DeFi Report Pro. Check
it out. There is a link in the show notes. Ansgar, as I understand it, client diversity is a big
topic here. Why is client diversity relevant to the ZKEVM, and how does the ZKEVM impact it?
So, I mean, of course, I think people will be familiar why client diversity is so core to Ethereum
and to Ethereum's kind of 100% uptime, right? Like there's the redundancy factor you get,
you get from client diversity. And so the reason why this is relevant is just that like the nature
of clients, the nature of client diversity, changes in this world. And that is
because, again, if we think back to how I explained that there's this most likely RISC-V kind of intermediate target for ZK.
And then you basically just run a, of course, heavily modified, but basically traditional execution layer client that gets compiled to RISC-V.
And then you take one of those new ZK proving systems that takes the RISC-V code and proves execution over it.
So what that means is now basically the Ethereum execution layer node lives inside of the ZK proof,
which is of course conceptually very different from what that used to be before.
And so what it means is that now the actual node architecture is actually quite interesting.
You basically run and that that is a little bit still TBD.
Like it might be that you're still running this explicit split of two clients like the consensus layer client and the execution client.
But the execution client's role is very different now.
It basically just verifies the proofs, the one that you run locally, right?
It just verifies the proof and does maybe some mempool networking, that kind of stuff, state management.
But inside of the proof lives the ZK program that was also derived from an execution layer client.
So if you think about the roles of clients now, basically it means that the main question is like,
what about the diversity within those proofs, right?
Because the outer system we are familiar with, but what about the diversity within those proofs?
And so the nice thing is that in principle, you kind of get a very comparable, very parallel type of map.
where you can just, you know, you don't just take a single execution client and compile it into
RISC-V. You take multiple. So, you know, you basically take the existing
ones, or there are also a few that will be specially written for that use case.
You compile all of those. And then to make sure that the redundancy is full stack, not just
the first half of the stack, you also have multiple of these proving systems that take RISC-V,
because of course there could also be a bug in that part of the stack, right? Like, they take the RISC-V code
and prove over it.
So say you have, as an example, five of each, right?
You have five execution layer clients that can be compiled into this RISC-V.
And then you have five different proving systems.
And what you can do is you can basically build pairs of those.
So you can say, and Justin has this really nice idea where you can even,
you could in principle even like say performance match them.
So maybe the fastest execution client is paired with the slowest proving system.
So you basically, so the pairs kind of balance each other out.
But that's just an idea.
But basically the point is you then have these, these combinations of like,
okay, this execution client with this proving system.
And then in the end, again in this example of five,
you'd be in a world where you have like five different types of proofs,
and they're all kind of redundant; they're all, you know,
full-stack different from each other.
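The pairing idea described here (N execution clients, N proving systems, performance-matched) can be sketched like this; all client and prover names and relative speeds below are invented for illustration, not real benchmarks:

```python
# Sketch of the performance-matching idea from the conversation: pair the
# fastest execution client with the slowest proving system, so the
# resulting client/prover pairs have comparable end-to-end speed.
# All names and speed numbers are hypothetical.

execution_clients = {"client-a": 9, "client-b": 7, "client-c": 5,
                     "client-d": 3, "client-e": 1}   # higher = faster
proving_systems = {"prover-1": 2, "prover-2": 4, "prover-3": 6,
                   "prover-4": 8, "prover-5": 10}    # higher = faster

def performance_match(clients, provers):
    """Return (client, prover) pairs, fastest client with slowest prover."""
    fastest_clients = sorted(clients, key=clients.get, reverse=True)
    slowest_provers = sorted(provers, key=provers.get)
    return list(zip(fastest_clients, slowest_provers))

pairs = performance_match(execution_clients, proving_systems)
print(pairs[0])  # ('client-a', 'prover-1'): fastest client, slowest prover
```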
The generally novel thing here is that today you run one execution client, right?
Like there are multiple, of course, and there are multiple consensus layer clients,
but you choose one of each.
In this new world, what you can do is you can just verify multiple proofs.
So for example, there's this idea, and again, just to use example numbers, but they seem
roughly ballpark right.
You could have a system where you say, I only accept a block if I saw at least three
different valid proofs for it.
So I know that there are these five different ones, and I have to have seen at least three
of them, otherwise I don't accept the block.
And so that actually gives you better redundancy, because it's kind of almost as if every
Ethereum node today would run three different client setups and would basically only accept
blocks if they all agree, which of course gives you much better properties
than right now, where we only have the redundancy across nodes, not within a node.
So it's actually a better story, but it's also one where we actually have to be intentional,
so that you don't accidentally collapse any layer of the stack.
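The k-of-n acceptance rule just described can be sketched as follows; the verifier functions below are stand-ins (no real proof system is invoked), since in practice each would be one of the n independent client/prover stacks:

```python
# Sketch of the "accept a block only with at least k distinct valid proofs"
# rule discussed above. Verifiers here are toy stand-ins for real proof
# systems; names and the byte values are purely illustrative.

def accept_block(proofs, verifiers, k=3):
    """Accept only if at least k distinct proof systems produced a valid proof.

    proofs:    proof-system name -> proof bytes (or None if not yet seen)
    verifiers: proof-system name -> verification function
    """
    valid = {name for name, proof in proofs.items()
             if proof is not None and verifiers[name](proof)}
    return len(valid) >= k

# Toy run: 5 systems; 3 valid proofs seen, one missing, one invalid.
verifiers = {f"system-{i}": (lambda p: p == b"ok") for i in range(5)}
proofs = {"system-0": b"ok", "system-1": b"ok", "system-2": b"ok",
          "system-3": None, "system-4": b"bad"}
print(accept_block(proofs, verifiers))       # True  (3 of 5 valid)
print(accept_block(proofs, verifiers, k=4))  # False (only 3 valid)
```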
And it's just a side note, there is this experimental idea.
And of course, in the age of AI, all the timelines collapse.
So who knows, you know, like maybe that's actually even viable near-term,
but this experimental idea of a fully formally verified client.
And you could imagine, right, like an EVM implementation in RISC-V
that is fully formally verified to be correct.
In that world, you would basically
no longer need redundancy
at that layer of the stack. But again, this is,
as I said, like the further out items
have some more uncertainty. This is like one of those
theoretical out there approaches, but that
of course would be also really nice to have.
And I think formal verification in the age of AI
will become a much bigger deal anyway, so this
might be really nice synergy.
Yeah. As I understand it, the
clients are where all of the risk is with
the ZK EVM and where we have to
have an extreme
level of caution with the transition from a classical blockchain to a ZK blockchain. And like if
something is going to go wrong, it's going to go wrong at the client level. I mean, I suppose that's
always where it would go wrong. But when we, you know, we have, you know, Ethereum has over a decade of
uptime because of client diversity, because of how hardened these clients are. And we are kind of resetting
that to kind of go back to, you know, zero Lindy with the ZK EVM. You know, we have some properties that
will be carried over, but nonetheless, it's risky in the sense that, like, we have all this
great hardened infrastructure and we're kind of rebuilding it to be ZK. And so we have to have these, like,
extra levels of redundancy, as you said, like three proofs, three correct proofs to make sure that,
you know, not just two proofs, because two proofs might have the same bug, so we might prove the
same bug twice, so three, things like this. And so, you know, what's your level of fear about this
part of the transition for Ethereum, from the classical blockchain, which is so hardened and has 100%
uptime, to where we go here. How scary is this? Oh, it's a really good question, right? Because I think
the promise here is so huge that we're all very, very excited about this, but it is also generally
like a huge, a very, very big challenge. And this is why I think it's not at all natural that we are
even doing this two-step rollout with the optional proofs and then the mandatory proofs.
In principle, we could switch over at the end of this year, right? And we already plan
with this extra 18-month period,
specifically because of that, like,
that level of certainty that we want
and that we project, like,
that will just take some more time.
Again, it also gives us the extra time
to roll these other dependencies
to really make use of ZK proofs.
So it's actually quite synergistic,
but still, right?
Like, this extra 18-month delay
is specifically for that reason.
And to be clear, like,
we would always be responsible with this.
So, like, if it turns out 18 months are not enough,
of course, we would, like, delay this full transition
to mandatory proofs.
Maybe we even find some more gains
we can get on the classical scaling side until then, right?
So maybe it wouldn't even matter.
But basically, we would always wait until we are like really, really confident.
And it's not in principle harder, but it's just, as you said, right, like it's a bit of a reset.
So like a lot of say our internal expertise, both inside of the EF and across the client teams around security work, testing work.
A lot of this is currently actively being restructured for this very new domain, for this very new type of operations with ZK, understanding what even are the weak points
here. Like also, say, on the cryptography side, we have absolutely world-class
cryptographers inside of the Ethereum Foundation and in the ecosystem, and they are like very thoroughly,
like, really turning around every single stone here in this, in this overall, like, stack,
and really making us understand, like, what are the critical points here? And again, like,
how far are we from being willing to actually trust this? So, for example,
to take a related example: I'm not sure if you maybe already had an episode on post-quantum,
but that's also a big topic on Ethereum.
We will soon, yeah.
Yes, mostly unrelated, but of course, there's synergies here.
And it has a similar nature where I talked about the binary trees
and part of the binary trees is this choice of hash function that you need in the tree.
And there, for example, we're also, like, not exactly blocked,
but the longest piece of the timeline there is us talking with our cryptographers.
We have a candidate, like a family of candidate hash functions.
But getting to this point where we're saying,
look, they are actually robust enough, they have been around long enough,
that we actually trust that they are secure, right?
Like, especially something like a hash function that's so fiddly, you can't really prove security.
You're just, it's basically like a lindiness to it, right?
Like, how long has it been around?
How many people have tried to find vulnerabilities?
Has there been anything found, right?
This kind of thing.
And so some of these things you just can't accelerate, right?
Like how many years of academics having looked into this has there been, right?
That's just like a hard constraint.
And so both in this post-quantum work, and also in the binary trees we're using for
making use of ZKEVMs (it's not directly the ZKVMs, but for making use of them),
There's just some elements of the timeline there that are dictated by the security needs that we have,
and we just can't cut corners.
So it's a big concern, but I think we are very responsible about it.
Yeah, yeah, yeah, yeah, which is why it's taking, you know, not a short amount of time.
So just to maybe conclude this podcast, the timeline, it is now at the start of 2026.
By the time we hit 2030 is a good guess for
when we think we will have the full power, the full properties, of the ZKEVM.
You're nodding your head.
Does that sound right?
That sounds right.
And I think we will be still probably in the process of making full use of it for scaling.
So hopefully 2030 will be another 3x year, maybe more than 3x because we have AI
and the hard fork timelines are compressing.
But basically another 3x year, and 2031 will look like another 3x year.
So we will be on this continuous scaling path, but already squarely in the ZKVM-backed side
of that scaling path.
Right, right. I guess one point you made earlier, and it's worth re-emphasizing here, is that the aspiration of Ethereum is to do a 3x scaling increase every single year, not just for the next three years: the next three years for classical scaling, and then the next three years after that for ZKEVM scaling.
So, you know, while I am excited about the ZKEVM, and I think it's incredible, and that's why I want to, like, rally the Ethereum community around it, acutely, there won't be a ZK EVM moment
as felt by the transacting users of Ethereum, because we are doing 3x scaling per year
for the next six years, first with classical, then with ZK.
And so while the Merge, you know, acutely transitioned us from Proof of Work to Proof of Stake,
and EIP-1559 acutely transitioned us to, you know, having the burn and better
transaction UX,
and same thing with 4844 being an acute transition,
this won't be that, because we are scaling anyways.
But nonetheless, I think it is important to know that only Ethereum will actually be able to access, you know, the final, you know, years three through six of scaling in that capacity because this is Ethereum's Manhattan Project.
Like we said, only Ethereum has been working on this.
It's been working on this since Genesis.
And while Ethereum makes this transition from a classical blockchain to a ZK blockchain, it will be leaving every other blockchain behind
in the previous classical era.
And so maybe that's why I'm so excited about it is like Ethereum is making the
generational leap to the next gen blockchain and no other blockchain will have these
properties that we've been discussing about on this podcast.
Well, and I think this is what I said earlier.
Like, it's not an accident that you won't notice this transition.
Like, it's actually by design.
Like, I think in this moment in time, we're really trying to
balance continuing the
strength of Ethereum, of being able to make these leaps, these paradigm jumps that
I think other projects really struggle to be able to follow.
I think again, that's why we'll also just naturally have the post quantum properties.
I think many chains will actually like struggle quite a bit with with actually getting there.
And at the same time, realize that now we're no longer in the sandbox mode.
We can't just, like, say, just wait, just wait for three more years.
Like, you know, don't be so impatient.
Like, no, no, no.
I mean, people are coming on-chain.
Agents.
AI is coming on-chain, like, today, right?
So, like, I think it's important that we basically say,
we are a continuously scaling blockchain,
and it's our responsibility to under the hood make that happen
and like use whatever like both traditional and magical future ZK means
necessary to make that happen.
I will say, because you said, like, no one else will be able to do that.
I think, I actually think it's one of those areas
where there's, again, natural synergy between Ethereum and the, like,
the EVM-L2 ecosystem.
I think one thing that, for example, we didn't talk about at all,
but that I'm very excited about is that, like, again,
similar to how the initial jump to non-real-time ZKVM came mostly driven by the L2s,
I think now that we are driving from the L1 side,
this move to real-time ZKVMs,
the L2s will also be huge beneficiaries of this,
because they will also just, like,
gain the ability of real-time settlement.
So that means, say, all the bridging pain across the L2 ecosystem, right?
Where, in principle, either I use a mint-and-burn bridge
or it takes like seven days for my asset to move across chains.
All of this will disappear, right?
It's going to be a few seconds for any asset to move
from any real-time ZK-EVM-proven L2
to any other real-time ZK-EVM-proven L2
through the Ethereum L1,
or of course, into or out of the Ethereum L1.
So I think it's yet another one of these cases
where, if you're part of the Ethereum family,
like, this is kind of,
this is the ecosystem that really has this principled approach to things,
you get all of these benefits for free.
You are on the principled architectural path,
and I think that has always been our competitive advantage.
And I think while doubling down on the competitive advantage,
I think we really are already trying very hard.
I think we have to keep trying even harder
to close where maybe we've had the competitive disadvantage,
which is I think that Ethereum in the past has sometimes been a bit too much
in this pure research mode and maybe discounting the type of activity that already existed
and saying, ah, that's just sandbox, whatever,
like the real-world adoption will come later and then we'll start focusing on it.
Real world adoption is clearly here.
And so finding the right balance, I think, is the ongoing challenge.
It's what, for example, Tomasz and Hsiao-Wei, in their time at the Ethereum Foundation,
have really put a lot of focus on.
And I think that's how I would narrate the future of Ethereum: both the Manhattan
Projects and the short-term focus and ownership of the protocol as a useful thing today.
One theme that I've picked up on in a handful of your answers throughout this conversation, Ansgar,
is that there seems to be a significant number of second-order positive effects of the ZK
EVM that are not related directly to the main quest line of the ZK EVM, which is just straight,
you know, layer one scaling, but solves a bunch of second order problems, you know,
layer two scalability and composability being the one that you just said.
How big is that second order effect?
Like, am I correctly identifying that the positive
second-order effect is actually somewhat large? Yeah, I mean, I think there's the immediate second order, like the, as you said,
like the things that, like, just the benefits to the broader EVM ecosystem, especially EVML2 ecosystem,
because, again, and I guess maybe I didn't mention this so much: I think it's much
easier to adopt, to benefit from the technology, for L2s, for EVM L2s, whereas for other
EVM L1s, while I think it's actually also very exciting for them, I do think
basically you'd have to re-architect your entire chain,
similar to how I was saying, like, the Ethereum L1,
the ZKVM is the core piece,
but there's like many elements to it, right?
Whereas because the L2s already have this architecture
where they are just like naturally settling on the L1,
they just have to compress the timeline, like the settling time.
Like for them, it's like a, it's almost like a trivial upgrade
to follow us to this world.
So I really think there's this unique synergy
for the Ethereum L1 and then the Ethereum EVML2s.
I think longer term,
if I'm talking beyond
blockchains for a second: I think we've already seen how, in the world outside of crypto,
we are starting to see this second generation of cryptography really land and become
very impactful. It took a while. It took a couple of years for people to start taking it
seriously. And so I think you can start to see it with all kinds of things like Microsoft
is doing things like a lot of governments are doing like, I know, ZKID type of systems.
You're starting to really see use cases that go beyond just blockchains. Blockchains are like the most
valuable use case, so that's why we always
see the technology there first.
But you can imagine a world, and especially
once you have this real-time element unlocked,
you can imagine a world where like, I know,
just to be futuristic here,
like AI agents might use
real-time ZK proofs to make provable
statements for trustless interactions
with each other, right? Like some of that might be on-chain
for, like, you know, direct asset
interactions, but some other things might just be
literally just, ah, I'm just proving that I have access
to this data and this data has this structure
and that I, you know,
all these kind of statements.
You can just trivially real-time prove things that just couldn't be forged before.
So I think that's a five year down the road maybe kind of thing, but five to ten years,
but that will come and that I think will be really exciting.
And then, for example, I don't know if you've seen this, like more and more countries
starting to introduce, I don't know, social media bans for like a minors and that kind of
stuff.
And usually that's implemented in a super dumb way.
They use a service, and
you have to upload your ID to the service, right?
And if we can replace that with like a ZK ID system where you really don't leak anything
other than that I own an ID and my age is above this threshold.
Obviously, that's a much preferable world.
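The statement such a ZK-ID proof would establish can be sketched like this; this shows only the public predicate, with no actual proving system attached, and the function name and dates are illustrative:

```python
# Conceptual sketch only: the predicate a ZK-ID age proof would attest to.
# In a real system the birth date stays private (the witness); the service
# learns only that the predicate holds, never the ID or birth date itself.
from datetime import date

def is_at_least(birth_date: date, min_age: int, today: date) -> bool:
    """True if the ID holder is at least min_age years old on `today`."""
    age = today.year - birth_date.year - (
        (today.month, today.day) < (birth_date.month, birth_date.day))
    return age >= min_age

print(is_at_least(date(2000, 5, 1), 18, date(2026, 2, 23)))  # True
print(is_at_least(date(2010, 5, 1), 18, date(2026, 2, 23)))  # False
```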
So I think we are currently like, I think blockchains and especially the Ethereum ecosystem
is kind of funding this massive leap of the cryptography toolkit that we have.
And with some delay, five to 10 years delay, it will also hit the non-blockchain space.
And I think it will be super impactful.
Yeah.
One idea I've had is that, you know, Ethereum and all the research that we have
invested in over the years hopefully is one big contributing factor to, like, kind of restoring
the brand of crypto by helping the world overcome some, like, generational challenges,
as you correctly identify. You know, crypto doesn't really have the best brand at this
present moment. Hopefully with some of these, you know, sci-fi tech advancements,
this Manhattan project that Ethereum has been working on, we don't just, you know, improve the nature
of our own blockchains, but we improve the nature of the world around us,
and the second-order effects upon Ethereum as a brand, as an ecosystem,
and the ETH price are benefited downstream of all of that.
Ansgar, this has been a super educational episode.
I really appreciate you coming on here and giving me and the Bankless Nation your time to talk about the ZK EVM.
I think broadly the crypto industry is looking for reasons to get bullish about something.
I think this is a very valid thing to be excited about and to be bullish on.
And so I'm trying to rally the troops around the ZK EVM fork, just
in mindshare, in education, and I think you've done the job I'd hoped we could do here on the
episode today. So I thank you for that, sir. Sounds good. And one last caveat, just to repeat this,
right? Like, I'm not personally a ZK expert. I mean, obviously, I'm in the loop on a lot of these things,
but I'm more like our broader scaling expert. So this is part of my job. But really, we have absolutely
amazing people. So I'm sure I got some of the minute details a little bit wrong,
and those people will scream at their monitors. But I hope I got
the broader picture roughly right. And I agree. It's very exciting. I think both the
execution layer side, the ZKVM scaling story and then on the consensus layer, like these
next-generation upgrades we're planning there. Very, very exciting. I do think
we should understand, though, in this moment in time, also, that we should try to become
more and more the boring infrastructure layer. And I think we should really like ready the stage
for the applications. And so I'm personally, like, incredibly excited for the actual
real-world application side of crypto.
We're really starting to see this come online,
agentic payments, real world assets,
stable coin payments. All this is incredibly exciting.
I think it's a great moment to be in crypto.
And yeah, and of course, one last shout out there,
maybe actually, if anyone listening to this,
was actually interested, excited by these technical details
of everything we talked about, though,
and actually wants to help on the infrastructure side,
do reach out to me, I don't know, either on Twitter DM or ansgar@ethereum.org.
We also always, in principle, are hiring,
if any smart kid out there, like, really would want to join us here on the infrastructure side.
It's not the only exciting thing in crypto, but it is still very, very exciting.
And please, come join us.
We'll make sure your Twitter is in the show notes on YouTube or Twitter or wherever people are listening to this podcast.
Ansgar, thank you so much.
Thank you very much.
Bankless Nation, you guys know the deal.
Crypto is risky.
You can lose what you put in.
But nonetheless, we are headed into the future.
We're going to ZK the future, too, with the help of the ZK EVM.
That's not for everyone, but we are glad you're riding
with us on the Bankless journey. Thanks a lot.
