Bankless - Ethereum Beast Mode - Scaling L1 to 10k and Beyond | Justin Drake
Episode Date: November 12, 2025

Ethereum hasn’t reached full speed yet. Now it might. Justin Drake of the Ethereum Foundation outlines Lean Ethereum, a plan to optimise the stack so validators stop executing and start verifying. With zk proofs in under 12 seconds and on-prem provers around 10 kW, the base layer can reach gigagas capacity and roughly 10,000 TPS while getting more decentralized. Add Fossil, seconds-level finality, and post-quantum signatures, and the changes stick. We unpack the EthProofs race, the four-phase path to mandatory proofs, the three-times-a-year gas target in EIP-7938, and why native rollups could remove gas ceilings for L2s. If you’re wondering whether Ethereum can scale without turning into a data center chain, this is the roadmap.

---

📣 SPOTIFY PREMIUM RSS FEED | USE CODE: SPOTIFY24
https://bankless.cc/spotify-premium

---

BANKLESS SPONSOR TOOLS:
🪙 FRAXNET | MINT, REDEEM, EARN
https://bankless.cc/fraxnet
🦄 UNISWAP | SWAP ON UNICHAIN
https://bankless.cc/unichain
🛞 MANTLE | MODULAR L2 NETWORK
https://bankless.cc/Mantle
💤 EIGHT SLEEP | IMPROVE YOUR SLEEP
https://bankless.cc/eight-sleep
💠 BIT DIGITAL ($BTBT) | ETH TREASURY
https://bankless.cc/bit-digital

We’re being compensated by Bit Digital (NASDAQ: BTBT) for this segment promoting their company and BTBT. The compensation is paid in cash as a one-time payment. You can find additional information about Bit Digital and BTBT on their Investor page at https://bit-digital.com/investors

---

TIMESTAMPS
0:00 Intro: What is Lean Ethereum?
3:32 Beast Mode? Gas & Blocks
5:39 GigaGas, TeraGas, Gap to Target
9:32 Why Scale L1: Decentralization Tradeoffs
20:22 Provability, Power, Real-Time, Decentralization
24:43 L1 Security: Money, Key Uses
28:59 Lean Ethereum: SNARKs, Beast, Fort
36:32 SNARKs & zkVMs: What, Why
48:50 Execute→Verify: Validators & Lean Client
56:08 Builders, Provers, PBS, Fossil
1:08:49 Devconnect Demo, EthProofs, Roadmap Phases
1:31:06 Rollout, Gas Limits, Slots, Hardware
1:44:33 Home Provers: Power, Costs, Incentives/Penalties
1:54:38 Data Availability, Lean Consensus, Upgrades
2:04:22 Talent, Competition, Community, Closing

---

RESOURCES
Justin Drake
https://x.com/drakefjustin
Lean Ethereum
https://blog.ethereum.org/2025/07/31/lean-ethereum

---

Not financial or tax advice. See our investment disclosures here: https://www.bankless.com/disclosures
Transcript
Justin, what is Lean Ethereum at the highest level?
Sure. So Lean Ethereum is the conviction that we can use this very powerful technology called Snarks, this magical cryptography, to bring Ethereum to the next level, both in terms of performance and scale, but also in terms of security and decentralization.
And I call the former Beast mode and the latter Fort Mode.
Wait, okay, Beast mode and Fort Mode.
So what's Beast Mode and what's Fort Mode?
Yeah, so part of Beast Mode is this vision of scaling the L1 to one gigagas per second.
I call it the gigagas frontier, or the gigagas era, as well as dramatically increasing the data availability so that we can do one teragas per second on the L2s.
So that's 10,000 TPS on the L1, 10 million TPS on the L2s.
And if you were to summarize this in one sentence, it's basically having enough throughput for all of finance.
And so Beast Mode is on the execution layer, I suppose, which will further define a little bit more.
But that's where the block space and the gas transactions and all of that activity and smart contracts, all of that activity happens on the execution layer.
Defi.
Defi. Payments. Yeah. Exactly. It also includes the data availability layer.
for the L2s.
And that gives us, roughly speaking,
a 1,000x amplifier relative to what we can do with the L1.
Okay.
And even the data availability layer for the L2s,
what do the L2s do, execution?
So it's all about execution in Beast Mode.
Okay.
Fort mode.
What is Fort mode?
Fort mode is about having totally uncompromising security.
Best-in-class uptime, best-in-class decentralization,
post-quantum security,
making sure that the MEV pipeline is cleanly decoupled from the validators,
and also having best-in-class finality in a matter of seconds as opposed to what we have right now,
which is in a matter of 10 minutes, 10 to 20 minutes.
People have called this.
This is the consensus layer, right?
So consensus layer is fort mode, beast mode is execution layer.
Exactly.
There is a little bit of a tie-in, in the sense that the technology that we're going to use
to solve Beast Mode also allows us to solve Fort Mode.
And the reason is that the snarks allow the validators
to verify very, very small proofs.
And that really helps with decentralization
because the barrier to entry to becoming a validator
from a hardware standpoint is extremely low.
And Beast Mode and Fort Mode,
I feel like these are just offensive and defensive.
Like execution, beast mode is Ethereum is being aggressive.
We're going forward, we're pushing forward, beast mode.
And then Fort mode is kind of what Ethereum has always done, and we are just continuing to do it, which is kind of what we call World War III resistance.
It's like everything in the world will go wrong, but Ethereum will still be producing blocks because it's that resilient.
It's the bunker coin.
So we have like offensive defensive is maybe like a way to portray this.
Exactly.
And with Snarks, we basically have permission to dream bigger dreams on the aggressive scaling and performance.
It's worth highlighting, Justin, that Ethereum has never really done Beast Mode before.
We've never really gone on the offensive.
Like, Lean Ethereum is kind of the first time that we can actually credibly say, yes, Ethereum is scaling not marginally, but aggressively.
Is that correct?
I mean, I would argue that the data availability that we've been working on for many years now is part of Beast mode.
for unlocking the L2s.
But the L1 has remained stagnant.
At the L1.
Beast mode at the L1.
Correct.
So for four years,
well,
four years ago,
the gas limit was 30 million gas,
and today it's 45 million gas.
So in four years,
we've only scaled the gas limit by 50%,
underperforming even Moore's law
and hardware improvements.
Yeah, not very beast.
Not very beastly.
But now again,
you know,
we have the permission to be extremely ambitious
just now that the technology is reaching maturity.
Okay, last four years, we've gone from 30 million to 45 million on the Ethereum layer one.
As I understand it, though, in the early days, maybe this is back in 2016.
It was like 6 million gas limit.
So we did go from something lower to 30 million.
And the way we accomplished that is just like raw engineering.
But to call that scaling on the L1 anyway, beast mode, not quite right.
I mean, what did we do?
This is like a 3 to 5x, something like that.
And it took, you know, 10 years of Ethereum history.
But when we say gas and gas limits and that sort of thing,
what we're talking about is transactions per second, right?
Or at least it's a proxy for transactions per second.
We're going to be referring to, you know, block size throughout this episode a lot.
So can you just define what block size actually is and how that relates to transactions per second and just, like, overall scaling?
Absolutely. So the simplest transaction possible is called a transfer and it consumes 21,000 gas.
You don't have to remember this number. But on average, we're doing more complex transactions.
For example, we have Dex swaps and those consume 100,000 gas. So if you have one giga gas per second,
that's 1 billion gas per second,
and you divide that by your average transaction of 100,000 gas,
you get the 10K TPS.
And continuing with our theme of powers of 10,
there's roughly 100,000 seconds in the day,
and there's roughly 10 billion people on Earth.
And so if you were to denominate it per human,
what you get is 0.1 transactions per human per day,
which in my opinion is a great start,
but it's just not enough for all the finance, right?
As humans, we make more than one financial transaction every 10 days.
And there's also the robots that are coming on chain too.
Absolutely, a great amplifier.
And so what I'm hoping we can achieve in terms of scale in the longer run is 10 million
transactions per second.
That's the teragas era, and that unlocks 100 transactions per day per human.
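The powers-of-ten arithmetic Justin walks through can be checked in a few lines. This is a sketch using only the round numbers quoted on the show (21,000 gas per transfer, 100,000 gas per average swap, roughly 100,000 seconds in a day, roughly 10 billion people); these are rhetorical round figures, not protocol constants.

```python
# Round numbers as quoted on the show, not protocol constants
# (a real day is 86,400 s; world population is closer to 8 billion today).
GAS_PER_TRANSFER = 21_000        # simplest possible transaction
GAS_PER_AVG_TX = 100_000         # "average" transaction, e.g. a DEX swap
GIGAGAS_PER_SEC = 1_000_000_000  # the Beast Mode L1 target

# 1 gigagas/s divided by the average transaction cost
tps_at_gigagas = GIGAGAS_PER_SEC // GAS_PER_AVG_TX        # 10,000 TPS

SECONDS_PER_DAY = 100_000        # ~86,400, rounded for powers of ten
HUMANS = 10_000_000_000          # ~10 billion

# Per-human throughput at the gigagas frontier
tx_per_human_per_day = tps_at_gigagas * SECONDS_PER_DAY / HUMANS      # 0.1

# Teragas era: roughly 1,000 L1-equivalents' worth of L2 capacity
tps_at_teragas = 1_000 * tps_at_gigagas                    # 10,000,000 TPS
tx_per_human_per_day_l2 = tps_at_teragas * SECONDS_PER_DAY / HUMANS   # 100
```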
Okay, but so the one gigagas, that's the hope, and the plan in Lean Ethereum is to achieve that on Ethereum layer 1. Am I correct in that?
And then the teragas is in the Ethereum layer 2 ecosystem, by committing, you know,
data to Ethereum's scaled-up data availability layer.
Correct. So teragas is the aggregate of all of the rollups combined. And you can think of it,
roughly speaking, as being a thousand copies of the L1, each doing one gigagas.
Okay, where are we now?
So if we're trying to get to gigagas on the L1 and teragas on the L2,
you mentioned some numbers of where Ethereum is.
Was it 60 million gas limit?
What's the current gas limit and how far are we away?
Yeah, so the gas limit is a little confusing because we have slots of 12 seconds.
You have to redenominate everything down to the second.
But at L1, we're about 500x away from that goal.
So between two and three orders of magnitude.
And to a very large extent, the primary bottleneck that we have today is the validators.
So we set ourselves as a constraint, as a goal to have maximum decentralization.
And we're not allowing the validators, or at least we're not assuming that the validators have powerful hardware.
They're running on laptops.
The meme has been Raspberry Pi.
and by removing this bottleneck,
we can easily, in my opinion,
get a 10x, 100x,
and with sufficient work, we can get this 500x
that gets us to one gigagas per second.
Okay, and so how many gigagas per second
is Ethereum right now?
So you said it's just because we're translating multiple things.
You said...
It's two mega gas per second.
It's how much?
Two megagas per second.
Okay, so it's two megagas per second right now.
And we want 1,000 megagas.
Okay, that's why we're 500x off.
Another way to translate that is
we have 20, around 20
transactions per second maybe for those simple
transactions on Ethereum, and we
want to be 10,000. So again, that's
500X off right now.
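The back-and-forth above reduces to a couple of divisions. Here is a sketch with the figures as quoted in the conversation (45M gas limit, 12-second slots, roughly 2 megagas per second of actual throughput); actual mainnet numbers drift over time.

```python
# Figures as quoted in the conversation; mainnet values drift over time.
GAS_LIMIT = 45_000_000              # current block gas limit
SLOT_SECONDS = 12                   # one Ethereum slot
USED_GAS_PER_SEC = 2_000_000        # ~2 megagas/s, as quoted
TARGET_GAS_PER_SEC = 1_000_000_000  # 1 gigagas/s, the Beast Mode goal

# The protocol ceiling, redenominated per second
max_gas_per_sec = GAS_LIMIT / SLOT_SECONDS     # 3,750,000 gas/s

# The gap Beast Mode has to close
gap = TARGET_GAS_PER_SEC / USED_GAS_PER_SEC    # 500x

# And in TPS terms, at ~100k gas per average transaction
tps_now = USED_GAS_PER_SEC / 100_000           # ~20 TPS
```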
That's what Beast Mode is saying.
We're going to do 500X on
Ethereum layer 1, correct?
That is my hope. That is the vision
that I'm trying to put forward.
and slowly every week with more and more developments on the ZKVMs,
people are starting to believe this vision, that it is indeed possible.
Okay, that's interesting.
So we're going to talk about vision and execution throughout this episode in many places.
But actually, before we do, since we're still setting the context for this,
I think some people will be scratching their heads and saying,
well, why are you Justin talking about and why is Ethereum in general talking about
scaling the L1 at all.
I thought the plan was the roll-up-centric roadmap where Ethereum layer 1 stays pretty
slow.
Yeah, maybe it scales up a little bit as Moore's Law improves and as engineering improves a
little bit, but it's not going to Beast Mode because the Beast Mode plan, I thought that
was already defined as the roll-up-centric roadmap.
And all of the scale in Ethereum happens on L-2s.
Well, you're saying, no, we're going to continue scaling the L2, but we're also scaling the L1,
and some people will be scratching their heads and asking, why? Is this a change? Is this a pivot?
Why are we trying to scale the L1 in the first place?
I'd say, yes, it is a change. It is a pivot. Because we have now technology that allows us to scale
while preserving decentralization. So what came first was the requirement for decentralization.
and then we try to do the best with that requirement.
And that's the status quo that we have.
But now that we have a new technology,
we can start rethinking the kind of scale that we can have at L1.
So the first answer is just because we can.
The second answer is that if we want Ethereum L1 to be a hub,
meaning the place where assets are minted,
where bridging happens, where force withdrawals happens,
where a lot of the high value liquidity happens,
then we need L1 to have a minimum amount of scale.
And 0.1 transactions per human per day,
you should think of it as being the highest-density economic transactions
that can make it in.
Everyone else will largely be potentially priced out.
So you can think of it as being settlement transactions,
as being minting transactions,
as being transactions where you're enacting
an escape hatch, right? You have, you know, $100,000 stuck on an L2, the sequencer
has gone offline or something. Well, you can just force withdraw and you'll be willing to pay the
10 cents or whatever on L1 in order to free your funds. So those are the two reasons. Going,
back to the first then, because we can, this implies we couldn't before and we'll have a whole
conversation about snarks and this magic cryptography that has emerged, let's say, and hardened
over the last decade or so. But the idea that we're scaling the L1 now because we can
means that previously we couldn't. It almost means that scaling the L1 would have been the first
option if we could. Like, is scaling the L1 better than scaling the L2?
And if we could back in, say, 2018, if SNARKs were unleashed then,
that would have been the default path rather than L2s?
I think so.
I think had we been able to scale, let's say, five years ago with ZKVMs,
that's probably what we would have done.
But it would not have been sufficient.
We still would have had to go down the path of data availability
to unlock the millions of transactions per second
that we need to welcome all of finance.
And so in some sense, there's a different ordering that would have happened,
but I think we still would have gone down the difficult path of working with SNARKs for execution
and working with data availability sampling for bandwidth.
And you can kind of think of these two tools,
SNARKs and data availability, as being two sides of the scaling coin.
It's basically bandwidth and compute, which are the two primitive resources that a blockchain will consume.
Some people will still be confused by that answer, Justin, because they will say, wait, wait
a second, why couldn't you scale Ethereum in 2018?
And they'll point to you other layer one chains today that are indeed scaling.
I mean, you've got some L1s that are saying they can scale to 10,000 transactions per second
and beyond.
Some claim to be in the million TPS range.
I think Solana is doing, you know, thousands of transactions per second at peak, at least,
and they promised far more.
So was this a skill issue?
Like, why couldn't Ethereum scale?
Yeah, it's a good question.
So a lot of the high-performance L-1s have relatively poor decentralization.
So on the order of 100 validators.
So, you know, Monad, for example, roughly speaking, ballpark, they have 100 validators,
say 100 validators. You know, BNB Chain, even less.
And not only do they have a small number of validators,
but also the barrier to entry to becoming a validator is to have a server in the data center
because you need to store a lot of state,
you need to have a very reliable and high-throughput internet connection,
you need to have lots of RAM, you need to have fast CPUs.
And that's the kind of thing that's more difficult to get at home.
And it's also the strategy that Solana has taken.
So Solana, on average, there's about 1,000 user transactions per second.
And they have less than 1,000 validators.
And if you look at the map of where these validators are,
it's almost entirely in data centers.
And the vast majority of them, more than 50% of them,
live in two data centers that are just a few dozen kilometers apart from each other
in Europe. So it's highly, highly concentrated. And that's not something that we have tolerated
on Ethereum. Can you name the constraint then? So it seems like Ethereum is not tolerating something,
not tolerating validators running inside of data centers. So there is some self-imposed constraint on the design
here, name that. What is that constraint versus, like, why can't we just capitulate and start
running things in data centers like some of the other chains have started to do?
We care about home internet connections and commodity hardware, like a laptop.
And part of the reason has to do with liveness. So, you know, recently we had an AWS outage.
Various chains went offline. That's not what we want
with Ethereum. But we're kind of very paranoid, to the point where we want to have a security
model where even if all data center operators in the world decide to attack Ethereum simultaneously,
it still has uptime. And there's roughly call it 100 data center operators in the world.
So this is not totally far-fetched. And I guess another difference is that we're trying to have
best in class uptime, right?
We've had 100% uptime since Genesis,
and part of that is not cutting corners.
And I think what some of these other chains have done
is they've been willing to cut corners
in order to get higher performance.
Introducing FRAXUSD,
the genius aligned digital dollar from FRAX.
It's secure, stable, and fully backed
by institutional grade real world assets,
custody by BlackRock, super state, and fidelity.
It's always redeemable one-to-one,
transparently audited and built for payments,
defy and banking.
The best of all worlds.
At the core is FraxNet,
an on-chain fintech platform
built to align with emerging U.S. regulatory frameworks
where you can mint, redeem,
and use FRAXUSD with just a few clicks.
Deposit USDC, send a bank wire,
or tokenized treasuries,
and receive programmable digital dollars
straight to your wallet.
FraxNet users benefit from the underlying return
of U.S. treasuries
and earn just by using the system.
Whether you're bridging, minting,
or holding, your FRAX USD works for you.
FRAX isn't just a protocol. It's a digital nation, powered by the FRAX token and governed by its global communities.
Join that community and help shape FRAX nation's future by going to frax.com slash R slash bankless.
FRAX, designed for the future of compliant digital finance. Imagine a world where traditional finance meets the power of blockchain seamlessly.
That's what Mantle is pioneering with blockchain for banking, a revolutionary new category at the intersection of TradFi and Web3.
At the heart is UR, the world's first money app built fully on chain.
It gives you a Swiss IBAN account, blending fiat currencies like the euro, the Swiss franc,
the United States dollar, or the renminbi, with crypto, all in one place.
Enjoy real-world usability and blockchain's trust and programmability.
Transactions post directly to the blockchain, compatible with TradFi rails, and packed with integrated DeFi features.
UR transforms Mantle Network into the ultimate platform for on-chain financial services,
unifying payments, trading, and assets like the MI4, the mETH Protocol, and Function's FBTC,
backed by developer grants, ecosystem incentives, and top-tier exchange
distribution through the UR app, reward stations, and Bybit Launchpool.
For MNT holders, every economic activity in UR drives value back to you, embodying the entire
stack and future growth of this super app ecosystem.
Follow Mantle on X at Mantle underscore official for the latest updates on blockchain for banking.
That's X.com slash mantle underscore official.
Ethereum's layer two universe is exploding with choices.
But if you're looking for the best place to park and move your tokens, make your next stop
Unichain.
First, liquidity.
Unichain hosts the most liquid Uniswap v4 deployment
on any layer two, giving you deeper pools for flagship pairs like ETH/USDC.
More liquidity means better prices, less slippage, and smoother swaps, exactly what traders
crave.
The numbers back it up.
Unichain leads all layer twos in total value locked for Uniswap v4, and it's not just deep.
It's fast and fully transparent.
Purpose-built to be the home base for DeFi and cross-chain liquidity.
When it comes to costs, Unichain is a no-brainer.
Transaction fees come in about 95% cheaper than Ethereum main net,
slashing the price of creating or accessing liquidity.
Want to stay in the loop on Unichain?
Visit unichain.org or follow @unichain on X for all the updates.
When we talk about going from doing a 500X in terms of gas throughput from where we are now
at 2 megagas a second to 1,000 megagas a second.
A 500x is not just something that you can engineer.
The reason why we are doing this today is because Ethereum is going through something
closer to an evolution rather than like an engineering upgrade.
And some of the chains that we just talked about have always been like engineering first.
And that's where some of the performance benefits has come from.
Like Solana has been very engineering heavy.
And they have just produced well-engineered nodes and execution clients.
And then, where is that software best expressed, where is it at its best form? In a data center.
Put the heavily engineered things in a data center.
And that's where a lot of the modern scaling chains of 2020 through 2025 have gotten some
of their throughput.
Now, Ethereum has been patient,
but in order to get that 500X,
it's not really an engineering thing.
It's more of like an evolution.
A new path has opened up with some of the stuff
that you've talked about, Justin,
with the whole ZK era,
where it's not necessarily just engineering,
but it's actually cryptography
that is opening up a path to do something like a 500X.
And so that's kind of,
and that's always kind of been in the back pocket of Ethereum
from day one.
That's always been like the theoretical scaling strategy.
And in recent years, I think you and people in the Ethereum Foundation will be like,
okay, this path is now clear to us and now we are ready to take it.
That's kind of my diagnosis of the last few years.
Is that right?
Yeah, that's right.
Really the key on lock here is just cryptography.
And in terms of the requirements that we have for the cryptography, those are also extremely
high.
So one of the things that we care about, for example, is diversity.
This is complicated cryptography, and we want to have the same kind of diversity that we enjoy today at the consensus layer and execution layer with the consensus teams and the execution teams.
So I'm hoping that we can have five different zkEVMs with uncorrelated failures.
Another strong requirement for the cryptography is called real-time proving.
There's this idea that when a block is produced, the proof for that corresponding block needs to arrive before the next block.
So the latency needs to be under one Ethereum slot, which is under 12 seconds.
And then another requirement that we have beyond the security and the latency is the power draw.
So going back to this comment around the data centers, we don't want the provers to themselves be in data
centers because now you've introduced a new choke point.
And so what we're hoping to have is on-prem proving.
And by on-prem, we mean on-premises in a home, in a garage, in an office.
And the specific number that we have in mind is 10 kilowatts.
So just to give you an order of magnitude, a toaster will consume one kilowatt.
So it's the equivalent of 10 toasters.
And it's also the same as a Tesla home charger that will draw roughly 10 kilowatts.
And so if we can have millions of home chargers around the world, then it's reasonable to have this requirement for the provers.
And one thing that is worth stressing is that unlike consensus, which requires half of the participants to behave honestly,
we only need one honest prover for the whole thing to work out.
So that's why there's very different hardware requirements on the consensus participants.
Here we want to have the lowest barrier to entry as possible, think a Raspberry Pi or a laptop
because it's a 50% honesty assumption.
But for the provers, it's a 1-of-N assumption.
And it's okay to bump it up.
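The two prover requirements Justin names (real-time proving inside one slot, and a home-scale power budget) plus the 1-of-N honesty assumption can be written as simple predicates. This is an illustrative sketch of the constraints as stated on the show, not any actual client logic.

```python
# Constraints as stated in the conversation; illustrative only.
SLOT_SECONDS = 12     # real-time proving: proof must land within one slot
MAX_POWER_KW = 10     # on-prem budget: ~10 toasters, or one Tesla home charger

def is_viable_home_prover(proof_latency_s: float, power_kw: float) -> bool:
    """A prover setup qualifies if it proves a block in real time
    (before the next slot) within a home-scale power draw."""
    return proof_latency_s < SLOT_SECONDS and power_kw <= MAX_POWER_KW

def proving_is_live(honest: list[bool]) -> bool:
    """Unlike consensus (a ~50% honesty assumption), proving is 1-of-N:
    a single honest prover anywhere keeps the system live."""
    return any(honest)
```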
Okay, so we're starting to unpack almost the beast mode layer, the execution layer,
with some of those components.
But I personally am still not ready to get there, actually. So I still have some questions. What you just described is a stack that allows us to still do
the blockchain validation or verification outside of a data center from a home internet connection.
And I still kind of want to know why, or like what use cases are important for that.
You said part of this is about liveness and uptime. And indeed, Ethereum has had
10 years of uninterrupted uptime. And that's fantastic. But there are other properties that
decentralization and uptime kind of imbues. One of those, quite famously, with Bitcoin
and Ethereum, as you've come and argued on bankless, is the property of having your cryptocurrency
be a store of value asset. So Bitcoin is still on cryptography 1.0, it's not doing any of the
SNARKs thing. That's not really in the roadmap, but it has maintained very low throughput,
very low TPS, but also very low node requirements. So you can run a Bitcoin node
from your house. It is not a data center chain similar to Ethereum. But I just kind of want
to know why, because for Bitcoin, they're very clear on why. It's because Bitcoin is a store of
value. It's because it's a digital gold and everybody needs to access it. Now, we've argued on
bank list that at, you know, 10 transactions per second, some of that access will probably take
the form of, you know, MicroStrategy and ETFs, and you won't be able to do things in DeFi that
you can if you're actually scaling your base layer. I don't want to rehash that, but I do want to
ask the question of why? What use cases on the Ethereum L1 are important? Vitalik wrote a blog post
talking about slow defy. Is that one of them? Is the store of value use case? I'll just add one
other dimension. We've had people come on the podcast and say, the Ethereum roadmap is flawed
because they obsess over decentralization. They obsess over having nodes being able to run in
somebody's home. If you remove this obsession, you could scale a lot faster, and they don't
understand the reason for the obsession. So what use cases is Ethereum over-provisioning
itself for? What use cases are most conducive to this decentralization, I'll call it
obsession, constraint that Ethereum has self-imposed?
Is it defy?
Is it store of value?
What is it?
It's store of value.
It's moneyness.
And you can look at it empirically speaking.
You have the number one money, Bitcoin, which is the exact opposite of beast mode, right?
It's a piece of crap, right?
It's like, you have a...
Wait, what's a piece of crap?
Bitcoin the asset?
The blockchain, the chain, sorry.
So you have...
Which even Bitcoiners will say that the Bitcoin blockchain is an encumbrance upon BTC, the asset.
That's actually aligned with Bitcoiner philosophy.
And yet it's a $2 trillion asset.
And then you have something in between Ethereum, which is trying to get some performance
and some robustness and credible neutrality.
And we're a $500 billion asset.
And then you have something that leans entirely on Beast Mode,
like Solana, and they're a $100 billion asset.
And the newer chains that are leaning even more on Beast Mode have lower valuations.
And so I think the moneyness requires this memetic premium.
And the market has just empirically told us that robustness, Fort Mode, uptime, credible neutrality, moving slow,
having these long-term guarantees, are things that are extremely important.
It makes sense to me that store of value would be the primary use case for something like
a Bitcoin or Ethereum, because if you just think about it from a user perspective,
you want to put your value in a place, you can go into a cave for 10 years and come out
and it's still there. That's store of value. You're actually storing value across time, right?
And Bitcoin kind of has gotten this right.
But I think when you're talking about store of value,
it's also bare instruments, you know?
Like, so for instance,
I don't know if I care about USDC on Ethereum as a store of value.
I put that on Ethereum and go into a cave for 10 years and come out.
I don't know what could have happened to USDC.
It could, you know, Jeremy Allaire, whatever.
Laws could change, you know, it could depeg,
something bad could happen to USDC.
So the store of value use case is really like centered around the crypto-native assets
on Ethereum, chiefly, ETH.
Like, ether is the asset primarily for store-of-valueness on top of Ethereum.
So when I think about the tangible use cases, and you say kind of store of value,
the things that require max decentralization, it's probably ether the asset,
and then maybe a handful of kind of DeFi primitives.
That's what I think.
And it's not so much the real world assets except as they act as a trading pair for something
like ether.
That's how I see it.
But this is why I'm curious to like understand how you see it, Justin, and how people within
the Ethereum Foundation see it.
Like what are the apps that are going to be most important on the layer one?
Different people have different opinions within the Ethereum Foundation.
But I would agree with you that.
Ethereum's most important application is being money.
And that's from which all of the applications derive.
If you want to have loans, exchanges, if you want to have prediction markets,
it's all to a large extent predicated on having this strong money.
And this is especially true with these power law distributions and winner-take most situations.
I've tried to argue that a single chain like Ethereum can handle all of finance.
To a large extent, the reason why we have so much fragmentation at L1 is because Ethereum
hasn't grown fast enough to absorb all of the innovation.
But now we have a credible roadmap to just absorb the entirety of it.
And when you look at monetary premium, it's winner-take-most.
You need to somehow convince society that your asset is the most legitimate one.
And if you look at competitors, for example, you look at SOL the asset, that's just been disqualified for being money, in my opinion.
Like the fact that it had 10 outages over a handful of years just disqualifies it immediately.
And so the most important thing is just staying long enough in the game and not to get disqualified.
And now we have these two basically assets that are competing, Bitcoin and Ether.
And I think in a few years, Bitcoin will get disqualified because of its blockchain as well, not because it failed Beast Mode, but because it failed Fort Mode.
It will not be able to secure itself with the dwindling issuance.
A dwindling issuance is kind of the bear case for Bitcoin then?
Yes.
If you look at transaction fees, right now it's about half a percent of all of the revenue
that miners make.
So 99.5 percent comes from issuance.
And we know that that fraction is going to zero, right?
We're halving every four years.
And right now, Bitcoin is secured by three Bitcoin per day of fees.
Three Bitcoin per day is just not enough to secure Bitcoin.
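Justin's security-budget arithmetic can be reproduced from the round figures he quotes (fees at roughly 0.5% of miner revenue, roughly 3 BTC per day of fees, issuance halving every four years); real network figures fluctuate, so treat this as a sketch of his argument rather than current data.

```python
# Round figures as quoted on the show; real Bitcoin fee data fluctuates.
FEES_BTC_PER_DAY = 3        # ~3 BTC/day of transaction fees
FEE_SHARE = 0.005           # fees are ~0.5% of total miner revenue

# If fees are 0.5% of revenue, the other 99.5% is issuance
total_revenue = FEES_BTC_PER_DAY / FEE_SHARE              # 600 BTC/day
issuance_btc_per_day = total_revenue - FEES_BTC_PER_DAY   # 597 BTC/day

def issuance_after_halvings(n: int) -> float:
    """Issuance halves every four years; fees must eventually
    replace it, or the security budget keeps dwindling."""
    return issuance_btc_per_day / (2 ** n)
```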
So we've talked about the dichotomy between Beast Mode and Fort Mode.
And now I do want to kind of like maybe name our biases
just because me, Ryan, Justin,
we all came into crypto, into Ethereum 2017 and earlier.
And that's truly when decentralization, Fort Mode, was the game.
And that's kind of like our generation of crypto.
The newer generations of crypto truly value Beast mode far more than Fort Mode.
I think like anyone that has come into crypto 20, 21 or later
probably has a disproportionately low amount of their portfolio in Bitcoin
versus people that came before 2021.
And something that I want to add: even though we were talking about
how the whole endgame is money,
and Fort Mode is your entry ticket into being inside the competition for money,
nonetheless, user preferences post-2020-ish
have really preferred Beast Mode.
And in transaction volume, transaction fees,
the pendulum has slowly shifted towards chains that go fast,
chains that do Beast Mode.
And so while-
I would add, David, regardless of constraints, right?
No home validator type of constraint.
Yes.
And so I don't want us to like talk down too much about beast mode
because actually that's what Ethereum is trying to do in 2025 and beyond.
It's like we feel like we've covered Fort mode.
And now we can unlock beast mode.
And Beast Mode has a lot of value.
That's where you get global, composable finance all on one chain.
That's where you fit all of humanity, all the finance all on one chain.
That's where you get user adoption and all of the great things that come with a smart contract chain.
And so while I'm in this camp, we're all in this camp that Fort Mode is the cool thing that blockchains really uniquely provide to the world.
Nonetheless, user preferences have been shifting away from Fort Mode and into Beast Mode
as blockchains have become mainstream adopted.
And now Ethereum's strategy is to aggressively penetrate that market with some of the
technology and the strategies that we're going to talk about in the remainder of this episode.
Yes.
So I do agree that Ethereum is trying to chart a new territory for itself with Beast Mode.
But one thing that I want to highlight is that if you have Beast Mode without Fort Mode,
it's very shallow activity.
And one way to actually measure this is to look at the meme coins on Solana.
where a lot of the activity happened.
And we have over 10 million meme coins on Solana.
And the aggregate market cap of all the meme coins on Solana
is less than $10 billion,
which is an absolute drop in the bucket.
So yes, there was a lot of...
Is $10 billion a drop in the bucket?
So, you know, relative to stablecoins, for example,
a single stablecoin on the Ethereum L1, Tether, is over $100 billion.
So that's just one use case 10 times bigger.
You can look at loans on Aave, that's also tens of billions of dollars.
But you can also look at the Ether market cap, that's 50 times bigger.
There's one asset, 50 times bigger than 10 million assets combined.
Maybe we'll talk more about stablecoins later in the episode,
because I do have this outstanding question of to what extent stablecoins actually
need the Beast Mode with the decentralization security guarantees of Ethereum the L1.
But let's reserve that question for later.
And let's take this in the sections that you laid out.
So back to what you were saying earlier in the episode,
when I asked the question, what is lean Ethereum?
You said it's Snarks.
That's the magic cryptography that we've unlocked.
It's Beast Mode, which is scaling Ethereum on the L1 and the L2
from a transactions per second perspective.
And it's Fort Mode, which is defending decentralization,
the lowest barrier to entry possible
for someone to run a node.
So let's take the rest of the episode
and get into each of these sections.
Snarks.
Okay, this is magic cryptography.
Justin, we had you on an episode.
I think this is maybe the first episode
that you did with bankless.
This must have been about four years ago.
And you had this principle
that has stuck with me since,
which is basically that the true way blockchains scale
is with cryptography.
That's the first choice. That's kind of how Bitcoin was able to do what it's doing. And then when cryptography fails,
you go to economics and crypto-economics. But the gold standard is if you can scale with
cryptography in any kind of protocol design or mechanism design, then you do scale with
cryptography. Both Bitcoin and Ethereum were based on cryptography that has been popular for a while.
I don't know, I'll call it the cryptography that had Lindy in the 2010s, right?
That's what Ethereum has been based on so far.
Now enter Snarks.
Give us a history of the cryptography that Ethereum is based on today
and this snarks-based upgrade.
And what is this new cryptography?
Yeah, so since Bitcoin, the cryptography that we've been using is extremely primitive.
And it's two different pieces of cryptography.
The first one is called hash functions.
That's the thing from which you can build blocks and chains.
It's the thing that you use to Merkleize the state.
And basically it's lots of Merkle trees.
And then the other piece of cryptography is called digital signatures or just signatures.
And that allows you to authenticate ownership and sign transactions.
And nowadays, looking back in 2025, this is Stone Age cryptography
relative to what we have today.
And this new primitive, snarks, really unleashes a whole new design space for blockchains.
And in particular, they allow us to solve this dilemma.
Some people call it a trilemma, between scale and decentralization.
We really can solve this age-old trade-off by basically having the validators verify these
short proofs, and behind these proofs having as much execution as the
block builders and the chain can absorb. So if you look at the two basic
resources that we have to scale, the first one is execution, and we have snarks for that. The
other one is data, and here we have data availability sampling.
Now, in addition to wanting to have these two unlocks from a scaling perspective,
we also want to make sure that the cryptography that we have is long-term sustainable.
And what that means in our context is post-quantum secure.
So today, for data availability sampling, we've taken a little bit of a shortcut.
We've deployed cryptography called KZG, which is not post-quantum-secure.
And so we need to have some sort of a plan to upgrade that.
And this is where Snarks also help beyond just scale.
They also help with the post-quantum security at the data layer.
I think back four years ago, you were talking about Snarks.
And the term you used, in fact, I think we titled that episode Moon Math, right?
It is kind of this emerging moon math.
And it's been out for a while just for people who are not cryptographers, okay?
I mean, we don't need to get into the details of what snarks are and what they can do.
I think for a lot of people listening, it's sufficient for them to understand,
oh, this is moon math, and it's been used in practice for a while and it's reasonably safe.
When we say snarks and ZK, because you used the term ZK EVMs earlier,
are ZK and snarks one and the same? Or, like, how come we use ZK sometimes and now you're using
snarks today? Like, people maybe don't understand the differences between these terms.
Sure. So the technical term is Snark. The S stands for succinct. You can think of it as being
small. And then the Nark part, non-interactive argument of knowledge, that's just a fancy mumbo-jumbo for
proof. So snark is nothing more than a small proof. Now, it turns out this technology
snarks also give us for free another property called zero knowledge, which is relevant in
the world of privacy. But we're not using that property for scaling. We're only using...
So how can we call them ZK EVMs? It's so confusing. They're not private. We don't do a very good job
of naming in this industry. Should they be called snark EVMs? Really? They should be called
snark EVMs, yes.
Okay. We won't win that fight today. We're not here to play that game.
It's a lost fight.
How long have Snarks been out there? So all the first generation chains today, all the chains
that we have in production, Bitcoin, Ethereum, that's all been using more primitive
cryptography, as you said, hash functions, digital signatures.
What's this experiment called Zcash? And the Z, I think, stands for zero knowledge or snarks,
right? They use some of that tech. And that's been live since, I don't know, 2014,
something like that. Zcashers, correct me on the dates here. I guess how robust is this technology?
How lindy is it? How in use is it? Are we sure we're ready for snarks for prime time now on a chain
that secures almost a trillion dollars in value? Right. So Zcash was launched, I believe, in 2016,
nine years ago. And when you look back, they were absolute pioneers, but they were also
degens, like cryptographic degens. They deployed the cryptography when it was, you know,
like building rockets, right?
It could explode in their face.
And actually, there was a moment in time where it did explode, right?
I don't know if you remember, but like a few years ago,
Zcash had this critical bug where anyone could mint an arbitrary large amount of ZEC.
Right.
And because it's private, we don't actually know if that happened or not,
if the bug was exploited or not, right?
We don't know for sure.
Exactly.
And so one of the big things that we have to do is solve this security issue. And there's broadly two solutions that are
satisfactory for Ethereum. The first one is to have diversity of snarks. So you have five different
snark providers, and you accept a block as valid if, for example, three of these snarks return
valid, and you can move on, in a very similar way to how we have execution and consensus layer
diversity. The other way forward is what we call formal verification, where we just pick a
single proof system and we audit it to the point where we have high guarantees that there's
literally zero bugs. So it's a little bit like writing a mathematical proof of correctness
of your entire snark implementation. Now, unfortunately, we're a little too early for that
end-to-end formal verification, but we've started the work. So,
Last year, we announced a $20 million formal verification program, which is led by Alex Hicks.
And a lot of progress is being made.
And my expectation is that this decade, maybe in 2029, 2030, we will have an end-to-end formally verified snark, which has zero bugs.
Now, the other thing that I want to mention is that Zcash had an extremely simple use case, which is just transfers.
And what they did is that they wrote so-called custom circuits.
So they were getting their hands very, very dirty with these snarks.
But the modern approach is what are called ZKVMs,
which is basically a way to make use of the power of snarks without being a snark expert.
So a typical developer, like a Rust developer, for example,
can write typical programs and compile them to the ZKVM.
And this is actually one of the requirements in order for the snark technology to be mature enough for the L1.
And the reason is that we want to take the existing EVM implementations and compile them to the ZKVM.
So for example, Revm, which is the EVM implementation within Reth, which is one of the execution layer clients.
We take that, we compile it to the ZKVM.
We can take evmone, which is another implementation, and compile it. There's Ethrex,
there's ZKsync OS. And there's also implementations that are written in Go, for example,
Geth has an EVM implementation, Nethermind has an implementation in C#. And we want to take
as many of these implementations as possible and compile them to ZKVMs. And that is a very recent trend.
It's something that has only really existed for one or two years.
But we feel fine, I guess, relying on snarks as a core technology for Ethereum at the L1 layer. I mean, they're not as mature as hash functions, which have been around since what? The 90s? I don't know, like decades, right? 1970s, 1980s, something like this? Digital signatures as well. I mean, these are very hardened cryptographic primitives. Snarks are what? 15 years old?
So theoretically speaking, something like 30 years old, but in practice, Zcash was one of the first
projects to bring them in production, and that's about 10 years old.
Okay.
But we feel fine about Snarks as a tech stack now.
And in general, are snarks kind of commonly accepted in deep cryptography circles as like,
yep, this works, the math checks out, can't be broken.
Yes, but there's snarks and there's snarks.
So we have all of these requirements.
We have the real-time proving requirement.
We have diversity.
We have the ZKVM aspect.
And we also have the requirement of 10 kilowatts for liveness.
There's an Elon Musk quote that I think is relevant here that I like,
which says the most common error of a smart engineer
is to optimize a thing that should instead be eliminated.
I want you to take that metaphor as to like,
why are we doing this?
So we're talking about the snarks
and the math behind them
and how they work.
Let's actually zoom out
and talk about the why
because this is actually doing the thing
that would make Elon Musk happy.
It's eliminating a whole entire component
which other chains have chosen to optimize.
Can you talk about that a little bit?
Absolutely.
So today when you run a validator,
you're running two separate clients.
You're running a consensus layer client.
So at home, I'm running one called Lighthouse.
And you're also running an execution layer client.
and what I'm running at home is called Geth.
And really what we want to try and be doing
is removing the bottleneck to scalability.
And in this case, it's Geth.
Like, Geth literally on my computer
can't process a gigagas per second,
partly because, you know, the hardware is not adequate,
but also because the software is not adequate.
And what I'm hoping to do at Devconnect in about 25 days
is shut down my Geth node
and completely remove that bottleneck.
And instead of relying on Geth to tell me that blocks are valid,
I will be downloading ZKEVM proofs.
And it doesn't matter how big the blocks are.
From my perspective, the proof is always the same size.
That's the S in snark.
It's succinct.
And yeah, that resonates very much with Elon's quote,
which is that we shouldn't be optimizing Geth,
we should just be removing it completely.
So that brings us to, I think, this whole lean execution thread,
and we can talk about that in more detail.
So we have this new Snarks magic cryptography
that allows us to scale Ethereum in general,
in particular, we'll talk about maybe scaling the L1 here
and allows us to do that in the constrained way
that Ethereum has always operated.
And so something that you're talking about, Justin, is replacing Geth, which is your execution client.
So this is the whole beast mode thing with a ZK EVM client.
So rather than use the old way of doing a validator, the new way, I think, shifts the role of a validator from executing every single block.
right, like every single transaction and every single block
to instead of executing,
verifying that a block has been, I guess, executed correctly?
Can you describe that in more detail?
Because this is the part where we're talking about beast mode,
we're talking about scaling the L1 here,
we're talking about lean execution for Ethereum,
and a core technology here is ZK EVMs
that change what validators are doing
and they're moving from executing everything to just verifying things.
I don't know that I have an intuition for how that works, why that's possible,
and how we can do this while preserving decentralization.
Can you give it to me?
Absolutely.
So the process of verifying a block is extremely intensive.
The first thing that you have to do is download the block.
And that already is a bottleneck, right?
If the blocks are too big, you just literally can't download them
if you're on a home internet connection.
But once you have the block,
what you need to do is you need to load
the most recent version
of the Ethereum state.
And that is on the order of 100 gigabytes.
But of course, if we would dramatically increase
the gas limits, it could be terabytes,
tens of terabytes. So that's a problem.
And then once you've loaded the state,
you need to go execute the transactions.
And for that, you need two resources.
You need a CPU with lots of
cores and you need a lot of RAM.
And in addition to all of this, you need to maintain a mempool and a lot of peer-to-peer
connections.
And you also need to store the history, which also can be hundreds of gigabytes.
So all of this like crazy machinery, we just completely remove it and we just verify
a snark proof.
It's stateless.
You don't need to keep a copy of the state.
It's historyless.
You don't need to keep a copy of the full history.
It's RAM-less in the sense that you don't need gigabytes of RAM.
You might need, you know, 100 kilobytes of RAM.
You don't need many cores.
You just need one core.
And it could be a very weak device.
And actually, the new meme that I'm hoping to introduce is that of a Raspberry Pi Pico.
So the Pico suffix refers to this $8
piece of hardware relative to the normal Raspberry Pi, which is about $100.
And I believe that at least, you know, as a fun experiment, we could have a
validator run on a Raspberry Pi Pico.
And if that's the case, of course, you know, more people will be more familiar with, say,
a smartphone, and you could run it on your smartphone.
It could run on your smart watch, for instance, right?
The Raspberry Pi Pico is even, like, much more constrained than those environments.
And so, of course, it could run on those things, no longer a laptop.
Exactly.
And this brings me to another aspect of Fort Mode, which is from the perspective of the users.
Today, as a user, whenever I'm consuming Ethereum State, I have to do it through an intermediary that is running a full node on my behalf.
And so that might be Infura, that could be MetaMask, it could be Rabby Wallet, whatever.
Like, it could be Etherscan.
And I'm basically trusting these entities to tell
me what the state of Ethereum is. What if instead I could just directly verify the correctness
of the Ethereum state within my browser tab, like on my phone, like within the app, and it costs
nothing. It's instant. Well, now I'm not subject to all of the failures of these intermediaries.
If, for example, Infura goes down, well, I can still make transactions. If Infura or Metamask wants
to censor certain types of applications, maybe OFAC transactions.
Well, now they're no longer in a position to do the censorship because they're not
intermediating as much.
Maybe Etherscan gets hacked and now someone puts forward a malicious front end and tries to
drain a bunch of people.
This is the kind of thing that should be harder to do once users have more sovereignty
over what is the valid state of the chain.
Okay, so this is why snarks, which is the ZK in ZK EVMs as we established, achieve both Beast Mode
because it unlocks a bottleneck, which has been execution, and gets us to the potential of something like an Ethereum layer one with 10,000 transactions per second.
Simultaneously, it achieves fort mode, which is what is fort mode?
That's defense.
This is more people can run nodes from anywhere on the most basic of consumer hardware.
So the reason snarks are so powerful is that they cut both ways: they allow us to achieve scale while not just maintaining the current decentralization of running an Ethereum node, but making it even better, making it such that you can run an Ethereum node on a smartphone or a watch.
Okay, but what we have done in this ZK EVM kind of set up
and sort of the new execution client that you're talking about
and some of these are in development.
We'll talk about what that means today.
But what we have done is something important
is we have moved these validators from executing every transaction
to verifying them.
But somebody's doing the execution, right?
Who's doing the execution now in this world?
And why is it okay to just have validators just verify,
rather than execute and verify as they have been doing.
Are the executors now a centralization vector
in the whole Ethereum blockchain supply chain?
Yeah, great question.
So we do have a new actor, which is called the Prover,
and the Prover is responsible for generating the proofs.
And there's two regimes that we are going to be deploying in production.
The first one is the optional proofs regime,
where we're relying on altruism,
we're relying on various actors
to just generate the proofs for free,
for the network,
and we're relying on individual validators
to opt in to verify those proofs.
Now, this is a great proof of concept,
but eventually what we want is mandatory proofs.
What does that mean?
It means that as a block producer,
meaning as the entity that builds the block and proposes it,
I'm responsible for generating all of the corresponding proofs.
And if the proofs are missing, then that block is invalid.
It's just not going to be included in the canonical chain.
And now we need to look back at the incentives.
We're no longer relying on altruism.
We're actually leaning on the rationality.
And the reason is that the block producer is receiving fees, MEV.
And if they were to miss a block, they would also get a penalty
for missing that block.
And just when you say block producer,
block producer and Prover are synonymous in this world?
So they are not, but they, not necessarily, I should say.
So they could be bundled as one entity and vertically integrated,
but what I expect will happen is that we're going to see a separation of concerns.
So even today, there's a separation of concern between the proposer and the builder.
and what I expect will happen is that the builder
would outsource the proving to dedicated provers.
A little rusty on this, okay?
Prover, builder, sorry, proposer, builder,
Prover, validator.
Okay, run us through the whole supply chain again
of how a block goes into the chain and Ethereum today
and then what this future state is going to look like.
So today you have at every slot
a randomly sampled validator
that is called the proposer
and they will get to
decide what block
goes on chain. That's the thing
if you run in a validator, you're running it at home.
Yes. But there's an important caveat
which is that the proposers
are assumed to be
not sophisticated enough to build
the most economically valuable
blocks possible. And so instead
they will delegate to
more sophisticated builders that will do
that on their behalf.
And that is called PBS, proposer-builder separation.
And we have something called MEV-Boost that helps us with this separation.
What I expect will happen going forward is that we would tack on yet another role called
the Prover.
And the builders would go outsource the proving jobs to these sophisticated Provers.
Now, it turns out that the builder landscape is fairly centralized.
And so it's reasonable potentially to expect that actually these two roles in practice
will be bundled for at least a large subset of the blocks.
Why is that okay?
Why is it okay that builders and potentially provers in the future are centralized?
But we're taking all of this pain to make sure that validators can run on a smart watch.
So there's a few answers here.
The first one has to do with the honesty assumption.
So in order for consensus to run smoothly,
you need 50% of the consensus participants to behave honestly.
And this is an extremely high bar.
It's very, very difficult to achieve.
So today, we have 10,000 consensus participants,
10,000 validator operators.
And having 5,000 of them
behave honestly is a tall order, but we have achieved it.
By the way, 10,000 validator operators, these are independent validator operators,
because some people see numbers like into the millions of validators,
but you're saying 10,000 and they don't understand why you're saying 10,000,
when they see numbers like a million validators on Ethereum.
Yeah, let me explain that.
So for many years, there was this constraint that an individual logical validator was 32 ETH,
and we have indeed a million of these 32-ETH entities,
but what often happens is that there's a single operator
that controls multiple of those validators.
Now, recently, we've had this upgrade called MaxEB,
where we increased the maximum effective balance to 2,048 ETH.
And so what we're starting to see is actually consolidation of the validators.
If a single operator controls multiple validators,
they can now squish them together into bigger and fatter validators.
And actually, if you are an operator and you're listening to this podcast,
I do encourage you to consolidate because it's good for you and it's also good for the network.
But if you look at the individual operators, there aren't a million, there's something closer to 10,000.
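The consolidation arithmetic above can be made concrete with round numbers. The million-validator count and 2,048 ETH MaxEB cap are from the conversation; this is an illustrative bound, not live chain data.

```python
# Back-of-the-envelope MaxEB consolidation arithmetic.
TOTAL_VALIDATORS = 1_000_000   # logical 32-ETH validators today
OLD_BALANCE_ETH = 32           # per-validator balance pre-consolidation
NEW_MAX_BALANCE_ETH = 2048     # maximum effective balance under MaxEB

total_stake = TOTAL_VALIDATORS * OLD_BALANCE_ETH          # 32,000,000 ETH
# If every operator consolidated fully, the logical validator count
# shrinks to this lower bound:
fully_consolidated = total_stake // NEW_MAX_BALANCE_ETH
print(fully_consolidated)  # 15625
```

So full consolidation would shrink a million 32-ETH validators to roughly 16 thousand fat ones, the same order of magnitude as the ~10,000 independent operators Justin cites.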
I've seen estimates like 10,000, some say as high as 15,000.
This is in the same realm as the next most decentralized network in crypto,
as Bitcoin. I've seen estimates for Bitcoin around 15 to 25,000 nodes, something like that. And that's
the thing we're keeping decentralized. Anyway, I just wanted to clarify that number, but continue
with the flow, if you will, where you were, where, you know, I guess the question was,
why is it okay that block builders and provers in the future are very few, are these centralized
entities? One reason has to do with the fact that it's an honest-minority assumption on the proving
side of things. You only need one single
prover to be available
for the builders in order for
the chain to keep on going.
And we've taken this one of N
extremely seriously. So N
is at least 100 because there's
at least 100 data center operators that you can
go pick from. But even that
we're not satisfied with. We want N to be
orders of magnitude larger. And this is
why we're going with the on-prem
proving where
crazy people like me could
run a prover from home. And this is something that I intend to do. So that means all we need is
Justin or some crazy person like Justin and everything's fine. Nothing is corrupt on the chain.
No invalid block gets through to the other side. It's a fallback for liveness. So what would happen
in the worst case, if we were to rely on data centers, is that we'd be running at one gigagas per
second, and then from one day to the next, the governments say, hey, data centers,
please stop the proving process, and we would fall back to something much lower, let's say,
100 megagas.
And the reason we would fall back to 100 megagas is because that's the most that could be
done outside of the cloud.
And that would be very bad in terms of, you know, providing guarantees to the world because,
you know, we want to have this guaranteed throughput.
We want to be up only.
We don't want to be degrading the liveness of the chain.
And so we only want to be in a position where what we deploy in production is something that we can guarantee even in a World War III type situation, which is a very tall order.
But it's something that the technology is able to deliver thanks to recent innovations.
Which, of course, that liveness guarantee is very important for the store of value use case, right?
even if transaction throughput drops
in these extreme edge case scenarios,
store of value is still alive
because you can go access your value on chain
and do something with it?
Let's talk about Provers a little bit more.
So you said you might have the ability
to run Provers at home.
That's good,
but you also expect the Prover functionality
to be more centralized
in, I guess, the majority of cases.
As I understand it, like Provers,
that's like a much larger hardware profile, right?
And it's some specialized hardware
because they're crunching some numbers
or they're doing some moon math.
Anyway, you're saying yes but no.
Yeah, where am I wrong there?
What does this actually look like to be a prover?
Yeah, so it is unusual hardware
in the sense that most people won't have it.
But it is made out of commodity hardware,
specifically gaming GPUs.
So 16 gaming GPUs, for example, the 5090 that came out recently,
that is enough to prove all of Ethereum in real time.
And I intend to basically build a little GPU rig at home.
And a bunch of AI enthusiasts are doing that because it's the same hardware that you need for AI.
Now, in addition to liveness, which is one of the questions that I get asked a lot around decentralization of provers,
the other very important consideration is censorship resistance, especially when we
will be increasing the gas limit.
Because the way that we enforce censorship resistance today,
assuming that all of the builders and the whole MEV pipeline are censoring,
is by relying on a small altruistic minority of validators
that are willing to force-include transactions from the mempool
without going through the builders.
And we have this proposal called FOCIL,
which basically
increases by roughly 100x
the total throughput of this forced inclusion.
Today we have about 10%
of the validators
that are doing this altruistically,
but with FOCIL, we will have
16 includers at every single slot.
So it's all the slots
as opposed to just 10% of the slots.
And within a slot,
there would be 16 includers as opposed to just one.
So in some sense,
it's 160 times
more opportunities for forced inclusion.
And that is something that is important to do as a prerequisite
before growing to very high gas limits.
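The forced-inclusion arithmetic just stated can be written out directly, using the figures from the conversation (10% of slots with one altruistic includer today; 16 includers in every slot under the proposal):

```python
# Forced-inclusion opportunities per slot, before and after FOCIL,
# using the round numbers quoted in the episode.
before = 0.10 * 1    # ~10% of slots have one altruistic includer
after = 1.0 * 16     # every slot has 16 includers
print(after / before)  # 160.0
```

Hence "160 times more opportunities for forced inclusion", a stronger bound than the "roughly 100x" first quoted.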
That means that the builders, the provers,
none of these more centralized components,
we'll call them centralized in air quotes,
none of these entities can actually censor anything on chain.
So you preserve,
and actually strengthen post-FOCIL,
the censorship resistance guarantees of Ethereum.
I believe FOCIL is maybe slated for next year at some point in time. I know this is all squishy,
but FOCIL is probably going to happen earlier than some of the other things that we'll be talking
about today. Okay, so ZK EVMs, you take something like the execution client, something like
Geth, and there are many different execution clients. You said you're running Geth today, and you get a ZK
EVM version of an Ethereum execution client. Maybe the best way to kind of fit these pieces together
where the execution client, you know, turns into a verifier from executing every single block,
is to talk about your at-home setup and what you're planning for Devconnect.
Okay. So as I understand it, there are many different ZKEVM clients that are in development.
I presume you're going to maybe select one of those.
And then it sounds like you're also going the additional step of maybe running
your own provers at home.
So tell us what's the Justin Drake experiment
that's going to happen by Devconnect?
And then maybe we'll fit this into the broader roadmap.
But what are you doing?
Right. So the prover is going to have to wait for Christmas.
I'm thinking of a Christmas present,
which is a cluster of 16 GPUs.
But in the shorter term, in November for Devconnect,
I'm hoping to run my validator by, as you said,
downloading ZK EVM proofs, but it wouldn't be just a single one.
It would actually be as many as I can get my hands on.
And the number that I have in mind is five.
Five ZKEVM clients.
Proofs, yes.
So there's these five different proof systems.
And at every slot, there would be five corresponding proofs, one proof per
client.
So there would be a total of five of them.
And these are proofs that are very short
and very fast to verify.
They take, for example, 10 milliseconds to verify.
So if you have five of them, it just takes 50 milliseconds.
It's not a big deal.
So I would download all of these.
And if three of them agree, then that's my source of truth.
And the reason why I'm downloading more than one is because some of them might be buggy
in the sense that it's possible to craft a proof for an invalid block.
So that's why we want multiple of them to agree.
And some of them might have what are called completeness bugs or crash faults.
So there are some circumstances where the proof system just can't generate a proof
because there's some sort of a bug in the system.
And so that's why I wouldn't require all five proofs to agree.
It's okay if two of them just never appear.
I would still be able to move on.
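The acceptance rule Justin describes, tolerating both soundness bugs (an invalid proof verifying) and completeness bugs (a proof never appearing), can be sketched as a small quorum check. The `verify` callback is a hypothetical stand-in for a real ZK-EVM proof verifier.

```python
# Minimal sketch of the 3-of-5 proof-quorum rule described above.
from typing import Callable, Optional

def accept_block(proofs: list[Optional[bytes]],
                 verify: Callable[[bytes], bool],
                 threshold: int = 3) -> bool:
    """Accept the block if at least `threshold` proofs verify.

    Missing proofs (None) model completeness bugs / crash faults;
    requiring a quorum guards against a soundness bug in any one system.
    """
    valid = sum(1 for p in proofs if p is not None and verify(p))
    return valid >= threshold

# Toy usage: three systems delivered valid proofs, two never appeared.
always_ok = lambda p: True
print(accept_block([b"a", b"b", b"c", None, None], always_ok))  # True
print(accept_block([b"a", None, None, None, None], always_ok))  # False
```

At roughly 10 ms per proof verification, checking all five costs around 50 ms per slot, which is why this internal diversity is cheap enough to run on a single weak validator.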
And so it's a new way of thinking about client diversity, because today, the way client diversity works is that
it's across validators. So you look at the whole population of validators and say, okay, 20% are
running client A, 20% are running client B, et cetera. Whereas here, the diversity is internal
to a single validator. And that's possible to do precisely because we have SNARKs, because
it's so cheap to be running multiple ZKEVMs. And that's one of the
reasons why I actually believe that ZKVMs are going to give us a step up in security
relative to what we have today.
So reason number one is that we have internal client diversity as opposed to external
across the validators.
Two, the barrier to entry to be a validator is going to be lower.
So we're going to have more decentralization, more security.
Another point is that we're going to be deleting tens of thousands of lines of code.
So today, you know, I'm running this execution layer client.
And all that I really need is the core of the client, which is the EVM implementation.
That is the logic.
All of the stuff around it managing the mempool, the history and the state and the peer-to-peer networking,
a lot of that code can just be deleted from my perspective as a validator operator.
And there's also something called the Engine API, which is a bit of a technical thing,
but it's basically the communication bus
between the consensus layer
and the execution layer.
Historically, there's been a bunch of bugs
in that interface.
And that's completely going away
because, again, I wouldn't be running
an execution layer client.
So we're getting to this point of minimalism.
And actually, that feeds a little bit
into the lean Ethereum meme
where we're trying to be as minimal
and cut all of the fat
so that we stay lean.
Okay, just so I understand
what you're actually running here
and how this fits with some other things
I've seen within Ethereum.
So you said you're planning to run
three different proving systems,
ZK EVM proving systems.
So right now I understand the execution layer,
again, the thing that we want to get to Beast mode
as being a client like Geth.
Let's say that's what you're running today.
And then the ZK EVM version of this
is this like Reth,
which is another Ethereum client,
plus one of these three,
proving systems, ZKEVM proving systems that I'm showing on screen.
And for those not looking at this, this is a website called ethproofs.org.
And on ethproofs.org, you can see the progress of different ZKVM proving systems.
Fit this together to me. Are you running Reth plus one of, like three of these, let's say,
proving systems? Or do these proving systems kind of replace Reth? Like, what exactly are we
talking about here? Yeah, great question. So what we want to do is preserve the diversity of EVM
implementations, also known as execution layer clients. So we want to have Reth, but we want to have
various other EVM implementations. And in terms of the clients that are, you know, the most ready,
we have one called Ethrex, which is a newer Rust client by LambdaClass. We have one called evmone,
and we have one called ZKsync OS,
which has been implemented by Matter Labs.
And each one of these four
can be combined with a proof system.
So what I might do, for example,
is run Zisk with evmone.
I might run Pico with Reth.
I might run OpenVM with Ethrex.
And what I want to try and do
is basically have as much diversity as possible,
both the execution layer
EVM implementations
and the ZKVMs.
I got it.
Okay, so it's just a side question.
So the reason we have all of these different
execution clients,
and some of those I hadn't even heard of before,
you know, Geth is maybe one that many people have heard of.
Reth is like a Rust implementation of that
from paradigm, so they've, you know,
they've engineered the heck out of it.
Is the reason we have all of these different client
implementations because Ethereum has like a hardened spec?
Most other networks don't have more than one client implementation.
I'm wondering why Ethereum does.
Ethereum is the only chain that has a spec, the only chain with a meaningful level of adoption
that also has a spec.
Yeah.
So what most chains will do is that they'll have a reference implementation without spelling
out the spec.
And so if you want to recreate a second client,
you have to look at the implementation and try and copy it bug for bug, if you will.
And it's an extremely difficult process.
And it's part of the reason why Firedancer in the Solana ecosystem hasn't yet shipped,
despite it being three, four years that they've been working on it.
Solana just doesn't have a spec.
And it's a similar situation with Bitcoin.
Having a spec is nice, but it's not sufficient.
we also need to have a culture that encourages diversity
and ultimately recognizes the value that comes with it.
And the value to a very large extent is uptime.
Like historically speaking, we've had many individual clients fail
and they're replaced within a few hours, within a few days.
And while they're being replaced,
the other clients effectively act as fallback.
And then another reason why diversity is important
is because it provides diversity at the
governance layer. So All Core Devs plays an important role in Ethereum upgrades, and the fact that
no single client team has, you know, undue weight is a very healthy thing to have. And then the final
reason why diversity, in my opinion, is extremely important is because it allows us to have
many different devs, hundreds of devs, all simultaneously understand the guts of Ethereum.
I think bankless is very famous for its quote that the most bullish thing for Ethereum is to be understood.
And I think when you say that, it's from the perspective of the user, of the investor, of the application developer.
But I think it's also very much true from the perspective of the client devs.
Yeah.
And it does propel Ethereum on this course of anyone can build a client in the world because they can read the spec.
They can build the client.
They have the dev chops.
And so all of these clients are sort of competing.
with one another too in terms of innovation, adding new features. And that's a beautiful thing.
Okay, so we have these, maybe these upgraded ZK EVM ready clients, the execution clients,
the Geths of the world and such, even though Geth is maybe not ready for that. You named some others.
And then we have this, what is going on on EthProofs? Because this is something separate,
I think, right? Or is it separate? We have a whole like competition here to get real time proving
down below 12 seconds, I believe.
So what's happening on EthProofs?
Why is this important?
And how does this fit in your home setup?
Yeah.
So on EthProofs, most of the focus is on the ZKVMs.
And we allow them to pick their favorite EVM implementation.
And the vast majority of these ZKVMs actually use Reth as their EVM,
because that is the one that's most appropriate.
I got it.
with one exception, which is AirBender from ZKsync, which is using ZKsync OS, which is their own implementation
of the EVM.
Now, for the demo, I'm actually going to be downloading proofs from EthProofs, and I'm not
going to be too picky on the EVM implementation.
It's mostly a proof of concept on the ZKVM side of things.
But eventually, when we have the mandatory proofs, we're going to need the Ethereum community to come
to consensus on a canonical list of ZKVMs and the
corresponding pairings with the EVM implementations.
And one of the things that you said, Ryan, is that when we have diversity, we have an
opportunity for competition.
And I think this is a very healthy aspect here, which is that we would more likely than
not be picking the five fastest EVM implementations that are most snark friendly so that we can
still have this property called real time proving.
And, you know, Geth historically has been the leader.
They were literally a monopoly at Genesis.
That was the only option available.
And, you know, they've had this reign for the last 10 years.
And I think the fact that there's this competition is a breath of fresh air and should lead to lots of innovations.
This competition specifically, perhaps people, our listeners have seen headlines.
If you're in deep crypto, you know, in Ethereum, you probably have.
of some of these teams
achieving some sort of milestone
and I think they call it
like proving the EVM
under 12 seconds
and this seems to keep getting faster and faster
I think Succinct was a major team
to do this at first and they're like
we got under 12 seconds
and now there are other teams
I saw a team a couple weeks ago called Brevis
and now they've reached new milestones
here. What is this race
to prove the EVM
at a certain speed and why is this important and like are we there yet?
Yeah, so the reason why it's important is because it unlocks the hope for the gigagas
frontier. So like it's literally providing, more likely than not, like trillions of dollars
of value creation for the world because we're going to unlock the gas limit.
And from the perspective of the ZKVM teams,
it's a way to prove that technology and also have a shot at being part of this canonical list of, for example, five ZKVMs that would be baked into every single validator and attester on Ethereum.
And actually, every fully verifying node would have these five ZKVMs baked in.
And so, you know, right now, I maintain this tracker and list of ZKVMs.
There's about 35 of them that try and cater for various use cases.
But out of the 35, there's a big competition.
And now we've narrowed it down to about 10 that are candidates for being selected as canonical for the L1.
And why is it important the speed under 12 seconds?
And how is that improving so rapidly?
So the way that Ethereum works is that you have a block that's produced,
and then within the rest of the slot,
the attestors that are voting for the tip of the chain
need to know that the block is valid.
And so in order to keep this property
that the validators are voting on the top of the chain,
they need to receive the proof of validity
before the next block arrives.
And the next block arrives within one slot,
which is 12 seconds. In
practice, they actually need to provide the proof faster than 12 seconds. It's 12 seconds minus a small
delta because there's all of the propagation time to propagate the proof. And so the number
that we have in mind is actually 10 seconds. So that is the goal. And we want basically all
economically relevant blocks to be provable within 10 seconds. So there is this notion of a
prover killer, which is an artificially built block that takes a long time to prove, more than 10
seconds. But what would happen with the mandatory proofs is that it wouldn't be rational for the
block builders to generate these prover killers, because they would be shooting themselves in the
foot. They would be DoSing themselves, because they wouldn't be able to generate the proof.
That would lead to a missed slot. They would lose the fees and the MEV, and they would also get penalized.
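The timing budget here is simple arithmetic; the 2-second propagation buffer below is an assumed figure standing in for the "small delta" mentioned, not a protocol constant:

```python
SLOT_SECONDS = 12                # one Ethereum slot
PROPAGATION_BUFFER_SECONDS = 2   # assumed gossip time for the proof ("small delta")

# The proof must reach attesters before they vote on the next block,
# so the effective proving budget is the slot minus propagation time.
proving_budget = SLOT_SECONDS - PROPAGATION_BUFFER_SECONDS
print(proving_budget)  # 10, the real-time proving target in seconds
```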
I see. So it's a defensive mechanism.
Can we talk about how we get from point A to point B here?
Point A being where we are currently in Ethereum with no blocks being verified to where we want to be in Ethereum where this is like the dominant equilibrium or like almost all of the blocks are being verified.
And we have successfully initialized Ethereum with this ZK proving technology.
Historically, as Ethereum has made hard forks, that's when we've done the big upgrades to Ethereum.
We hard forked into proof of stake.
we hard forked into 4844,
all of the big upgrades to Ethereum
have come in this very step function.
Like we just hard forked the upgrade in.
That is, as I understand it,
not how this is going to happen.
This is going to be different.
Maybe you can talk about how we get
from point A to point B,
which is like the integration
of all of the ZK machinery
that we've talked about into the chain.
How does it actually happen?
Absolutely.
So the rough roadmap that I have
is a four-step roadmap.
So phase zero involves
a very small subset of the validators, think 1%,
opting in to verifying proofs that are altruistically generated.
For example, generated in the context of EthProofs,
which is to a large extent just marketing budget
from a lot of these ZKVM teams.
One of the downsides of phase zero
is that me as a validator opting in
will be losing the timeliness rewards.
So there's a special reward
in Ethereum called the timeliness reward, which is given to those who attest immediately to a block.
And I will be losing that because I'll be attesting a few seconds late. And so this brings us to
phase one where we have delayed proving or delayed attesting or it's also called delayed execution,
where basically instead of having to attest immediately when the block arrives, you have more time.
You know, think of a whole slot basically to attest. So even
if it takes a few seconds for you to attest, it's all good.
You'll be getting this timeliness reward.
And at that point, I expect the number of validators to opt in to go from roughly 1%
to something closer to 10%.
Why 10%?
That's because it's when it starts to become incentivized.
It's incentive compatible, exactly.
Yes.
And it's actually, you know, you have an incentive to do it because now you don't need to
buy a new hard drive, you know, when the state grows too big, and you don't
need to upgrade your computer if it dies.
Or, you know, I could just sell my MacBook that I'm using to validate and just buy a
Raspberry Pi instead, for example.
In any case, what I expect to happen is that the weakest validators, those running from
home, would opt in to this mechanism.
And the much more sophisticated validators, think the Coinbases, the Binances, the Lidos,
they would keep running the usual way.
And they'd opt in because it's a lower hardware footprint.
Yeah, exactly.
And from that point onwards, we can start increasing the gas limit, right?
Because now we have two types of nodes.
We have those that are verifying the proofs.
We can increase the gas limit for them, no problem.
And then we have the sophisticated operators that are running on more powerful hardware
than just a laptop.
And for them, there's just a bunch of buffer to increase the gas limit.
So already in phase one, there's an opportunity to be more aggressive
with the gas limit.
And then phase two is where a lot of the magic really happens,
which is the mandatory proofs,
where we require the block producer to generate the proofs,
and everyone is expected to be running on ZK EVMs.
Is that a hard fork?
Yes, but it's a hard fork that only changes the fork choice rule.
So it's a very minimal hard fork.
It's just one that says that when you attest,
you should only attest after verifying that the proofs exist and are valid.
So it's not a difficult hard fork.
It's actually a fairly simple one.
And then there's phase three, which is the final one.
But here you need to project yourself maybe five years into the future,
which is what we call enshrined proofs,
where instead of having a diversity of five ZKVMs,
we just pick one and we formally verify it end to end.
So we have high conviction that there's literally zero bugs in that enshrine verifier.
And that unlocks all sorts of, it simplifies the design, first of all.
But it unlocks things like native validiums, which is, I guess, maybe a topic for a different day.
Okay. So five years, and that's after five years of just like battle testing of the technology.
because I think we kind of more or less expect bugs along the way during these phases.
And we just have to play whackamol for a while, five years,
before we feel that it's sufficiently battle tested to actually make it a formal part of the Ethereum layer one
to truly unlock all of the magic that the snarks give us.
Exactly.
We're assuming that every single individual ZKVM is broken,
but in aggregate as a group, it's secure.
and this phase two, where we have mandatory proofs,
you can think of it as being semi-enshrined,
where we have in some sense an enshrined list of multiple ZKVMs,
but there isn't the one that we're putting all our eggs in the basket.
So the theorized way that this works is that the weakest nodes,
the slowest nodes, the individuals, you know,
verifying Ethereum via Starlink in their camper van in some
park somewhere off grid.
These people, the slowest nodes of the whole entire group, are the ones that upgrade to
the system first, and they go from the slowest to the fastest.
They kind of like leapfrog everyone.
And as the technology gets more robust, more ready, more hardened, more efficient,
it starts to march upwards up the chain of the next slowest node, the next slowest node
until we're in kind of like the median node.
and then what starts to be left of the old legacy execution client
Ethereum nodes are the data center nodes,
the Coinbase nodes, the Kraken nodes, the Binance nodes,
the people with heavy, heavy infrastructure with a very large footprint,
the part of the node distribution of Ethereum
that just happens to be in the data center.
And they're like kind of the last to go because they have the most buffer,
most bandwidth.
And then at some point in time, they'll flip too, because we actually just fork it
into the Ethereum protocol.
That's kind of the plan.
Exactly right.
Can we talk about this and how this meshes with the idea?
There's a blog post not too long ago from Dankrad who talked about the idea of a 3x
increase for Ethereum in terms of gas limit every single year.
And I want to show maybe a slide.
I don't know where this came from.
Actually, this looks like some Justin Drake handiwork.
I bet it's from one of your presentations, which kind of.
goes through this. And so right now, after, I believe, we did two gas limit increases for Ethereum
this year, or was it just one? We've done two. We went from 30 to 36 and 36 to 45.
That's right. Okay. 36 to 45. Okay. And the idea behind Dankrad's post, I believe, was some sort of
kind of social commitment, roadmap stacking hands for the Ethereum community to attempt to
scale Ethereum 3x in terms of transactions per second and gas limit every single year.
Okay?
And so if we were on track for 2025, by the end of this year, we would be at 100 megagas.
It looks like we're going to be maybe you said 45 or maybe we get to 60 or something like
that.
Yeah.
So with Fusaka, which is coming in December, we'll be able to increase the gas limit.
I'm told that 60 is safe and maybe we can get a little bit more 80, maybe 100, I don't know.
But yeah, when I did those slides, which was a few months ago, Tomasz was trying to set within the Ethereum Foundation the goal of getting to a 100-megagas gas limit by the end of this year and trying to keep this 3x pace that Dankrad originally suggested in his EIP-7938.
Now, 3x per year, I think, is kind of a sweet spot between, you know, doable and ambitious.
So it's significantly faster than Moore's law, but it's not completely impossible.
And, you know, Dankrad's proposal was to have this 3x per year over a period of four years.
And importantly, it's something that would happen automatically.
So today, the way that we do gas limit increases is extremely laborious.
What we need is the individual operators and the consensus layer clients to set new defaults
or for the operators to change the default configuration in order for the gas limit to go up.
So it's just at the social layer, extremely expensive and requires a lot of coordination.
What Dankrad suggested instead is that at every single block, the gas limit increases a tiny, tiny bit, just 1 or 2 gas.
I see.
So that once we've gone through the social coordination of doing it once, it just happens automatically.
And my specific suggestion is to increase the four years to six years, because after six years of compounding 3x per year, you get the 500x that we need to get to 1 gigagas per second.
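As a back-of-envelope sketch of that schedule (illustrative only; the exact constants live in EIP-7938 and differ from these figures): compounding 3x per year across every block means each block nudges the limit up by a tiny amount, and six years of compounding comfortably covers the ~500x target.

```python
BLOCKS_PER_YEAR = 365 * 24 * 3600 // 12  # ~2.63M twelve-second slots per year
GROWTH_PER_YEAR = 3.0

# Per-block multiplicative factor that compounds to 3x over one year.
per_block_factor = GROWTH_PER_YEAR ** (1 / BLOCKS_PER_YEAR)

# Starting from a 45M gas limit, the very first per-block bump is tiny.
start_limit = 45_000_000
first_increment = start_limit * (per_block_factor - 1)
print(round(first_increment))  # 19 gas, on this illustrative schedule

# Six years of 3x-per-year compounding exceeds the ~500x needed for gigagas.
print(GROWTH_PER_YEAR ** 6)  # 729.0
```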
Okay, and so let's talk about that a little bit more and mesh that with kind of the lean Ethereum idea.
So the reason we've been reticent to hit the accelerator on gas limit and throughput has been it will start to increase the requirements for validators and kick our home validators off the network and drive Ethereum more into data centers.
And that's not where we want to be.
Now, I guess the rescue or the landing pad of a lean Ethereum is as we increase gas limit, you know, maybe 3X every year, the home validating nodes, the non-data center, you know, nodes in Ethereum, they can then migrate to a ZK EVM and run that on a Raspberry Pi or smartphone or very cheap hardware.
So prior to having a ZK EVM solution, those validators would just be gone forever, basically.
And we'd become more centralized, fewer validators, more data centery.
But because they have a ZK EVM, as that tide rises, they can be among the first to hop to the frontier of a ZK EVM.
So this has opened up the playing field to allow Ethereum to consider increasing the gas limit on a more regular
basis and maybe up to 3x every year. Is that roughly the story? Yeah, that's it. Okay. And then one other
question I have in the weeds here, there's gas limit and then there's throughput in these two sides.
The thing that we're increasing is gas limit. Is that correct? And our gas limit right now is
different than the mega gas that we're actually doing. You said we're at two megagas per second,
I think earlier in the episode, but then we have a gas limit of what, 45?
Yeah, so let me explain the math.
There's like two complications.
The first one is that we have 12 second slots,
so it's 45 million divided by 12.
And then there's another complication,
which is with EIP-1559, we have a target and a limit
where the target is twice as low.
So you have to divide by another 2x.
So if you take 45, you divide it by 12,
and then you divide by 2,
that's how you get your two megagas per second.
It's a little bit unfortunate because, you know, in some sense, the gas limit is artificial
because it depends on the slot duration.
And we do intend to reduce the slot duration, for example, from 12 seconds to six seconds.
So my preferred mental model is to think in terms of gas per second,
which is quite close to the TPS concept as well.
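That arithmetic, written out with the figures quoted (a 45M gas limit, 12-second slots, and the EIP-1559 target at half the limit):

```python
GAS_LIMIT = 45_000_000  # current mainnet gas limit
SLOT_SECONDS = 12       # current slot duration
TARGET_RATIO = 2        # EIP-1559 targets half the limit

gas_per_second = GAS_LIMIT / SLOT_SECONDS / TARGET_RATIO
print(gas_per_second)  # 1875000.0, i.e. roughly 2 megagas per second

# Gigagas is 1e9 gas per second, so the gap to target is about 500x.
print(round(1e9 / gas_per_second))  # 533
```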
Those phases, 0, 1, 2, final phase 3.
You said, you know, getting to 3 might take 5 years.
Do you have a timeline idea?
I guess 0 technically kicks off, you know, with you,
maybe among the first running this hardware in about a month or so next month.
What about the timeline for the rest of this?
Yeah, so 2025 for phase 0 and then one year for every other phase.
So phase 1 in 2026, phase 2 in 2027,
and phase 3
in 2028, for example.
I think that's a reasonable time.
Okay.
ZK EVMs allow us to increase block size,
allow us to scale throughput.
Real-time proving is something we've talked about.
We're under 12 seconds.
Block times on Ethereum are 12 seconds right now.
Is part of the beast mode to get that down to 6 and below 6?
And how far can we push that?
And how does that fit into kind of ZK EVMs?
Do we basically have to wait
until ZK EVM provers are fast enough to get us safely under six seconds,
and then we can drop the block production time to something like six?
Like what are the puts and takes of the constraints there?
Yeah.
So it turns out that the proposal to reduce the slot duration is somewhat in competition
with the ZKVMs, because
we're going to have overall less latency to do the proving, and it's
going to make things harder.
But I still think even if we were to reduce the slot duration to six seconds,
we'll be able to get there, no problem.
It would just delay things by a number of months, maybe six months.
And so it's a decision that the community has to make.
Like, do we want to reduce the slot duration at the expense of delaying ZKEVMs by six months?
I guess it's above my pay grade to make that particular decision.
But if you're willing to project yourself multiple years in the future, for example, 2029,
I'm hoping that we can go beyond the six seconds.
In the beam chain talk less than a year ago, I was trying to advertise four seconds.
And recently we had this workshop in Cambridge with a bunch of researchers.
And we actually came up with a new idea that could unlock even faster slots,
potentially two second slots.
So I don't want to overpromise this,
but I do think we'll be able to go under six seconds
in the context of lean consensus,
which is a rebranding of the beam chain.
Okay, so I guess in both cases,
whether we're increasing gas limits
and making the blocks bigger,
having them house more transactions,
or whether we're decreasing slot times,
that's all toward the same goal
of getting towards gigagas, right?
both of those kinds of numbers increase our gigagas.
Is that wrong?
No, no, no.
So reducing the slot duration doesn't change the throughput.
So if we were to reduce, if we were to, for example, go from 12 seconds to six second slots,
we would correspondingly reduce the gas limit by 2x.
So the total throughput.
Okay, it's the reverse.
Yes, of course.
Okay.
Yeah.
That's why these two things are at odds.
I mean, on paper, reducing the slot duration is actually
neutral because at the time of the fork, the slot duration is reduced by a factor of two,
the gas limit is reduced by a factor of two, and these cancel out. But in terms of the
engineering to get real-time proving, yes, they are a little bit at odds, because every second of
prover time that we have is actually very valuable. And it just means that the ZKVM teams will just
have to work that much harder to squeeze things down. Crypto is risky. Your sleep shouldn't be.
Eight sleep's mission is simple. Better sleep through
cutting edge technology. Their new Pod 5 is a smart mattress cover that fits on the top of
your bed. It automatically adjusts the temperature on each side so you and your partner can both sleep
the way that you like. It's clinically proven to give you up to one extra hour of quality
sleep per night. Eight Sleep's Pod 5 uses AI to learn your sleep patterns, regulate temperature,
reduce snoring, and track key health metrics like HRV and breathing. With a new full body temperature
regulating blanket and built-in speaker, it's the most complete sleep upgrade yet. Upgrade your sleep
and recovery with Eight Sleep.
Use code bankless at 8Sleep.com slash bankless to get up to $700 off the Pod5 Ultra during their
holiday sale.
That's 8Sleep.com slash bankless.
You also get 30 days to try it risk free.
Link in the show notes for more information.
Bit Digital, ticker BTBT, is a publicly traded ETH treasury company that combines the two
biggest metas of our time, Ethereum and AI compute.
BitDigital believes that ETH will power finance and AI compute will power everything.
Bit Digital gives you direct exposure to both.
Bit Digital holds more than
150,000 ETH with institutional grade staking and validator operations.
On top of that, the company owns roughly 73% of white fiber, an AI infrastructure business
that runs high-performance GPU data centers that adds a meaningful exposure to the growth
of AI compute with over 27 million shares.
This is an ETH treasury backed by real operations designed to capture staking yield today
while positioning for the future of intelligent computing tomorrow.
The ticker is BTBT.
This ad is not financial advice.
Do your own research.
Learn more about Bit Digital and try
their MNAV calculator at bit-digital.com.
That's bit-hyphen-digital.com.
Bankless is being compensated by BitDigital for this ad.
You can find out more information by clicking the link in the show notes.
In the fullness of time, when this technology is just completely mature, haven't we
just eliminated the time constraint anyway?
Yes.
So like say five plus years.
Like right now we're really talking about like how can we get this integrated as soon as
possible.
And that's when like one second really matters in terms of block time.
But in the future, one second won't matter at all, right?
Can you talk about that?
Yes.
And in the end game, what I'm envisioning is that we have SNARK CPUs that generate the proof
as they're doing the computation.
So you have a typical CPU that's running, let's say, at 3 gigahertz.
Not only would it be doing the computation at 3 gigahertz, it would be producing a proof
at the same time as it does the computation.
And you can think of a CPU core, for example, a
RISC-V core, as being one square millimeter of silicon on the die.
So it doesn't consume much space.
And nowadays, we're able to build chips easily with, let's say,
100 square millimeters of die area.
So you can imagine the future being that you buy your CPU,
it's a pretty big chip, 100 square millimeters.
One percent of it is used to do the raw computation,
and 99% of it is used to do the proving in real time.
But here we don't mean in real time with Ethereum time, which is one slot.
We're meaning in terms of like...
In parallel.
CPU time, which is like nanoseconds.
Right.
Yeah.
Interesting.
Okay.
The one piece of your home setup, I just want to understand.
So you're going to be at first, you're running kind of a ZK EVM type setup, as we discussed.
Running Provers at home, okay?
So that's your Christmas present.
You get these GPUs, you know, Santa's been good to you.
You've been a good boy, I guess, whatever this is.
But it does require some power, some energy to run at-home provers.
And as I understand it, some of the teams are working to make that more efficient.
So can you talk about, if you wanted to go to the length of running your own
prover at home as well, what is the energy required?
This is basically the electricity your home requires today.
And then what does it need to be moving forward to make sure that we have some level of decentralization to this prover network?
Yeah, absolutely.
So 10 kilowatts I mentioned, it's about 10 toasters.
It's also an electric car charger.
It's also like a very powerful electric oven or a powerful water heater for your shower.
So this is something that has been installed and you don't need to kind of buy a new house, I guess,
in order to draw 10 kilowatts.
Now, the GPUs that we're talking about,
these gaming GPUs,
draw hundreds of watts each.
The maximum rated power draw is something like 500 watts,
which is half a kilowatt.
And so what I have in mind in terms of the size of the cluster
is 16 GPUs.
So 16 times 500 watts, that's 8 kilowatts.
And then you need to have a buffer for
the host machines and the cooling, because you're going to need fans or whatever to circulate air
and that's also going to consume electricity.
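The power budget adds up like this; the 2 kW overhead for host machines and cooling is an assumed round number, not a figure from the conversation:

```python
NUM_GPUS = 16
WATTS_PER_GPU = 500        # max rated draw of one gaming GPU
OVERHEAD_WATTS = 2_000     # assumed buffer for host machines and cooling

total_watts = NUM_GPUS * WATTS_PER_GPU + OVERHEAD_WATTS
print(total_watts)  # 10000, i.e. the 10 kW home-prover figure
```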
So what I'm intending to do for Christmas is buy a cluster of 16 GPUs, connect them to my home
and my home internet connection, and basically be producing a proof for every single
Ethereum block in real time.
Now, if you had asked me two months ago, when would I be able to
do this demo, I would have told you, you know, maybe six months in the future. But the pace of
snarks is just so incredible that today 16 GPUs is enough. So we've already achieved the
requirement that we sell ourselves of 10 kilowatts. And we have multiple teams that have achieved
that. You mentioned Pico. And just yesterday, another team, the Zisk team, basically achieved that.
I mean, technically they used 24 GPUs, but, you know, it's getting very close
to the 16.
And we have various other teams,
you know, for example,
the AirBender team,
I expect the succinct team
to also get to 16 GPUs.
And so come November 22nd for DevConnect,
we will see, you know,
how many teams have achieved this 16 GPU milestone.
And I'm expecting it to be at least two,
hopefully three and maybe, you know,
maybe four of them.
And if you want to participate in this demo
in real time, you can sign up to EthProofs Day.
So that's ethproofs.day.
Unfortunately, the venue that we have is limited to a few hundred people.
Our waiting list is close to 300 people at this point.
But do sign up nonetheless because we will be releasing more tickets.
Is that 10 kilowatts, basically, going to go down? So once you get your Christmas present, right, you're going to be running these provers, your electricity utility bills are going to spike, right? It's like running a Tesla charger 24/7, basically. So you're going to be paying a little bit extra. Is that just going to be the cost of running a prover? Or can we get it down from 10 toasters to like one toaster?
Yeah, that's a great question. So there's two aspects here. The first one is that as the ZKVMs
improve and you need fewer and fewer GPUs, that's going to be an opportunity to increase the gas limit.
And so really what we want to be doing is keep increasing the gas limits so that we're always at the 10 kilowatts.
So that's staying at 10 kilowatts.
And this is how we get to one giga gas per second.
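To see how a fixed 10 kW budget translates into gas-limit growth, here is a rough sketch. The starting throughput (an assumed 48M gas limit over 12-second slots) and a steady 3x-per-year efficiency gain are illustrative assumptions, not commitments:

```python
import math

# Years of 3x-per-year prover-efficiency gains needed to go from roughly
# today's L1 throughput to 1 gigagas per second at a constant power budget.
# The 48M gas limit and 12 s slot time are assumed present-day figures.
current_gas_per_s = 48_000_000 / 12      # ~4 Mgas/s
target_gas_per_s = 1_000_000_000         # 1 gigagas per second
growth_per_year = 3                      # assumed efficiency gain per year

years = math.ceil(math.log(target_gas_per_s / current_gas_per_s, growth_per_year))
print(years)
```

Under those assumptions it takes about six years of 3x growth, which is roughly consistent with the "by 2030" horizon discussed later in the episode.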
The other thing that I want to mention is that, you know, this crazy altruistic phase is not really representative of what will happen eventually, which is that we don't really need the fallback provers to be proving every single block all the time. We only need them to activate whenever there's a problem.
So if all of the cloud providers suddenly go offline,
well, now the block builders can use me as a prover,
but I'll only turn on in the one in 10,000 blocks where that's necessary.
Most of the time, I'll be consuming zero electricity
other than sufficient electricity to be connected to the internet.
And so you can think of it as like some sort of
like reserve army that's only activated when necessary.
I like that analogy.
I like that a lot.
Before you fire up your own Provers,
who's going to be doing the Proving?
By the way, in this whole setup,
are Provers incentivized?
Do they get a share of block rewards as a portion of this?
Or like, what's in it for them?
Yeah.
So ultimately, the Provers are incentivized by fees and MEV,
and they're going to be paid by the block builders.
Now, one thing that I think is worth coming in with eyes wide open about is that the fees are ultimately
going to come from the users. And so the users are actually going to have to pay more for their
transactions. And specifically, you're going to have to pay a transaction fee, which is going to cover
the cost of proving. But the good news is that the cost of proving for a typical transaction is
a hundredth of a cent. So for most applications, you know, you won't even realize
that there is an extra proving fee
which is being added on.
What I expect will happen is that, you know,
the MEV will be much larger
and also the data availability fee
will be larger than that.
And of course, as the ZKVMs improve,
we're going to go from a hundredth of a cent
per transaction to a thousandth of a cent.
And so, yeah, that's ultimately how the incentives work out.
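As a quick illustration of those numbers (the TPS and per-transaction fee figures are taken from the conversation and are ballpark approximations):

```python
# Rough proving-fee economics at Beast Mode scale. Assumed figures from the
# discussion: ~10,000 TPS and a proving fee of a hundredth of a cent per
# transaction.
tps = 10_000
proving_fee_usd = 0.0001                     # one hundredth of a cent

revenue_per_second = tps * proving_fee_usd   # ~$1/s flowing to provers
revenue_per_day = revenue_per_second * 86_400

print(revenue_per_second, revenue_per_day)
```

So even at full Beast Mode throughput, total proving fees are on the order of a dollar per second network-wide, which is why an individual user never notices the surcharge.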
Just transaction fees and MEV, no consensus rewards, no issuance rewards.
So one thing that we can do as mechanism designers is, instead of leaning on rewards, which I think are totally unnecessary because we have the fees and we have the MEV,
We can lean on penalties.
So we can have a setup where if for whatever reason you don't generate the proofs and you're
acting maliciously, then you get penalized.
And the number that I have in mind is one ETH. So you miss an Ethereum slot, boom, you get penalized one ETH, because that should never happen,
especially in a context where we have this upgrade called APS,
attester-proposer separation,
where we remove the proposer from the equation,
and we only have sophisticated entities,
the builders and the provers,
and those should basically have extremely high uptime.
And, you know, there is a negative externality to Ethereum missing blocks, right? It means that in some sense Ethereum skips a heartbeat, and that's not good. And so putting a price on missing a slot, I think, is healthy.
And it's something that we can do once we have this APS upgrade.
Because today we make this assumption that a bunch of validators are running on home internet connections.
And every once in a while, my ISP just messes up and I don't have internet.
And we don't want to be having this one-ETH penalty just because you're offline and you got unlucky.
But for builders and provers, yeah, we can slap them with a one-ETH penalty when there's a problem.
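The asymmetry described here, penalties for sophisticated actors but leniency for home validators, can be sketched as a toy rule. This is an illustration of the idea only, not the actual protocol specification:

```python
# Toy version of the missed-slot penalty described: under APS, builders and
# provers would be penalized 1 ETH for a missed slot, while home validators
# are not, since a flaky ISP shouldn't cost them an ETH.
PENALTY_ETH = 1

def missed_slot_penalty(is_builder_or_prover: bool) -> int:
    """ETH penalty applied when a slot is missed."""
    return PENALTY_ETH if is_builder_or_prover else 0

print(missed_slot_penalty(True), missed_slot_penalty(False))
```

The design choice is that penalties replace rewards entirely: fees and MEV already pay the sophisticated actors, so the mechanism only needs a stick, not a carrot.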
Wow, okay.
So that is Beast Mode for scaling Ethereum the layer one.
I've got to confess, Justin, I hadn't understood it until this conversation that we've just had.
Now I understand it a lot more.
So many other things we didn't touch.
And we're already at almost two hours.
So we can't cover everything here.
But very quickly: when you introduced Beast Mode, you also said it's not just in five years, right? This could happen in five years, for instance, and it will scale up along the way. It's not just a Big Bang after five years where suddenly we're at 10,000 transactions per second. But this plan potentially gets us to 1 gigagas, 10,000 transactions per second, on Ethereum Layer 1 by 2030.
Okay.
We're also simultaneously scaling data availability through kind of the danksharding setup we have right now, so that L2s can get to teragas per second. So does that have anything to do with ZK EVMs and everything we've been talking about? Is that just happening in parallel? And, you know, every chance we get, we're just expanding the fast lane and the data availability and blob space, effectively, for L2s.
It's happening in parallel.
But I do want to highlight one thing that bridges these two worlds, which is called native roll-ups.
So a native roll-up is one that has the same execution as L1, but as an L2. And one of the massive advantages of native roll-ups is that you don't have to worry about bugs in your SNARK prover or your fault-proof game. You don't have to worry about maintaining equivalence with the L1, which itself is an evolving thing with every hard fork and every EIP. And maybe the most important thing is that you don't have to have a security council to deal with these bugs and with this maintenance.
And the amazing thing about the technology which is ZKVMs is that for the L2s, we can effectively remove the gas limit entirely.
The bottleneck is only data availability.
And the reason is that the L2s, they can be generating proofs for their transactions off-chain
using very powerful
provers, they don't have
the 10 kilowatt requirements.
They can have much bigger
provers if they want to.
And then the only thing
that they bring on chain
is this tiny proof
that the validators can verify.
And that is the next generation
of roll-ups.
And we're very, very fortunate to have Luca Donno from L2Beat, who is a big believer in this idea and is championing the EIP process and all of the technical legwork in order to deploy this on mainnet.
When does that happen in this whole timeline where we talked about, you know, the zero-through-three phases of getting ZK EVMs out there?
So one reality of Ethereum is that we have this governance process called ACD,
and we have a very limited number of opportunities to do hard forks.
Like historically, we've had one hard fork per year.
We're trying to double this to one hard fork every six months.
But even within a hard fork, there's only so much that you can do. And it turns out that many different developers want many different things, and so there's 10, 20, maybe 30 different competing proposals, and in any given hard fork you can only clear the queue, like, three, four, maybe five at a time. And so that leads to all sorts of externalities, one of them being that it's just very hard to predict what will go through the AllCoreDevs process.
And another externality is that it just leads to a lot of frustration.
Like you can think of ACD as kind of being this meat grinder or this soul grinder
where you have starry-eyed, enthusiastic developers that come in
and they kind of come out frustrated and jaded because, you know,
their EIP hasn't been selected for a very long time.
I mean, one example here could be, you know, FOCIL, right? Like, FOCIL is something where, you know, we have the EIP, all the research has been done, a lot of the work has been done. And yet it's still not being included. It's been discussed for inclusion in Fusaka, it's been discussed for inclusion in Glamsterdam, and now it's being pushed and pushed and pushed.
And so it's difficult to be able to predict some of these things.
And this is actually part of the reason why I'm so excited.
about Lean Consensus, because lean consensus is a governance batching optimization where, you know, for an
extended period of time, we're just doing pure R&D. And so we can have, you know, this really
exciting, fast-paced R&D. And then what we propose to ACD is something that is, you know,
significantly better than what we currently have, let's say, 10 times better. And, you know, it will
take a long time for it to go through ACD, but when it goes through, we will be batching together
let's say 100 EIPs that would previously be unbundled. And so instead of having the long-term future, the end game of Ethereum, if you will, being spread out over decades of small incremental upgrades, we have an opportunity to batch together bigger upgrades on a time frame of four years or so.
Mostly we've been talking about kind of the lean execution layer, because that was the big part I didn't understand, and that's the big part going into Beast Mode and scaling layer one. We talked a little bit about lean consensus, I think, and kind of the Fort mode, from the perspective that all of this can be run from like a smartphone or a smartwatch.
But are there any other pieces in lean consensus?
Because this is another layer of the Ethereum stack
that you want to make sure people understand today.
Because when you say Lean Ethereum,
you're talking about Lean execution layer
and scaling that, Beast Mode,
and you're also talking about Lean Consensus.
The lean consensus piece, I think, is maybe less sexy, but maybe in some ways it's more important. You just alluded to one of the ways, and most of us users don't see why it's important. What else is in the Lean Consensus piece that we have not covered, and why is it important?
So Lean Ethereum is actually three different initiatives.
Within L-1, you have three layers, the consensus layer, the data layer, and the execution layer.
It's easy to remember because it's C-D-E.
Now, we haven't even touched on the data layer other than saying that it needs to be post-quantum secure. But, yeah, there is indeed a lot happening in the consensus layer. The headliners are, one, replacing BLS aggregate signatures with a post-quantum equivalent.
Two, having much faster finality.
So instead of it taking two epochs, which is 64 slots, it might take only two slots or three slots.
Another big improvement is significantly reducing the slot duration.
And then the final improvement is that, just like with ZKVMs, we can snarkify the entirety of the consensus layer so that, you know, the really weak devices, the browser tabs, the phones, can fully verify not just the execution part of Ethereum, but also the consensus layer.
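The finality speedup alone is dramatic. Using today's 12-second slots just for comparison (lean consensus would likely shorten the slot duration itself, as noted above):

```python
# Finality time today (~2 epochs = 64 slots) versus a 3-slot target,
# holding the slot duration at today's 12 seconds for comparison only.
SLOT_SECONDS = 12
today_finality_s = 64 * SLOT_SECONDS    # 768 s, about 12.8 minutes
target_finality_s = 3 * SLOT_SECONDS    # 36 s with 3-slot finality

print(today_finality_s, target_finality_s)
```

That takes finality from roughly thirteen minutes down to well under a minute, even before the slot itself gets shorter.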
And so when we're building bridges, for example, between L1s, that's the same kind of technology that would be used as well.
And then what you alluded to is this opportunity to do things differently in terms of governance.
So we've been doing this small incremental upgrades.
We've been accumulating 10 years of technical debt.
It's an opportunity to refresh.
And part of the reason why I'm excited about Ethereum is not because we've had 10 years of uptime,
but because we're going to have another 100 years of uptime.
And in the next 100 years, we're going to grow our total value secured to hundreds of
trillions relative to what we have today, which is just one trillion dollars of total value secured.
And I think the AllCoreDevs process as it is structured today is a little bit the tail wagging the dog, right?
The 10-year history, that's the tail. You know, we've accumulated a lot of technical debt.
And the dog is the next hundred years.
And I think what lean consensus is all about is just rebalancing it a little bit, so that the next hundred years, where, you know, all of finance will be built on top of Ethereum, has a chance to materialize. And that's going to require some big changes at L1. And so in some sense, Lean Ethereum is an invitation to be bold, to be ambitious, and to think about the next hundred years more so than the last 10 years.
Justin, as we wrap this up, maybe this is a good opportunity to ask another question.
And as I think about the context for this whole discussion where I see Ethereum going,
it's really about upgrading the Ethereum network to Snarks.
So, you know, Ethereum, like Bitcoin, is originally based on cryptography 1.0, blockchain cryptography 1.0. SNARKs are cryptography 2.0.
And so now we're applying snarks and making, I think you've used the terms,
which I didn't fully understand at the time, snarkifying the entire stack.
That's what this is.
That's what Lean Ethereum actually is.
It's upgrading the entire stack to cryptography 2.0, the SNARKs generation of cryptography.
And some networks might follow in those footsteps. Others might not.
Tough to say what Bitcoin will do, but probably they'll ossify and stick with
cryptography 1.0 for a long time. I guess the context of this, though, is,
will we be able to do this fast enough? You were talking earlier about the ACD meat grinder
and how Ethereum is so large, so many moving pieces,
it can feel hard or even frustrating for developers
because they're like, why can't this happen faster?
And so are we able to scale fast enough
to beat centralized competitors,
particularly competitors with some deep engineering teams?
And I think part of maybe what this question is reacting to is we had one of the original scale-Ethereum EIP proposal authors, Dankrad, recently departing the EF for, you could call it, a competitor to Ethereum, maybe that's simplifying things. They're certainly going to contribute back to the Ethereum ecosystem as well, in the form of the EVM and other things. But this is a new company that recently raised $500 million at a $5 billion valuation. So they have deep pockets. It's called Tempo. They are working with Stripe and are invested in by Stripe.
So they've clearly got access to TradFi and stablecoins and all of these things. And it seems to be the case that they're going to be implementing some of this roadmap using Reth.
I mean, it's a paradigm team, right?
They're going to be speed running some of this roadmap.
And maybe that helps Ethereum in some ways, but also maybe in some ways it competes against
Ethereum.
And from a talent perspective, certainly Dankrad has done so much for Ethereum, obviously. But is there a brain drain happening with some of these more centralized corporate chains?
And are you worried about that?
You're talking in terms of hundreds of years, but will we have the talent to sustain it? Are we going fast enough to beat some of these
competitors and implement this vision? Yes, I think that just zooming out, there has been a
brain drain. It's real, it's significant, but it's actually not in the direction that you expect.
There has been a massive brain drain toward Ethereum, and yes, we have lost one Dankrad, but I think we've gained ten Dankrads.
So since I gave my Beam Chain talk less than a year ago, there have been dozens of people that have come on board the Ethereum Foundation or have been working externally through all sorts of lean consensus teams.
And the amount of talent that has come in in the last few months
is absolutely mind-boggling.
If you look at what Dankrad was doing, he was doing hardcore applied cryptography in the context of sharding. And, you know, there are several applied cryptographers that I'm working with on a daily and weekly basis now, including Toma and Emil, you know, Giacomo and Angus, and all of these people are of extremely high caliber, like, at least as good as Dankrad.
They don't have the reputation
because they haven't been at it
for seven years.
But in terms of raw talent,
I think we have it.
And these are people, again,
that were not on my radar
even a few months ago.
And then on the coordination side of things,
we've brought on,
you know,
Will, who just keeps on impressing me
every single day.
We have Ladislaus,
we have Sophia.
And there's also people
who are not doing
either the hardcore cryptography or the coordination.
So there's, for example, Felipe doing the specs.
There's Raul helping with the peer-to-peer networking. There's Kev doing ZKVMs, and Fara working on EthProofs.
And when you zoom out, a lot of these people, you know, came to Ethereum.
So for example, Will and Fara came from Bitcoin.
Camille, who's one of the coordinators of one of the consensus teams, and Sophia, they came from Polkadot. We have Kev, who came from Aztec. We have Raul, who came from Filecoin, Toma, who came from Kakarot and the Starknet ecosystem, and Angus, who came from Polygon.
You get the idea like there's much more incoming brain drain than there is outgoing.
Now, in terms of the reason for this brain drain, I think it has to do with things that competitors like Tempo just don't have.
Like, Vitalik has this famous quote that, you know, a billion dollars is not going to buy you a soul as a blockchain.
And, you know, we have community.
We have vision, ideology.
We also have this amazing technology.
And you mentioned that, you know, you think Tempo might leapfrog and use ZKVMs.
I'm not holding my breath on this.
You know, my base assumption is that they're going to have a very small number of validators running in data centers.
And actually, you know, I asked Dankrad, like, how many validators do you think Tempo will have at launch? And I'm hoping I remember this properly, but I think his answer was four, like four validators.
And, you know, community is very different as well. Like, one thing that was very stark to me was, you know, when Dankrad left, there was a massive outpouring of gratefulness for all of the work that Dankrad had done and his massive contribution to danksharding.
And then you look at the Stripe side of things, and, you know, it's like really sad that Patrick, the founder of Stripe, kind of made this tweet to his half a million followers saying, hey, welcome Dankrad. And his tweet got like three retweets. Like, there's no community in Tempo. There's very little soul.
And, you know, I'm sure Dankrad has, you know, all sorts of reasons for leaving the Ethereum ecosystem.
But the fact of the matter is that there's a massive brain drain towards Ethereum.
And I guess another thing worth mentioning is that I'd say there's a reasonably high chance, call it like a double-digit percentage, that Tempo is actually
in some way part of the Ethereum ecosystem, even if today they're not ready to acknowledge it
explicitly. In my opinion, the incentives will be such that all of the L-1s will want to become
L2 so that they can tap into the network effects that Ethereum has to offer. Just yesterday,
actually or before yesterday, Ethereum crossed $100 billion of Teather on L1.
And if you want to do payments, you need to have access to stable coins.
And there's a lot of network effects around stable coins on Ethereum.
So it wouldn't surprise me if in a couple of years time,
Tempo announces that they're pivoting to becoming an L2, and Dankrad comes back to the Ethereum ecosystem.
Do you have a take on why there haven't been more L2s among some of these corporate chains? Why are they going with L1s instead of L2s? It's not just Tempo. If it was just Tempo, maybe you would say that, but Circle is going that direction, also Plasma, kind of from the Tether camp.
There's been a lot of new L1s,
and the Ethereum take has always been what you said, which is like, why be an L1 when you can be an L2,
it's cheaper, you know, better network effects.
Why hasn't that borne out yet?
Yeah, I mean, we have seen this L1 premium. And I think, you know, part of the reason is that there's this new design space which has been unexplored. And so people are maybe valuing the unknown, like, very large potential. I don't know. This is just speculation.
But I think Tempo, you know, as you mentioned, they've raised $500 million at a $5 billion valuation. I think they've done an excellent job at farming the L1 premium. And now that they've secured, you know, their $500 million, I think they could safely pivot to doing the correct incentive-aligned thing, which is to tap into the maximum amount of network effects. I certainly do recommend that they keep part of that treasury, let's say at least 1%, to make an emergency pivot to an L2 if they don't become successful as an L1.
Justin Drake, this has been fantastic. Lean Ethereum. The next steps for this are what? Devconnect, and you're going to give a presentation?
Devconnect. At EthProofs Day.
Nice.
Talk about the next steps and what people can do to kind of stay abreast and get involved.
Yeah.
So I'm hoping that Devconnect is an eye-opening moment where, as a community, we can all agree that we want to go down this ZKVM path. There's a few, I guess, stragglers who are not yet fully convinced. But I think what's happening now is that we're disagreeing on timelines as opposed to fundamentals. So I think the most skeptical people will tell you that ZKVMs are something for, you know, 2029 or 2030.
But I think what's happening is that over time, more and more people are getting bullish
on the timelines.
And, you know, one, I guess, fun story here is that Kev, who leads the ZKEVM team, historically, at least, you know, a year ago, was, I guess, a skeptic about ZKVMs. You know, there were a lot of open questions in his mind. And it's been really beautiful to see his thinking evolve, you know, week by week, as he's been able to tick off every single unknown and risk that he had in his mind.
And I think, you know, Kev is still like not fully convinced on the exact timelines,
but if the technology keeps on progressing the way it has been progressing in the last 12 months,
then I think the timelines can only shrink from here onwards.
Now, one thing that I want to stress is that there will be a tipping point
where the ZKVM technology has reached parity with L1 throughput and quality.
And from that point onwards, what I expect will happen
is that the ZKVMs no longer become the bottleneck,
meaning that the ZKVM technology will improve faster than the 3x per year,
which is, I think, the fastest that we can hope to upgrade the L1.
And so the burden will go back to the traditional non-moon math engineers
to optimize databases and networking and things other than cryptography.
We will end it there.
Justin Drake, thank you so much for joining us.
Absolutely.
Thanks for having me.
Bankless nation, got to let you know. Of course, crypto is risky. You could lose what you put in,
but we are headed west. This is the frontier. It's not for everyone. But we're glad you're
with us on the bankless journey. Thanks a lot.
