Bankless - Ethereum’s Next Big Upgrade: Pectra, Fusaka & Beyond | Tim Beiko
Episode Date: March 24, 2025
We’re joined by Tim Beiko, Ethereum Foundation coordinator, to break down Ethereum’s next major upgrades and the future of scaling. We explore the upcoming Pectra hard fork, which introduces validator consolidation, increased blob space for rollups, and EIP-7702 for improved account abstraction. Tim also shares insights on Fusaka and Glamsterdam, Ethereum’s roadmap for scaling, and how Ethereum’s upgrade process is becoming more efficient.
------
📣 SPOTIFY PREMIUM RSS FEED | USE CODE: SPOTIFY24
https://bankless.cc/spotify-premium
------
BANKLESS SPONSOR TOOLS:
🪙 FRAX | SELF SUFFICIENT DeFi
https://bankless.cc/Frax
🦄 UNISWAP | SWAP ON UNICHAIN
https://bankless.cc/unichain
⚖️ ARBITRUM | SCALING ETHEREUM
https://bankless.cc/Arbitrum
🛞 MANTLE | MODULAR LAYER 2 NETWORK
https://bankless.cc/Mantle
🌐 CELO | BUILD TOGETHER AND PROSPER
https://bankless.cc/Celo
🏦 INFINEX | THE CRYPTO-EVERYTHING APP
https://bankless.cc/Infinex
------
✨ Mint the episode on Zora ✨
https://zora.co/coin/base:0x0d142a1a30adcbc06c3c2a537d2cd1a9b7901c35?referrer=0x077Fe9e96Aa9b20Bd36F1C6290f54F8717C5674E
------
TIMESTAMPS
0:00 Intro
5:43 Current State of EIPs & ETH Hard Forks
8:01 Why Security Is So Important
12:43 Naming Conventions of ETH Hard Forks
13:55 Pectra Upgrade & Its Effects
16:15 MaxEB Upgrade & Its Effects
24:29 Hard Forks, Bandwidth & Scaling
28:51 Other Updates Regarding Pectra
31:40 EOAs & Smart Contract Wallets
36:17 Smart Contract Wallets' Effect on Layer 2
43:15 Timeline of Fusaka & Glamsterdam
48:09 Vitalik's Tweet on Fusaka
51:15 All Core Devs' Plans
54:25 What's EOF
59:18 Other Updates Regarding Fusaka
1:00:41 Uncertainty Regarding Glamsterdam
1:02:18 ETH's Slowness & Developers
1:05:19 Misconceptions Around ETH
1:07:32 Scaling the Layer 1
1:14:52 Summer of Protocols
1:18:13 Closing & Disclaimers
------
RESOURCES
Tim Beiko
https://x.com/TimBeiko
Ethereum Roadmap & Hard Forks
https://ethereum.org/en/roadmap/
EIP-7702 (Account Abstraction)
https://eips.ethereum.org/EIPS/eip-7702
Ethereum Object Format (EOF)
https://ethresear.ch/t/ethereum-object-format-eof/
------
Not financial or tax advice. See our investment disclosures here: https://www.bankless.com/disclosures
Transcript
For 2025, we need Fusaka on the layer one with PeerDAS, ideally with a 48 blob target.
And then he follows up saying, let's aim to get a Fusaka testnet with these blob parameters running the day after Pectra goes live.
There's all these proposals saying like we should do a hard fork every six months or every quarter or whatever.
This sounds good in theory, but imagine we're doing this, say, for the merge.
And it's like, okay, we missed the six months deadline. Do we just like ship a bunch of random less useful stuff?
Obviously, it's still the highest priority to ship the merge. And if it's like, we miss the merge,
if it's going to take an extra six months because there's something fundamentally hard about the merge, we should still be willing to pay that cost.
Welcome to Bankless, where we explore the frontier of internet money and internet finance.
And today, we're exploring the next steps in the Ethereum roadmap.
Ethereum is alive.
It is an organism that is constantly evolving and adapting through a slow, iterative process of debate, deliberation, and rough consensus.
And the Coliseum for where this all goes down is the Ethereum all core devs calls.
This is where priorities are hashed out and EIPs are either marked for inclusion or cut and left on the floor.
With me today on Bankless is Tim Beiko, who is a coordinator of these all-core dev calls.
He is right in the middle of everything, traffic controlling conversations and priorities from the various components that make up Ethereum while also ensuring that things do keep on progressing.
With the Pectra hard fork on the horizon, I thought it was time to get Tim on the show to give the Bankless Nation an update as to what is included with Pectra, as well as what's coming over the horizon with the next Ethereum upgrades, Fusaka and then eventually Glamsterdam. And if these names feel both odd and familiar to you,
they are. And we'll explain how these names of Ethereum software upgrades come to be in the first place.
In this episode with Tim, we talk about Ethereum's increasing aggressiveness with scaling data availability and why scaling the Ethereum layer one is harder; how Ethereum's staking mechanics are getting an overhaul; and WTF is Ethereum Object Format, and what does it do for improving the developer experience on Ethereum? If you appreciate this podcast, please subscribe. If you already subscribe, perhaps also leave a rating or review. Those five-star reviews
do help us get to the top of the charts so we can help the world go bankless. And if you are
already a bankless citizen who is listening to this on your ad-free premium RSS feed,
make sure that you get the private RSS feed loaded up into your Spotify. The album artwork for
every episode is just so incredibly pretty. Shout out to our designer, as well as all the early
podcast releases and perks that come with your premium feed. Let's go ahead and get right into
this episode with Tim. But first, a message from some of these fantastic sponsors that make this show
possible. You may have already heard about Infinex. Infinex has, in my opinion, the nicest cross-chain
swap and bridge feature that you will find anywhere. It is called Swidge, Swap and Bridge, and we're
going to show you what it looks like. First, we're going to log into my Infinex account with a
pass key. Now, there's no seed phrases in Infinex. This is just a one-click set up with
biometric pass keys. But in addition to that, my Infinex account is fully non-custodial. So bam,
I just logged in. It was two clicks, and I'm already into my Infinex account. So let's go make a swidge. I'm going to go swidge my USDC that is on Base, and I'm going to buy BERA, which is on a completely different chain.
So we're going to swidge this.
I'm going to press that button and then Infinex is going to execute this order, this cross-chain
order for me.
And now it is done.
But actually, I'm not really feeling bearish anymore.
So I'm going to go from Bera to Penguins.
I'm going to buy Pengu on Solana.
So I'm going from Berachain to Solana.
See, no transaction signing, no gas to worry about.
You just swidge across whatever chain that you want with Infinex.
That was so easy.
Go check out Infinex and try your first swidge today.
The Arbitrum Portal is your one-stop hub to entering the Ethereum ecosystem.
With over 800 apps, Arbitrum offers something for everyone.
Dive into the epicenter of DeFi,
where advanced trading, lending, and staking platforms
are redefining how we interact with money.
Explore Arbitrum's rapidly growing gaming hub
from immersive role-playing games and fast-paced fantasy MMOs to casual luck-battle mobile games.
Move assets effortlessly between chains and access the ecosystem with ease via Arbitrum's expansive network of bridges and on-ramps.
Step into Arbitrum's flourishing NFT and creator space, where artists, collectors, and social apps converge, and support your favorite streamers all on-chain.
Find new and trending apps
and learn how to earn rewards across the Arbitrum ecosystem
with limited time campaigns from your favorite projects.
Empower your future with Arbitrum.
Visit portal.arbitrum.io to find out what's next on your Web3 journey.
Imagine a world where your day-to-day banking runs on a blockchain.
That's exactly what Mantle is building, powered by a $4 billion treasury and poised to become the largest sustainable on-chain financial hub.
As part of their 2025 expansion, Mantle is introducing three new core innovation pillars
that bridge traditional finance with decentralized technology.
First is their enhanced index fund, aiming for $1 billion in AUM by Q1.
It provides optimized exposure to Bitcoin, ETH, Solana, and USDe, complete with built-in yield opportunities.
Next, Mantle banking promises to revolutionize global value transfer through seamless blockchain-powered
banking services, bridging crypto into your daily life. Finally, Mantle X blends AI with DeFi to deliver
an intelligent, user-friendly experience for everyone. And the best part is that this is all in
addition to their already launched products like Mantle Network, mETH, and FBTC. Ready to step into the future of finance? Follow Mantle on X at @Mantle_Official and join the on-chain revolution
today. Bankless Nation, very excited to introduce Tim Beiko. He is a coordinator at the Ethereum
Foundation. He's a central information hub for all the various components of the Ethereum Protocol.
and helps kind of guide the hard fork upgrade process for Ethereum, as well as the all-core
devs call.
Tim, welcome to Bankless.
Thanks for having me.
So I really want to kind of just get a download on the kind of the current state of thinking
as it relates to EIPs that are going to make it into Ethereum.
And, of course, any informed listener would know that these go into Ethereum in chunks.
We call these chunks hard forks.
The next hard fork that we are currently preparing for is called Pectra.
But then there are other hard forks that happen after that as well, such as Fusaka.
and Glamsterdam. We'll actually talk about how these names come to be. But maybe Tim,
maybe you can kind of just zoom out a little bit and kind of give us a lay of the land of
what things seem to be being prioritized, generally speaking. So when we look into the updates
that the all-core dev meetings are discussing, what kinds of updates are rising to the surface?
Like, on the ACD calls of Ethereum, the all core devs calls, what are they prioritizing?
Thanks. Yeah. I'd say the first thing, which is kind of obvious to some people,
but probably doesn't get emphasized enough is security.
So first and foremost, the thing that kind of differentiates Ethereum from other chains
is just this extremely robust approach to security.
And in practice, this shows up as like the chain has never gone down.
And there's an immense amount of work that goes into this.
So if you're listening to like some random segment of an All Core Devs call,
you're probably hearing people just like debate, is this thing safe?
How is it going to break?
And if it broke, why?
And how can we fix it?
So while this isn't like a feature as much as some of the other stuff we'll talk about,
it's worth sort of being mindful of this because it is kind of the thing that then dictates all of the other priorities.
Like, why can't we do this specific thing that seems obvious?
The answer is usually because there's like some security property we're trying to work through
to make sure that it's actually robust before we ship it on Ethereum.
But with that being said, I'd say like the most like prominent theme of the past year or so
has been around scaling, both on the execution and consensus layer.
So PeerDAS is the main thing people are talking about now.
But even before that, we've been working these past few months on the Pectra upgrade,
which will also scale the consensus layer blob data.
So we can get into any of those, but like high-level theme security,
and then more recently in the past year or so, scaling has been coming up more and more.
You know, I expected the answer, like if I had to guess, was going to be scaling.
But I want to unpack that a little bit more.
The idea that security is like this kind of common denominator for all future, like,
properties of Ethereum. Like, there are some properties that Ethereum does not have that we want it to
have, and so we can, like, prioritize those. But I really like the illustration of, like, well,
yes, but first comes security. Color in that just a little bit more about, like, why security
is so important when it comes to the all-core devs calls and why everything, all updates to
Ethereum have to pass through that security filter first. So, sure, it's like, if you think
Ethereum should be this robust, durable platform, which secures billions and potentially
trillions of assets, you want this thing to be safe. And the way that, say, you know, the non-crypto
world handles this is through centralized third parties, right? Like your bank has some security controls,
but if those fail, they'll send like a lawyer to fix it up, right? And you often see these cases,
like some bank accidentally sends someone like $100 million, and then they just revert the
transaction. And so, you know, they obviously have checks in place to ensure that they're not
sending $100 million to random people all the time. But if and when it happens, there's like
this legal system and other ways that they can kind of go off-chain to resolve this.
On Ethereum, we kind of don't have this luxury, right?
Like, we have the chain and when a smart contract is deployed on it or when someone has
some eth on it, there is no other mechanism by which we could go and change things.
And this has become more true over time, right?
If you look at, you know, blockchains' early histories, you know, Bitcoin obviously had the
inflation bug and they had to fork for this.
Ethereum had the DAO.
And I think what you learn through these experiences is these events are extremely contentious for the community, and the bar for them to happen just keeps getting higher and higher and higher.
So we need things to be secure because we should assume that we're not going to be able to intervene.
And I was looking up the DAO recently, and people like to kind of talk about it and it's like this very intense part of Ethereum's history.
And I think it's kind of less appreciated today just how big of a deal it was.
So to put it in perspective, when the DAO hack happened, the DAO had as much ETH as all of the L2s today and the Wrapped ETH contract combined.
And that's kind of the scale that it took, you know,
to have some sort of social intervention on Ethereum.
So if you told me, you know, tomorrow I wake up and like all of the L2s and the Wrapped ETH contract are bricked, then, yeah, potentially we would be considering, you know, some way to solve this.
But for anything like below that threshold,
we need Ethereum to actually just work.
And that is why, like, it has to be secure.
So if we didn't care at all about security,
we could get scalability.
And there are a lot of other chains who just forked Ethereum's code, 10x'd the gas limit, lowered the slot times.
And those chains work, but you can't sync to them.
You have to have manual operations where, like, you know, people come in and they're like,
literally restart the chain from time to time.
So if you want to avoid all of this, then you need to find ways to scale, but not compromise
kind of this ability of the network to keep going.
I think there's a large number of reasons as to like why Ethereum upgrades slowly.
One is like caution for that security reason.
One is like, well, it's a decentralized ecosystem.
There's multiple clients.
it's just harder to upgrade a multi-component system.
But maybe one of the reasons why Ethereum's tends to upgrade slowly
is because, well, we hold security to the highest degree possible.
Yeah, I think that's true.
And I think that's not to say there's no process improvements
or things we couldn't do better.
But I think in general, like over Ethereum's entire history,
the reason why upgrades are, say, between six and 12 months
rather than six and 12 weeks is because there is like a pretty thorough testing cycle.
And I think the other thing is,
if you look at all the stuff we did not do, right?
There's way more EIPs that have been proposed than ones that have actually been deployed on chain.
The common denominator is usually there's something broken from a security perspective about them.
And it can often take years to iterate on a design to get to some version of this idea,
this feature that we feel is actually going to be safe on mainnet.
And in the current hard fork that's coming up, Pectra, there's this EIP 7702 that helps with account abstraction.
This is like a great example where the first drafts of this EIP probably date from something like
2017, 2018, and it's like over this process of many, many years and rounds of iterations,
we've come up with some construction that we feel is safe and also kind of delivers the value
of this feature. Okay, so let's get into the near-term hard forks. But before we do that,
let's be one more level meta about it, because I want to talk about the naming convention.
I think we did this last time I had you on the show, but I think it's worth tracing over.
The next hard fork that is incoming, that is Soon™, is Pectra, followed by Fusaka,
followed by Glamsterdam. These are not real words.
Can you talk about the naming convention for how Ethereum hardforks are named?
Yeah, so after the merge, we now have two parts to Ethereum.
There's the beacon chain and the execution layer.
The beacon chain uses star names for its upgrades, and the execution layer uses Devcon city names.
So, you know, we still kind of refer to those component parts using that way.
So the Pectra hard fork has like the Prague execution layer change and the Electra consensus
layer changes.
But for users and for the community, it's just like one hard fork that happens at the
same time, so we mash them together. So Pectra is just Prague and Electra. Fusaka is Fulu and
Osaka. And Glamsterdam is... I forget what the G star is, but Amsterdam was the city for the first Devconnect. That's where these things come from. Right. So it's a portmanteau between a city on
planet Earth and a star in the galaxy somewhere. Yeah, correct. Yeah. Okay, cool. So Pectra is coming up next
and that is the near-term hard fork. As a vibe, how would you characterize Pectra? Like,
what are the things that Pectra is prioritizing?
How is Ethereum going to be changed by Pectra?
So on the consensus layer, the two biggest things are, one, the increase of validators' maximum balance.
This means that we can consolidate validators into a smaller number of them.
So when we talk about scaling, a big bottleneck for scaling the consensus layer is how much bandwidth it uses.
Ethereum has like a very big validator set.
I believe we have over 1 million validators.
Each of them needs to, like, sign a bunch of messages and then send those messages to all the other validators, which is just a lot of bandwidth usage.
But in practice, you know, even though we have a bunch of 32-ETH-denominated validators,
many of these are run by the same entities, right?
Like Lido, you know, has, say, I think, 30, 40 different node operators.
So like one of those node operators obviously has way more than 32 ETH.
So with the Pectra upgrade, they'll be able to combine this into chunks of up to 2048 ETH.
And the value that you get from this is twofold.
One, large operators can kind of combine their validators together to reduce the bandwidth that they use on the network.
And this means more bandwidth for blobs and other kind of scaling stuff, which we'll talk about after.
And then the other nice part is if you're a smaller validator. Today, if you have, say, like, 50 ETH, you can stake your 32 ETH, but you can't do anything with your other 18 ETH. So with this feature, you'll be able to set up a validator with an arbitrary amount of ETH between 32 and 2048. So for anyone who's on the more solo-staker side of the spectrum, it means they can compound their rewards effectively for any amount up to 2048. And when you get past 2048 ETH, you know, you can split that into two validators. So there's not really a ceiling on compounding there. So this feature is really nice because it allows big
validators to kind of reduce their bandwidth footprint on the network and then give small validators
more compounding. So that's one of the big ones. Before we move on, can we actually just hang out at MaxEB? I think it's one of my favorite upgrades to Ethereum in a while. And so just to really trace over what you said, right now,
before Pectra goes live and before MaxEB is live, Ethereum validators are these discrete 32-ETH
chunks. And you can only stake 32 ether. You can't stake below that. You can't stake above that.
And so if you have 320 ether, you would need to spin up 10 validators. And you could, in theory,
do 10 validators on the same machine, on the same piece of hardware. Yes. And then they would be
spitting out 10 times as many messages in theory as they would really need to, because
it's, you know, 10 times as many messages going into other people's validators coming back
into your machine. And so with MaxEB, we're able to consolidate this, like, 320 ETH down into
one validator that has like 10 times the weight, but still just one machine. Can you just
trace over, I kind of explained it a little bit, but just like how it reduces bandwidth,
and then what that unlocks for Ethereum? Yeah. So the reason it reduces bandwidth is
assuming the validator set stays the same, right? Like say we have a million validators today,
if we know that like 30% of those are grouped in like much larger entities.
Take say Coinbase, for example.
Coinbase right now, I don't know what's their exact share of the stake, but say they had 20%.
It means they have like 200,000 validators and they're constantly sending messages from like
the same servers or, you know, like a small subset of servers from 200,000 entities.
But then with this, they can effectively reduce that by not quite 100x, something like 80x,
to go from like, you know, 200,000 validators to like 200,000 divided by 2048, which is maybe say
a thousand.
So much lower number.
Yeah, exactly.
Like, yeah, about an 80x.
And then because of this, it means every one of those, like, big validators now is only sending
one message, but yes, their message has, like, a higher weight to it.
So instead of getting, like, you know, 10 pings from 10 different validators, you get one ping from one validator whose weight in terms of attestations and whatnot is just 10x'd.
And maybe like a very naive way to think about it is every validator is currently sending a message that says like one vote all the time. And then with MaxEB, you can have the same message, but you say 10 votes instead of one vote. And that means that over the internet, as you're gossiping these messages through all of the peers, you can greatly reduce that number. And for people who maybe don't have an intuition here, it's not like you're sending this message to just one person either, because all of the validators send their messages to many different validators. Right. So.
It's more like you're sending this message like 50 times over or something like that.
And then you go from sending this message from every one of those people 50 times over to like still sending it 50 times over, but you're sending one message in each of those things.
Does that make sense?
Yes, yeah, totally.
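As a back-of-envelope sketch of the consolidation arithmetic above (the 200,000-validator operator is the hypothetical from the conversation, not a real figure; the exact numbers come out a bit different from the rounded ones quoted):

```python
# Back-of-envelope sketch of validator consolidation under MaxEB.
# The 200,000-validator operator is the hypothetical from the episode,
# not a real measurement.

OLD_MAX_BALANCE = 32      # ETH cap per validator before Pectra
NEW_MAX_BALANCE = 2048    # ETH cap per validator after Pectra (MaxEB)

validators_before = 200_000
total_stake = validators_before * OLD_MAX_BALANCE        # 6,400,000 ETH

# Fully consolidated, the same stake fits in far fewer validators,
# each gossiping one higher-weight attestation instead of 64 separate ones.
validators_after = total_stake // NEW_MAX_BALANCE        # 3,125 validators
reduction_factor = validators_before / validators_after  # 64.0x fewer messages

print(validators_after, reduction_factor)
```

Full consolidation works out to a 64x reduction (2048 / 32); in practice the savings will be smaller, since not every operator will consolidate all the way.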
Do we have any intuition or any guesses about the reduced bandwidth footprint that Ethereum will have on the internet post this update?
Like, do we know how much reduced bandwidth Ethereum will consume as a result of this?
We don't have those yet in part because like it's hard to estimate.
Like, you could estimate, like, if everyone did it perfectly, what would be the max saved?
So we'll have to wait and see a bit like, okay, you know, to what extent do Coinbase and, you know, under entities consolidate?
There are some, like, moderate risks to consolidation.
Like, you wouldn't want to consolidate everything on, say, like, a single server, even if you had, like, you know, more than 2048 ETH.
So I think we're going to want to see, like, okay, what do people actually do, what sort of best practices emerge in terms of the institutional use case. But there's clearly some value in consolidation
because today, you know, to say, again,
take Coinbase's example, say they have 200,000 validators,
they're not running 200,000 independent servers, right?
Like maybe they're running, you know, say ballpark,
say they were running like 100.
Then you could imagine, okay,
there's like some amount of risk that they're already taking
where those 100 servers are kind of correlated
and so they can consolidate their validators
without necessarily increasing that risk.
So that's something we'll have to wait and see.
And then the other part of MaxEB,
that is very good is
these smaller stakers
get compounding for free, right?
Like they start to auto-compound their stakes.
And so what this looks like is,
you know, I had 32 ETH when I deposited, and now I have, say, 34.5 ETH. My vote now counts for 34.5 ETH, and when I'm earning rewards, I'm earning rewards on that 34.5 ETH.
And you'll see that kind of instantaneously.
So it's kind of nice
that the feature works for like both ends
of the barbell, in a way.
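To make the compounding point concrete, here is a minimal sketch, assuming an illustrative 3% annual staking yield (real consensus rewards vary):

```python
# Sketch of MaxEB reward compounding vs. the old 32 ETH effective-balance cap.
# The 3% annual yield is an assumption for illustration only.
ANNUAL_YIELD = 0.03

def balance_after(years: int, start: float = 32.0, compounding: bool = True) -> float:
    """Balance earning rewards after `years` of staking."""
    if compounding:
        # Post-Pectra: rewards raise the validator's effective balance,
        # so future rewards are earned on the larger amount.
        return start * (1 + ANNUAL_YIELD) ** years
    # Pre-Pectra: effective balance stays capped at 32 ETH, so rewards
    # above the cap sit idle and earn nothing.
    return start + start * ANNUAL_YIELD * years

print(balance_after(10))                     # ~43.0 ETH with compounding
print(balance_after(10, compounding=False))  # ~41.6 ETH without
```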
And then I think the last part
of this conversation
is what this really unlocks for Ethereum
with future upgrades.
You identified bandwidth
as a big bottleneck for Ethereum.
It's not the only bottleneck,
but it's a big one.
Once there is reduced bandwidth
consumption by the Ethereum
protocol at large,
what doors is that unlocked for Ethereum
in the future when it comes to scaling?
Yeah, so the big reason
we want to use bandwidth
for, in the medium term,
is to scale the amount of data
we can get L2s to use,
which we call blobs.
So in the Pectra Hard Fork,
we are also going to increase
the number of blobs by 50%.
So right now,
every block can have three blobs
in them on average
and up to six
in kind of the worst cases.
In Pectra, it's going to be six on average,
and then up to nine in the worst cases.
So anytime we can kind of shave off some bandwidth usage,
it gives us more room in the protocol,
if you assume that you're kind of keeping bandwidth requirements constant
to, like, deliver more data to users of Ethereum.
And there's another thing we're doing in Pectra as well
to cut some bandwidth on the execution layer,
but you can think of this as like all these kind of small gains
that we can compound,
we can then directly translate that into more scale
for data, which is what ends up powering all the L2s on Ethereum.
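For a rough sense of scale, the target blob counts discussed in this episode translate into data throughput like this (a sketch: 128 KB per blob and 12-second slots match mainnet today, while the Fusaka figure is one of the proposals being discussed, not a final number):

```python
# Rough blob-data throughput at the target blob counts from the episode.
# 128 KB per blob and 12-second slots match mainnet; the Fusaka figure
# is a proposal under discussion, not a finalized parameter.
BLOB_SIZE_KB = 128
SLOT_SECONDS = 12

targets = {
    "Dencun (today)": 3,
    "Pectra": 6,
    "Fusaka (proposed)": 48,
}

throughput_kb_per_s = {
    fork: blobs * BLOB_SIZE_KB / SLOT_SECONDS  # KB of target blob data per second
    for fork, blobs in targets.items()
}
print(throughput_kb_per_s)
# {'Dencun (today)': 32.0, 'Pectra': 64.0, 'Fusaka (proposed)': 512.0}
```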
Okay, so there's the bandwidth we are saving due to MaxEB.
In the same hard fork, in Pectra, we are able to take that bandwidth savings and apply it to bandwidth that we can spend on data availability.
Is that correct?
Yeah.
Yeah, basically.
And it's a bit optimistic, again, because we don't quite know how much bandwidth we'll save, because we haven't seen consolidations yet.
So there were some cases we were concerned about of, like, worst-case scenarios.
And to address those, we've made a change that caps the worst-case block size on the execution layer. So we feel that because we have consolidations
coming in and that we have kind of these mitigations for the worst cases on the execution layer,
we're able to now increase the blob count on Ethereum. Right. And just so I make sure I know this,
so when Pectra goes live and we go from three blobs a block to six blobs a block immediately,
we're going to have a doubling in bandwidth just from blob space. But then the savings from
MaxEB is actually going to come over time, because we will need validators to actually kind of update to this new form factor.
And so we don't know how much bandwidth savings we're going to get.
And it will take time.
And then maybe by the next hard fork, we'll actually be able to say like, oh, we actually
maybe even underestimated how much bandwidth savings we're going to get.
And then we can add that slack to further blob space in the next hard fork, which is Fusaka.
Is that right?
Yeah, that's correct.
And then I think to add a bit more context is we knew we had a bit of wiggle room on the bandwidth
side before coming into this fork.
But again, we had some concerns about like some, like,
worst-case scenarios. So by having some wiggle room and patching these worst-case scenarios,
it means we can comfortably increase the blob size now. And then, yes, to whatever extent we have,
like, more bandwidth in, you know, reserve by the next hard fork. We can still use that
to scale Ethereum. But we also have other plans for how to scale Ethereum more efficiently
for Fusaka, which we can talk about later. Yeah. Okay, so we just hit two birds with one stone. And we talked about the incoming blob space upgrade. We're getting a doubling in the average
blob space going into Pectra. And that's just kind of how it's going to be with all future
hard forks is like we're going to identify that we have some amount of bandwidth slack in the system
and then the number of blobs that we get per block is going to just increase according to what
the all core devs feel is safe. So right now we have three blobs per block; with Pectra, we're getting six. And then that's just kind of how it works. We're going to get a higher number next time
and it's going to be a downstream like kind of calculation of what we think is safe in terms of
bandwidth consumption. Is that right? That's correct. But there's more we can do.
to go even beyond that.
So one way to think about
Ethereum's data availability arc
is before we had these blobs,
all we had was call data on Ethereum.
Call data is effectively data
that you store on Ethereum forever.
So it's quite expensive to use.
And this is why we came up with these blobs.
Blobs are data that is stored on Ethereum
but gets deleted after a couple weeks.
And because most L2s only need the data there
for their exit window and fraud proofs,
they were kind of overpaying
by storing their data on Ethereum
forever when they actually only needed this data to be up for a few weeks.
So this is how we got kind of the first 100x cost reduction in data for L2s, which is what
we have today.
And we can keep improving that amount of like ephemeral data, but in terms of just like a
qualitative change, we went from data is stored forever to data's only stored for a few weeks.
The next step we'd like to take is today, even though this data is only stored for a few
weeks, every single node on Ethereum stores it.
What we'd like to do in the future is move to a world where every node only stores a subset of the data, but then pings other nodes on the network to check, with high probabilistic guarantees, that somebody else has the data. And so what we can say is like, you know, say we have
a million validators, we can partition it so that we have some fixed number of copies of that data
and that the validators are always checking like, okay, is there someone who can prove that they have the
data? And this is kind of another qualitative leap where we go from everyone has the data forever to
everyone has the data for some amount of time to everyone has a subset of the data for some amount of time
and uses cryptography to check that other people have it. And this is the main thing we're working on for
the Fusaka upgrade. It's called PeerDAS, but that's kind of the high-level idea of going from
not only do we have blobs that take up some amount of bandwidth, but if we're able to only download
a subset of the blob data, we kind of go from, okay, every node needs all the blobs, to every node needs, say, one-tenth of the blob data, and that gives us effectively a 10x increase at the same amount of bandwidth. So maybe just to make sure I understand that. There's kind of like two ways to
grow blob space on Ethereum. One is just engineering, which is I think what we're doing with
MaxEB. We're just engineering more efficient systems out of Ethereum. And then there's like
cryptographic step function changes, which is what you're saying with PeerDAS. In addition to that,
where with engineering, we're going to get a doubling. With Pectra, we're going to go from three to six. A great optimization, which is exactly what we want.
But then with cryptography, we are going from like one to ten or even something much higher
than that. And both are happening in parallel.
Correct.
Yeah.
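The probabilistic guarantee described here can be illustrated with a back-of-the-envelope calculation. The sketch below is not the actual PeerDAS sampling scheme or its parameters, just the general shape of the argument: with erasure coding, hiding data means withholding roughly half of the encoded columns, so every independent random sample a node takes roughly halves the chance that missing data goes unnoticed.

```python
# Rough sketch of why data-availability sampling works (illustrative
# numbers only, not the actual PeerDAS parameters).
#
# With erasure coding, a block is recoverable as long as about half of
# its columns are available, so an attacker hiding data must withhold
# just over half. A node sampling k random columns fails to notice the
# withholding only if all k samples land in the available half.

def miss_probability(k: int, available_fraction: float = 0.5) -> float:
    """Chance that k independent random samples all hit available columns."""
    return available_fraction ** k

for k in (1, 8, 16, 32):
    print(f"{k:2d} samples -> miss probability {miss_probability(k):.2e}")
```

With many nodes each sampling independently, the odds that withheld data slips past the whole network become vanishingly small, which is the "high probabilistic guarantee" being described.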
And then, you know, so the plan, you can think of it like: before Pectra, Dencun actually introduced the blobs. So it's like, Dencun was kind of like research getting into production.
And so we got this like step function change.
And then in Pectra, we found like some optimizations and we can kind of improve things.
And now we have this other kind of research-to-prod feature called PeerDAS going into Fusaka.
And then even when PeerDAS ships, there'll obviously be a bunch of optimizations we can do to it. So we expect we'll ship PeerDAS with some initial amount of blobs that'll be a bunch higher. Some numbers being discussed are something like going to 48 on average instead of six, to give you a feel for the order of magnitude, or the order of difference. But then once we have, say, 48, turning on PeerDAS, there'll probably be a bunch of engineering optimizations we can then do to bring that to some higher number that's not, like, you know, a 5, 10x increase.
So I think it's like a good mental model
to think about it.
Okay, beautiful.
All right, I feel like we've actually covered a decent amount of Pectra so far and a little bit beyond.
Going back to Pectra, are there any other updates making it into Pectra that are worth highlighting?
I know there's often just like a bunch of little things that maybe like are too in the weeds to talk about.
But is there anything else, any other stones that we should turn over around Pectra?
I guess one last big one is EIP 7702.
So this effectively is the first in-protocol version of account abstraction we'll have on Ethereum.
And like I was saying earlier, it's something that's been in the works for a really long time.
And the way EIP 7702 works is that you can use any account on Ethereum, so any regular EOA,
but you can delegate to an account which adds smart contract functionality to it.
So, for example, you could have something like I have my EOA and I want to delegate to something
that gives me the equivalent of a Safe.
And this is kind of neat because it means you can use your existing account with all of
your assets and everything there, and you can effectively choose like, oh, this is like the smart
wallet I want to run under this account.
And you can change this over time.
So maybe I have like, you know, my EOA today,
I want to delegate it to some sort of safe-esque wallet.
So I can do that.
But then maybe there's like some other feature that I want to use or like on another
wallet I want to use, say, auto-approvals up to $1,000, right?
Like I want to have some smart wallet that just gives me a better UX.
I'm able to then use 7702 on that separate wallet, or to change the delegation on my
previous wallet and move to a different sort of smart account.
So this is a very nice model, because it kind of gives the users the flexibility to choose what they're delegating to, and to revoke that delegation as well, to say, like, oh, you know, I don't like this thing anymore, I can go and move back to not being delegated, or delegate to some other feature. And it kind of gives the ecosystem the ability to keep innovating on this, right? Where maybe, you know, we expect there's not going to be like hundreds of these different smart wallets, but say there's like, you know, five or ten that are developed in the next year that are, you know, well audited, well tested and whatnot, and they're great, people use them. But perhaps two years from now, someone comes up with like a new idea or a new version, they can just go and, you know, ask people to change their delegation to them. So it's still something that
kind of allows for innovation in the space and doesn't lock you in to say, oh, you delegated once,
you know, you use the safe once and now you're kind of stuck with it, which is how a lot of smart
wallets work today because effectively, you know, if you have like a safe account deployed,
that is like the smart contract, like the address is the safe. So 7702 is more like your EOA has
a pointer to a specific type of smart
account. And so this is the other thing that'll be going on in Pectra. That's kind of a huge deal.
Yeah, this is relatively new to me. So my understanding, correct me if I'm wrong, is that my EOA,
which, for listeners who don't know what an EOA is... externally owned address.
Externally owned address. Wow. Yeah, it's been a while. Externally owned address.
And that's just, like, that's your MetaMask, that's your Ledger, that's your Rabby wallet.
It just means that it's an account that is owned by something external, as in your private keys,
I think, is what's being referenced as external. And then in contrast to that is a smart contract wallet,
which is the owner of a smart contract wallet
is going to be an EOA, not itself.
Okay, so with this, what you were describing,
it kind of sounds like there's going to be some very useful addresses out there
with these bundle of features that I'm going to be able to point my EOA to
and basically have those features be like absorbed into my EOA.
Or, like, for some reason, I'm having this mental model forming.
You could think of it as almost like a plug-in for your EOA.
If that makes sense.
That's what I needed.
Yes. And so I just point my EOA at some smart wallet out there that has been deemed useful by the community,
and all of a sudden I have those features that correlate to my EOA. Is that right?
Correct. And then you're also able to change that in the future, which is the other important parts.
Because today you could just set up like a Gnosis Safe and, you know, Gnosis Safes are great,
but you can't change your Gnosis Safe to like something else, right? It's fundamentally a Gnosis Safe.
Whereas in this model, you're able to say my EOA has like plugin A, and I don't like plugin
A anymore, I want to go to plugin B, and you can do this and still keep the same address and still
keep the same private key that secures the EOA.
So it sounds like a hybrid benefit model: there are some properties about
my EOA that I like, but it doesn't have any smart contract features.
And this kind of sounds like it's trying to thread the needle of being the best of both worlds.
Yeah, correct. And this is kind of why it took so long to get to these types of solutions,
where it kind of blends a lot of what smart wallet developers want with kind of the practical
realities of, okay, users have EOAs today, how do we minimize the migration costs, and also
how do we do this in a relatively safe way? And so one example on the safety side was, you know,
should we allow this? When you delegate to a plugin, should it happen just on Ethereum or
across all chains? And we felt like it should happen on Ethereum, and that if you're delegating
on another chain, you know, you should have to, like, click the button twice. It's worse on a
UX level in a way, but it also means that, say, you're using some random L2 and they have some
sketchy delegation wallet and you happen to delegate to it, you don't get all your assets
drained from every chain forever. Or imagine, you know, like some other ecosystem copies Ethereum,
which often happens, and they decide that like some stuff is safe, that the Ethereum community
doesn't think is safe. If you're using your same address on, say, chain A and chain B,
delegating to something bad on chain B won't automatically mean you lose all your money on chain
A. So this is kind of one of these examples where like, okay, thinking through the actual
security implications was really important. And I think in this case, we've found a really nice compromise between giving a lot of value and, like, functionality to users,
while also making sure that there's, like, some level of security in the system.
Is this update something like a foot in the door for future improvements on the Ethereum smart contract wallet, like landscape?
Or do you think this is something closer to, and we are done here?
Let's wrap a bow on this and call this good.
I think we'll see more.
I think there's a lot around account abstraction that's, you know, not part of this.
At the very least, the entire arc around not having
ETH to pay for gas and having a wallet that manages that
is not part of this. Obviously, if you're using an EOA, it assumes there's
ETH in the balance. The other part around account abstraction
is, okay, what if you want to actually change your private key on an account?
That also is completely out of scope from this.
So I think 7702 effectively gives us a very practical feature
to address most user use cases, which is like,
I want to delegate to some smart account, I want to do auto approvals,
I want to have some social recovery.
So you can address all of these.
It doesn't solve everything.
It's not like some silver bullet.
But we felt it was better to ship something useful short term
than to try and come up with the perfect solution
and wait for another five years,
especially given that people have been wanting something like this
for probably already five years.
Yeah.
I think one of the groups of people that have wanted something like this the most are layer 2s.
Yeah.
Because layer twos have had to roll their own code
when it comes to building smart contract wallets into their chains,
because we all understand that smart contract wallets are very powerful, very beneficial.
We're eventually all going to have them.
Layer 2s, which have been faster to move than the Ethereum layer 1, have just beat Ethereum to the punch
and rolled their own code and built their own smart contract wallets straight into their chains.
But as a result of that, just like the layer 2 fragmentation problem is that they have all built
their own solution and now there's no standard.
So now the Ethereum layer 1 has some semblance of some standard.
How does this conversation carry into layer 2s?
How does this impact layer 2 roadmaps when it comes to smart contract wallets?
I think, yeah, it helps to say there's like a standard there.
There's still a lot to unpack in terms of standardizing account abstraction on layer 2s. I do think, yeah, all these questions around, okay, gas payments, or, you know, what is the default flow you onboard people on, or just interoperability across different L2s, are things we will continue to spend time on.
But yeah, I think for layer one broadly and then for layer two as a default, 7702 is like a really good practical step.
But yes, you will keep hearing about account abstraction debates for the
foreseeable future. I have heard about account abstraction debates since I have been in Ethereum,
since like 2017. Yeah, right. Yeah, yeah. People have been working on this for a real long time.
Okay, Tim, I think that wraps Pectra more or less. Again, there are many other EIPs that are getting
into Ethereum in Pectra, but many of them are small. This is kind of how hard forks work.
Are those all the big ones that are worth discussing? Yeah, I think, yeah, for all practical purposes,
yes, there's about 10 EIPs, I believe, in Pectra, so there's plenty more there.
If you're a smart contract dev, there's things like the BLS precompile that are coming in.
If you're a validator, there's also a bunch of other tweaks around the deposit queue
and using execution layer withdrawal credentials.
So there's definitely more to explore.
I wrote an article about it, and there's, I believe, going to be a section on ethereum.org about the upgrade as well.
So, yeah, if people want to go deep, there's a lot to go into.
But at a high level, MaxEB, increasing the blobs, and 7702 are the biggest kind of new features coming in.
Let's talk about timeline, because we are currently in the testnet
phase of Pectra. So some of Ethereum's testnets are being Pectra-enabled; they're getting Pectra'd as we speak.
Where are we in the timeline for Pectra making its way all the way to the Ethereum layer 1?
How soon can we expect Pectra?
Yeah. So if you follow test nets, you'll notice that there have been a few issues.
Yeah, there's some drama.
So we ran into issues on testnet, mostly due to different configurations between testnets and Ethereum
mainnet. And the reason we have these is, in short, because there's no incentive in proof of stake to run a testnet.
So we kind of have to come up with these ways, like, okay, how are we going to test stuff if no one wants to run a node?
So we have two testnets right now, Sepolia and Holesky.
Sepolia has a whitelisted validator set.
So because we wanted it to be stable, we said only, like, client teams, infra teams, and, you know, whoever else is a professional operator that wants to do this can run it.
So to actually run a Sepolia validator, you need like a special ERC-20 token, which means the deposit contract on Sepolia was different.
And so we ran into some bug because of how the deposit contract on Sepolia was structured relative to mainnet's.
Before that, though, we ran into another issue on Holesky, where Holesky has an open validator set,
but we wanted the ability to mint some amount of ETH and kind of configured the network a different way,
because on Goerli, we ran into all these issues around not enough testnet ETH going around.
So we fixed all of this, but then when deploying the Holesky testnet, because of some changes,
the deposit contract address was different.
And we also ran into some bugs because some clients like effectively assumed that it was the same.
And so because of this, the testing process has been a bit convoluted, with relatively minor configuration issues that ended up kind of delaying things.
And then because they affect the validator set, especially on Holesky, they ended up making it much harder for validators, especially staking pools and people with more complex setups, to test the entire lifecycle on the network.
So to address this, we actually launched a new testnet today, called Hoodi.
So the idea is to have this new test net from scratch,
have it have an open validator set that people can use.
And this is kind of our last step towards testing Pectra before we go to mainnet.
And the idea is that we not only want core devs to test Pectra,
which we've been doing and can still do on Sepolia and Holesky,
but we want people like, say, Lido and Rocket Pool to test this,
especially because there are so many of these features, like MaxEB,
but also some other features around how exits and withdrawals are processed,
that it's really important for them to get right before this goes live on mainnet.
So, testnet launch today.
The Pectra hard fork is happening on this testnet next Wednesday,
which is March 26th for people listening.
After that, we expect it'll take a few weeks for staking pools
and other people to just test their entire infrastructure on the network.
They have to go and redeploy everything and make sure that it all works.
And for all of these staking pool projects, it's not just a set of smart contracts, right, because they run all these validators, they have oracles.
So it's a fairly complex setup. So that is kind of the last part of testing that we want, is we want to make sure things work well for them before we move to Mainnet.
If we found something that, you know, broke how staking pools manage exits or something like that, we'd obviously want to reconsider when we'd move to mainnet.
So assuming all of this works smoothly, the earliest date we said we would fork mainnet was 30 days after Hoodi has
this Pectra fork activated. So, you know, from March 26th, that would be late April.
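The 30-day window works out as a quick date calculation, assuming the March 26th Hoodi fork date mentioned here:

```python
# Earliest mainnet fork date: 30 days after the Hoodi Pectra fork
# (March 26, 2025, per the conversation).
from datetime import date, timedelta

hoodi_pectra_fork = date(2025, 3, 26)
earliest_mainnet = hoodi_pectra_fork + timedelta(days=30)
print(earliest_mainnet)  # 2025-04-25, i.e., late April
```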
It's possible that we have a few weeks after that if there's like some testing to be done,
you know, or we realize there's like some minor issue to fix. But I think we'd be looking
if things go relatively well, sometime around May for Pectra. Okay, so May is the happy case for Pectra?
I'd say it's, yeah, the base case. Like, you know, maybe we find some minor issues or
stuff like that. But, you know, if we find some major issues, and I could be completely wrong,
it would go beyond that. I do think at this point Pectra's been tested quite
thoroughly, but I don't want to bet on this. Like, I feel like if I say, you know, we're all good,
then we will find something else as soon as I drop off this call. But I think the base case is that May's a
reasonable timeline. Okay, so May is the base case. And then after Pectra is Fusaka,
and then after that is Glamsterdam. I know timelines, as soon as we get any
further out than one hard fork, start to become incredibly hazy. And we don't really want to
set any expectations. But understanding that, can we just draw some
hazy-cloud timelines for Fusaka and Glamsterdam?
So for Fusaka, I think there's a couple of things that are really important.
One is we've been working on Fusaka for quite a long time now.
So the two main features for Fusaka are PeerDAS, which we talked about a bit earlier,
and EOF, which is a big overhaul of the EVM.
Both of those things we've been working on in parallel to Pectra for like over a year now.
So we have dev nets, you know, they're running.
The specs aren't completely frozen, but, you know, we have pretty stable specs
that are being iterated on and kind of getting the finishing touches,
which is very good, because it means we're coming into this next hard fork
with the bulk of the code for the hard fork after, for Fusaka, already written.
I think there's a very strong desire across the entire community,
both core devs and everyone else, to see a larger blob increase before the end of 2025.
So that's been the rough target for Fusaka.
So saying, you know, like, we want to ship PeerDAS, we want it to happen this year,
I think what this means is, if there's any other feature in this fork that, say, were to delay PeerDAS, we would just remove it. So we'll probably
keep the scope for Fusaka relatively small. We want to finalize the scope for Fusaka around April,
next month. So the hope is that, you know, when Pectra has gone live on mainnet, we already know
everything that's included and, you know, nothing else is being considered. But then even for those
other things, if we find some weird issue or bug or delay, we would probably err towards removing
them to make sure that PeerDAS can ship in 2025. And then I think the risk there is if there's
something unexpected with PeerDAS itself. Obviously, PeerDAS is the most important thing to ship.
We will do everything we can to ship it this year. But the reason the hard fork timing conversation
is hard is because so many of these things have a heavy R&D component to them. So, you know,
if there's like an unknown unknown, it can be hard to predict like, oh, you know, we didn't
know that this issue would show up because this thing did not exist three months ago. I think
the fact that we've been working on it for over a year means there's a reduced chance of that.
But yeah, you can't completely write off the chance that, okay, as
we move forward with PeerDAS, we see that there is this thing to fix with it,
and, like, the quickest path to scale on Ethereum is still to fix this thing.
So I'd say everyone's trying hard for 2025.
It's unlikely anything other than PeerDAS would be the cause of a delay.
But if something happened with PeerDAS, or some big unexpected twist, obviously that can always delay things.
Celo is transitioning from a mobile-first, EVM-compatible layer 1 blockchain to a high-performance Ethereum layer 2,
built on the OP Stack with EigenDA and one-block finality,
all happening soon with a hard fork.
With over 600 million total transactions, 12 million weekly transactions, and 750,000 daily active users,
Celo's meteoric rise would place it among the top layer 2s, built for the real world
and optimized for fast, low-cost global payments.
As the home of stablecoins, Celo hosts 13 native stablecoins across seven different currencies,
including native USDT on Opera MiniPay, with over 4 million users in Africa alone.
In November, stablecoin volumes hit $6.8 billion, making for seamless on-chain FX trading.
Plus, users can pay gas with ERC-20 tokens like USDT and USDC, and send crypto to phone numbers in seconds.
But why should you care about Celo's transition to a layer 2?
Layer 2s unify Ethereum; L1s fragment it.
By becoming a layer 2, Celo leads the way for other EVM-compatible layer 1s to follow.
Follow Celo on X and witness the Great Celo Happening, where Celo cuts its inflation in half as it enters its layer 2 era, continuing its environmental leadership.
In the wild west of DeFi, stability and innovation are everything, which is why you should check out Frax Finance,
the protocol revolutionizing stablecoins, DeFi, and RWAs.
At the core of Frax Finance is FraxUSD, which is backed by BlackRock's institutional BUIDL fund.
Frax designed FraxUSD for best-in-class yields across DeFi, T-bills, and carry trade returns, all in one.
Just head to Frax.com, get FraxUSD, then stake it to earn some of the best yields in DeFi.
Want even more?
Bridge your FraxUSD over to the Fraxtal layer 2 for the same yield plus Fraxtal points,
and explore Fraxtal's diverse layer 2 ecosystem with protocols like Curve, Convex, and more,
all rewarding early adopters.
Frax isn't just a protocol.
It's a digital nation, powered by the FXS token
and governed by its global community.
Acquire FXS through Frax.com or your go-to DEX,
stake it and help shape FRAX Nation's future.
Ready to join the forefront of DeFi?
Visit Frax.com now to start earning
with FraxUSD and staked FraxUSD.
And for bankless listeners,
you can use Frax.com/r/bankless
when bridging to Fraxtal for exclusive
Fraxtal perks and boosted rewards.
Introducing Unichain.
Built for DeFi, powered by Uniswap, Unichain is the fast, decentralized layer 2 designed to tackle blockchain speed and cost challenges.
With its mainnet now live, you can enjoy transactions up to 95% cheaper than the ETH layer 1,
all while benefiting from an impressive one-second block time that will be getting even faster very soon.
Unichain is the first layer 2 to launch as a stage 1 rollup on day one.
That means it comes with a fully functional, permissionless proof system from the start, increasing transparency and further decentralizing the chain.
More than 80 apps are joining the Unichain community,
including Coinbase, Circle, Lido, Morpho, and Uniswap.
You'll be able to bridge, swap, borrow, lend,
launch new assets, and more from day one.
Built by Uniswap Labs, the team behind the protocol
that's processed over $2.75 trillion in all-time volume with zero hacks.
Unichain truly enhances DeFi experiences
with faster, cheaper, and seamless transactions,
even across chains.
And soon, the Unichain validation network
will allow anyone to run a node and earn by securing the network.
Visit Uniswap.org and swap on Unichain today.
I want to read a tweet from Vitalik that I'll also share on screen here. He tweeted this out on March 1st, so a little over two weeks ago. He says: for 2025, we need Fusaka on the layer 1 with PeerDAS, ideally with a 48-blob target, as in 48 blobs per block. And then he follows up saying: let's aim to get a Fusaka testnet with these blob parameters running the day after Pectra goes live. Now, I don't know, Tim, if there was any previously established expectation on blob count for Fusaka
prior to this, but this was, in my interpretation, something like a doubling, two and a half
X more aggressive in blobs per block than I expected to have by the end of this year. Maybe you could
check me on that. And then also just kind of like fill us in on what the reaction was around the
all core devs space around this tweet. Yeah, I think people broadly agree with this. Maybe some of the
caveats are, one, you know, whether the devnet goes live the day after Pectra goes live or not.
Again, we already have a PeerDAS devnet up, so we are getting close to this.
And I think at a high level, the thing that's still somewhat undecided is: is the best way to test this to start with the highest possible blob count, say like 48 or 72, and then kind of work backwards from there?
Or should we try to run PeerDAS with, say, you know, the same number of blobs, make sure that works, and then ramp up, ramp up, and see what breaks?
I see the value in both approaches, but we haven't quite made a call yet.
So you could imagine something where we decide to test PeerDAS with a lower number of blobs,
and, you know, we're doing, I don't know, say 18/36 or whatever,
and that goes well. And then we try 24/40, and that breaks, and we feel pretty confident in 18/36.
It might be the better move to just ship this, and then in the fork after that go from like 18 to 42 or whatever.
So I wouldn't necessarily commit on this specific number. I think everyone agrees that we want to get there.
but it's really hard to know, like, okay,
is there some weird point where things break
between what we have now and that number?
And we should figure that out before we ship it.
One other idea we've been considering, if we're, like, pretty confident that, you know,
we can reach this high number but we still want to make sure,
is maybe we should ship PeerDAS with kind of like an escalator increasing the blob count,
where we say, okay, when the fork goes live,
maybe there's like actually not even a blob increase,
but then two weeks after that, it goes from six to eight,
and then two weeks after that from eight to ten or something like that.
And this could be kind of nice because it would maybe let us make sure that at every step of the way,
things are smooth.
And then if there is some major issue at some point, you kind of have enough time to intervene.
So the exact rollout to get to that number is still being discussed.
But I think broadly people would agree we should have this in 2025.
We should go for like a fairly aggressive number.
But we want to make sure that we don't just overshoot and then miss out on some, you know,
2-3x improvement over the status quo because we were trying to go for closer to 10x.
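The escalator idea sketches naturally as a step schedule keyed to time since the fork. The two-week cadence and the 6, 8, 10 numbers below are just the hypothetical examples from the conversation, not a scheduled protocol change:

```python
# Sketch of the "escalator" blob-count idea: the fork activates with no
# immediate increase, then the target steps up every two weeks.
# Illustrative numbers only, taken from the examples in the conversation.

TWO_WEEKS = 14 * 24 * 60 * 60  # seconds

# (seconds since fork activation, blob target in force from that point)
SCHEDULE = [
    (0 * TWO_WEEKS, 6),   # at the fork: unchanged from today
    (1 * TWO_WEEKS, 8),   # two weeks later
    (2 * TWO_WEEKS, 10),  # four weeks later
]

def blob_target(seconds_since_fork: int) -> int:
    """Return the blob target in force at a given time after the fork."""
    target = SCHEDULE[0][1]
    for offset, value in SCHEDULE:
        if seconds_since_fork >= offset:
            target = value
    return target

assert blob_target(0) == 6
assert blob_target(TWO_WEEKS) == 8
assert blob_target(3 * TWO_WEEKS) == 10
```

The appeal, as described, is that each step is small enough to watch for trouble, and there is time to intervene before the next one.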
Understanding that the all core devs community, as a group of people, is cautious by default,
I'm still getting a hint of some rumblings that there's just more appetite for aggressive
numbers, more aggressive scaling numbers. Maybe you can comment on that vibe, like, loosely.
Like, what is the appetite for aggressive scaling numbers, or just more aggressive shipping,
around the all core devs community? So I think there's definitely, like, a renewed,
I don't know, interest, or, say, like, acceleration.
That said, you know, the blobs have been live for like a year, right?
Like a year ago, we shipped this cost reduction from calldata to blobs.
This year we'll hopefully ship PeerDAS, as well as kind of this increase, with Fusaka.
So I think it's true that it's always been kind of a priority to scale.
Maybe the thing that's changed recently is realizing that this should be, if not the
only priority, then part of a much smaller subset of priorities.
I feel like the perception, or the thing that kind of got awkward,
is not just that people don't care about scaling,
but that they care about scaling and 19 other things.
And then being able to focus and saying like,
okay, scaling is the only thing or like one of the few things
that actually matters in the short term
and we should double down on that.
I think that sort of like vibe shift has happened.
Definitely on the consensus layer.
I think on the execution layer as well,
there's been more thinking about like what is the right role
and what is the right thing to optimize for.
So now that we have, you know, L2s in production, I think everything around the MEV space as well has kind of settled a bit.
Where for the past couple years, it wasn't quite clear what, you know, proposer builder separation would look like.
What would the bottlenecks be?
Now it at least feels like we have some good understanding of the topology of the execution layer,
and we can actually identify bottlenecks and be like, okay, you know, this is the thing that's preventing scale here and this is the thing that's preventing scale there,
And start addressing those.
I think historically, yeah, it's been hard to do this because there's many moving parts,
and the execution layer has less of a clear lever or clear thing to tweak.
You know, we have the gas limit, but the gas limit is kind of an aggregation of all these
different things, right?
Like all the opcodes, all the state growth, and all of that.
And now I think we understand much better, like, okay, these are all the inputs that go into
it.
And these are the things that will break at these levels.
And so we can kind of prioritize solving them.
But yeah, renewed focus, I think it would be the right way to frame it.
And that said, you know, it's still like an open process, right?
There's still people who disagree with this.
There's still, like, different perspectives.
I think this is something we want to be cautious of not losing, where, you know,
Ethereum is very valuable because of this like plurality of thoughts and of contributors.
So while you want to, like, focus people, if you overdo that, then it's easy to just
have everyone kind of feel excluded and, you know, you end up being like a very small circle
and everybody else is left out.
Okay, so we've pinned down Pectra.
We're in the process of pinning down Fusaka.
Fusaka is mostly PeerDAS and increased blob counts.
But then also this thing called EOF, which I have just learned as of this morning
stands for EVM Object Format.
I don't know what that is.
Maybe you could enlighten me.
What is EOF?
And why are people so excited about it?
Because when I asked Twitter about what I should ask you on this episode, Tim, EOF was a
pretty common thing that was brought up by people.
And I don't know what the hell it is.
So please, inform me.
Yeah.
So the best non-virtual-machine-expert analogy I've heard is:
Ethereum's virtual machine is kind of like a 1950s computer,
and EOF makes it like a 1990s computer.
So the...
It's not 2020s.
No, no, we are like, yeah, literal decades.
And I think this is correct because, you know,
computers are very hard, they're complex,
and Ethereum needs to be super secure.
So it's reasonable to say, like,
you want to have a limited feature set.
Like, we don't want to do everything that modern computers do
on the Ethereum virtual machine because we keep finding bugs in those.
And there's a lot of debate around EOF,
but I think, like, at a very high level,
this is kind of the gist: do you think, like, a 1950s-style VM is sufficient and, like, minimal enough, and we can just build everything else on top?
Or do you think we should actually take the time to improve it to go to something like the 1990s?
Which is not to say we bring it to, like, the absolute cutting edge of computer science, but it's like we add a bunch of things that are kind of pretty table stakes that the EVM lacks.
And one of the, like, you know, pretty high-level parts of this is just, like, separating code and data in the EVM.
So right now, when you have like a smart contract,
the code and the data stored with that code can be kind of all mixed together.
And for a bunch of tooling, it's much easier if you're able to say,
okay, like this is the code and this is the data that goes into the code.
And, you know, current EVM contracts do not have things like that.
There's a bunch of just, like, specific opcodes that, you know, we lack, that would make, you know,
just writing compilers and writing programming languages on top of the EVM easier.
The other thing as well with EOF is there's a bunch of things that we would like to ban in the EVM, for different reasons, that you sort of can't ban on the main EVM because contracts use them.
But now with EOF, we kind of get this fresh start, where we're able to say, okay, now you can't do those things anymore in these types of contracts.
And maybe to zoom out a little bit, like, the way EOF works is it's effectively, like, a versioned type of the EVM.
So you'd see, like, if a smart contract's bytecode starts with these specific bytes, it's an EOF contract,
and if not, then it's not one.
And that's kind of the high-level idea.
It's just saying, like, okay, for, like, these new types of contracts, which are opt-in,
we're not, like, forcing anybody or, like, deleting any old contracts, we just have, like,
more functionality.
We have, like, some better kind of data management of what's in the contracts, and we kind
of remove some stuff that we feel is just, like, not super valuable and kind of problematic
to a lot of the longer-term Ethereum roadmap items.
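The code/data separation and the versioning-by-prefix idea can be pictured with a toy container. This is a hedged sketch: the layout below is invented for illustration and is much simpler than the real EIP-3540 EOF container (which has type sections, multiple code sections, and more); only the 0xEF00 magic prefix matches the actual format.

```python
# Toy illustration of EOF's core ideas: (1) tagged, separately sized code
# and data sections, and (2) recognizing EOF contracts by a code prefix.
# NOT the real EIP-3540 layout -- the header fields here are invented.

MAGIC = b"\xef\x00"  # real EOF bytecode does begin with the 0xEF00 magic

def parse_toy_container(bytecode: bytes):
    """Split a toy EOF-style container into (version, code, data)."""
    if not bytecode.startswith(MAGIC):
        raise ValueError("legacy contract: code and data are undifferentiated")
    version = bytecode[2]
    code_len = int.from_bytes(bytecode[3:5], "big")
    data_len = int.from_bytes(bytecode[5:7], "big")
    body = bytecode[7:]
    if len(body) != code_len + data_len:
        raise ValueError("truncated container")
    return version, body[:code_len], body[code_len:code_len + data_len]

def is_eof_like(bytecode: bytes) -> bool:
    """Versioning check: EOF contracts are recognized by their code prefix."""
    return bytecode.startswith(MAGIC)

# Legacy bytecode: nothing tells tooling where code ends and data begins.
legacy = bytes.fromhex("6001600101")  # PUSH1 1, PUSH1 1, ADD

# Toy container: version 1, 5 bytes of code, then 3 bytes of data.
container = (MAGIC + bytes([1])
             + (5).to_bytes(2, "big") + (3).to_bytes(2, "big")
             + bytes.fromhex("6001600101") + b"\x0a\x0b\x0c")

version, code, data = parse_toy_container(container)
```

The point is the one Tim makes: once sections are tagged and sized up front, tooling can say "this is the code, this is the data" without guessing, and a version byte gives the fresh start for banning old behaviors in new contracts only.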
So is this mainly an upgrade to just the developer experience for
people who are writing Solidity? Or, like, whose life is enhanced the most here? Yes, and it would be
like one level underneath that, almost. It's like an upgrade for Solidity and other languages on top
of it, right? Like, you can think of it as: Solidity is constrained by what the EVM itself can do.
And so by improving what the EVM does, or how it is, like, structuring the work that it does,
then we can improve the experience of the people who write compilers and write programming languages.
And then also, I think again, back to security: anyone who's actually looking
and trying to analyze what contracts actually do,
and especially in edge cases, like auditing and stuff like that,
EOF also helps with some of this.
Okay, I'm finding it hard to analogize about this,
but I think I get it.
There's just like a very low-level structure of how the EVM works,
and we're just upgrading it.
It's just getting better.
It's going from a Toyota Camry,
which is something very dependable and very standard,
but maybe kind of clunky,
to something like a Volvo,
which is also very dependable,
but a little bit more polished.
Yeah, yeah.
And I do think the 1990s, like, example,
I don't know if it's perfectly accurate.
I'm not like a VM expert,
but I think it gives a rough sense of the level of, like, complexity as well.
So there are a lot of objections to EOF around complexity and whatnot,
and while it is true it's a big upgrade,
it's also not saying, like, okay,
we are going to take, like, the absolute cutting edge,
you know, of like how computers are built today,
like, say, I don't know, whatever Apple does with, like, their chips,
and then do that.
Like, we're still, you know, in relatively basic territory.
Okay, great.
So that's Pectra.
That's Fusaka.
Is there anything else left in Fusaka worth highlighting?
Yeah, so we haven't finalized the scope for Fusaka,
but I think, like I was saying earlier,
one thing that's shared by everyone is we want PeerDAS to ship as soon as possible.
So a few of the client teams have already signaled
what they'd like to see in the fork,
and generally the sentiment is that PeerDAS and EOF should be the only two big things.
We should maybe include some select amount of small things,
especially if they're, like, security-related,
or, like, you know, very easy, low-hanging fruit,
but we should not try to do too much.
And actually, Pectra used to have PeerDAS and EOF as part of the single fork.
And last summer we realized that was way too much, and we decided to cut it.
So you can think of Fusaka as, like, the two big things, EOF and PeerDAS, that were originally in Pectra
but felt like they warranted their own fork.
And approaching it with that mindset is probably the right approach.
And like, okay, we already had way too much stuff.
Now we like ship the first half.
let's just ship the second half, maybe, you know, add a couple small tweaks, but not try to, like, make the same mistake again where, you know, instead of having, like, EOF and PeerDAS be the ones that are moved out, we add a bunch of stuff and realize six months later that it's too much. Those two are already pretty big changes. So we should, like, yeah, stick close to that if we can.
And then after these two comes Glamsterdam. Yeah. My intuition is that Glamsterdam, we know the name, but it's just so loose and nebulous about what's actually going to go in it. It doesn't really have an identity yet. It's just kind of this placeholder name that we know
is coming, that we're going to deposit things that we want into it at a later date in time.
Is that kind of the vibe?
Yeah, that's correct.
And one thing that we're thinking through at the All Core Devs level now is we've gotten better
at parallelizing working on the forks over time.
So like now, you know, what's coming in Fusaka we've been working on for like the past
year.
And one sort of experiment we'd like to move towards is, okay, if in the same, the next
month or two we have the scope for Fusaka completely finalized, can we already start planning
in Glamsterdam so that, you know, a few months after that, we know exactly what's in
Glamsterdam, and teams can start working on it, and then, you know, we start looking at
the next thing. And there's a limit on this, because a lot of the time, like, you want the same
people to look at it all, right? Like, you can't completely replace people and just have, say,
you know, like, group A decide what's in the fork and group B implement it, because group B
probably won't have, like, high context around what should go in and the risks there. But I think
just getting better at parallelizing, like, the planning of the fork
and the implementation, so that as soon as we're done with one,
we have prototypes for the next one,
and we're kind of ready to make it the main thing.
Hopefully we can do that so that, you know,
Glamsterdam is set up in the next few months in terms of scope,
and we're starting to talk about what's next
while we start to prototype Glamsterdam and ship Fusaka.
Okay, and this kind of just all goes back to a related conversation
about just, like, Ethereum All Core Devs
being just more aggressive and ambitious about upgrades,
maybe not ambitious, but just learning to really make processes
for all of these things and learning:
where can we take up slack in the development system?
Because, you know, Ethereum's greatest strength is that it's decentralized, it's rough consensus,
but that produces its, like, weak underbelly, which is, you know, it's slow.
It's slow.
And we have these, like, centralized smaller competitors that are really taking advantage of this.
And so, like, learning to have faster upgrade cadences, that energy is coming out of, like,
this angst in the Ethereum community of, like, Ethereum just goes so slow.
Why does it have to go so slow?
And, well, there are very good reasons why it has to go so slow.
But it sounds like the all-core devs calls, they are finding some slack in the upgrades
system that they are taking out of just via operations and process. Is that right? Yeah. And I think
you can think of two causes to slowness. One is like process and operations and we should obviously
remove all of that slack if we can. And the other one is just due to like R&D being hard. Right.
So again, to take PeerDAS as an example, or even something that already happened, take, like, the merge.
When we started working on the merge, like we knew this was the priority and we made it the priority
number one, but as you work on it, you find new stuff, you find new issues, you should still
keep working on the merge, right? It's still the most important thing. So I think a lot of the time,
like there's all these proposals saying, like, we should hard fork every six months or every
quarter or whatever. This sounds good in theory, but imagine we were doing this, say, for the merge,
and it's like, okay, we missed the six-month deadline. Do we just, like, ship a bunch of random,
less useful stuff? It's like, obviously we should not. Like, obviously it's still the highest
priority to ship the merge. And so, you know, if it's going to take an extra six months,
because there's something fundamentally hard about the merge,
we should still be willing to pay that cost.
And sometimes this can stop being true.
Like maybe the merge is probably the most consequential upgrade,
but you could imagine there's something else we work on
that we think is the highest priority
and then we realize it's going to take an extra year.
And the fact that it takes an extra year
might mean it's no longer the first priority.
But I think that's like a different type of slowness
from saying like, okay, are we spending six months
like waiting around to figure out what to do?
And that latter type, we should obviously just, you know,
get better at that. And I think we have over the past couple years, and a lot of it has happened
fairly informally. So I think now we're at the spot where we can start to say, like, okay, this is
how the process actually should work. And we have all these components. We're like, yeah,
we're already prototyping these hard forks, you know, one after the other. We already have like this
new intuition that we should know what comes next. So let's just like make that a real commitment
and say like, okay, by this date, we finalize the scope. By this date, we expect to start prototyping.
And if there is like some hard R&D or engineering issue, like obviously we'll take the
time it needs to fix it, but it's not like the process slowing down the engineering. And so I think
again, we are trending towards that, and hopefully we'll get there in the next year. So Tim,
you probably have out of anyone, the most context for how all core devs works and how the
Ethereum upgrade process works or just Ethereum broadly. What's the most frustrating FUD or misconception
you would like to clear up? Since if you're the high context individual around your domain
and then there's, you know, 10,000 low context individuals tweeting bullshit on Twitter.
What's the most annoying thing that you hear that you would like to address?
So I think one thing that always is weird is when people talk about things being delayed
before there's been, like, a date set for them.
And this is kind of part of my like, you know, engineering unknowns process.
And I've struggled to find the right way to deal with this, because everyone wants to know when the upgrade is going to happen.
You know, like I just said Pectra will happen sometime in May.
Probably.
Yeah, this is like my current base case based on like everything I've seen and I know,
but, like, it's not scheduled formally, right?
Like we don't have like a date that says like it's May 18th or it's May 7th or it's May 23rd or whatever.
And so there is a point where like we do schedule that thing and say it's like a month out.
And then it is like actually a delay to say like, okay, it was scheduled for May 13th and we found a bug and it's now May 28th.
But I think, like, yeah, the hardest part is figuring out how we talk about things when they're still at some uncertainty level,
while not falling either into, like,
we say Fusaka is going to ship, you know, on October 15th,
and then it doesn't.
But then the alternative is saying like,
okay, it'll ship when it's ready and it ships in 2027
and Ethereum's become irrelevant.
So I think on both sides, I kind of feel this pressure of like,
what is the right level of granularity to give to stuff
based on the current level of uncertainty?
And I don't think we found the right way to communicate that,
to say, like, okay, this is, like, a six-out-of-ten certainty
for, like, this window, because there are, like, these and these unknown unknowns.
If like I could somehow fix that, I think it would make a lot of the perception issues much better.
One thing we haven't talked much about going through some of these hard forks, Pectra, Fusaka, Glamsterdam, is scaling the layer one.
So where does the conversation around scaling the layer one rank in priority?
What does that conversation look like?
How does that conversation fit into the future of Ethereum?
So I guess by scaling the layer one, this means scaling the execution layer, right?
Yeah, scaling the execution layer, be it just gas throughput and/or decreases of block time.
Yeah.
So I think this is something that's been getting more and more attention.
Again, there's like bad ways to do this and they have like security issues.
But what I was saying earlier is now, I think we understand a bit better what some of those bottlenecks are and what the ecosystem looks like.
So there is a renewed interest in it and at least a better understanding of, okay, for example, you know, state growth was one of the things that over time like stopped us from scaling the layer one.
but then Paradigm did a bunch of research last year
and realized, like, it's maybe not as much of an issue
in the short term, up to some level,
as we thought it would be.
So, like, understanding, like, okay,
actually history growth is, like, a bigger bottleneck on nodes
in the short term.
State growth will become a bottleneck on nodes
in the medium term.
And then, you know, we have these other bottlenecks
at different places.
That understanding feels like it's improved a lot
in the past year,
and it will continue to do so.
And this is kind of the fundamental challenge here.
Again, we have the gas limit.
Like, we can say we just increase the gas limit,
and that is kind of a very coarse lever, but it would be good to understand, like,
what is exactly the effect of doing that? For example, one other effort that's been happening recently
is thinking about can we reprice opcodes in Ethereum to make a bunch of stuff cheaper because
it's currently overpriced so that, you know, we can keep the gas limit the same, but there's a bunch
of stuff where we can increase the amount of those operations in a block, you know, for fixed size.
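The repricing lever can be shown with back-of-the-envelope arithmetic. The gas limit and opcode costs below are made-up round numbers, not actual Ethereum parameters or any specific EIP's values; the point is only that, at a fixed gas limit, lowering an overpriced opcode's cost raises how many of those operations fit in a block.

```python
# Illustrative arithmetic only: all numbers here are invented, not real
# Ethereum gas costs or repricing proposals.

GAS_LIMIT = 30_000_000  # a fixed per-block gas budget

def max_ops_per_block(gas_limit: int, cost_per_op: int) -> int:
    """How many copies of a single operation fit in one block."""
    return gas_limit // cost_per_op

before = max_ops_per_block(GAS_LIMIT, 6)  # hypothetical "overpriced" opcode
after = max_ops_per_block(GAS_LIMIT, 4)   # same opcode after a repricing

# Same gas limit, so the coarse "block size" knob is untouched -- but 50%
# more of these operations now fit per block, because the price better
# matches the real resource cost.
increase = after / before
```

This is why repricing is attractive: it scales specific workloads without touching the gas limit itself and the node-resource risks that come with raising it.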
So a way to frame it is on the consensus layer side, we have like a very clear North Star just like, okay, how do we like scale the blobs?
On the execution layer side, the North Star is clear, the gas limit, but the mechanisms, or, like, the inputs to it, are just much more complex than on the consensus side, because on the consensus layer you just have, like, some amount of data and you're trying to figure out how to send it, shard it, distribute it efficiently.
So yes, I think it is like important to scale the L1 and people are working on it.
But do we have a clear understanding of, like, okay, this is the actual short-term practical thing?
We kind of have 10, you know, different things that people are working on, and hopefully those kind of compound together.
The conversation of scaling the layer one is just so multivariate, and there are many different inputs, and that makes it complex and harder to, like, really optimize, whereas with scaling blob space, it's like PeerDAS and blob targets.
And again, like, it's not like an impossible thing on L1, but I think we're now in a better spot to, like, actually start to tune those knobs than we were a
couple years ago. And to give you an example, like, just the idea of, like, okay, where does the
block get built, right? There's something that's changed a lot because now most blocks get built
through like MEV. And one of the things that's kind of a bottleneck on this is like, when should
the block be verified? And the way that like block verification works, it's just like, you know,
you kind of build this block, you verify it all within the current slot and then you're good. But now,
because of MEV, people want to wait until the last possible second to actually build a block.
So we have kind of less time for verification, which is fine now, but if we scale the gas limit,
it becomes a problem.
So if we know that we have high resource builders that can build these blocks very efficiently,
should we just delay when we actually verify them and say, okay, you build a block for slot
one, it gets verified in slot two.
And that's probably a way that we can leverage the way the ecosystem has been set up
to scale the L1, but it's something that wasn't as obvious, say, two years ago.
That's just, like, one random example, but it's always
all these types of things where, okay, now that we actually understand things better and the equilibria are a bit better set, we can have better intuitions for this.
And hopefully this means we will scale the L1 as well.
I think there's a more applied research work to be done there.
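The build-late, verify-next-slot idea Tim describes is essentially pipelining. As a hedged sketch (the slot mechanics are heavily simplified and the scheduling is invented for illustration, not how any client actually works), shifting verification of slot n's block into slot n+1 frees the end of each slot for late, MEV-driven building:

```python
# Simplified sketch of pipelined block verification across slots.
# The scheduling below is invented for illustration; real consensus
# timing and fork-choice handling are far more involved.

from collections import deque

def run_slots(blocks, pipelined: bool):
    """Return, per slot, (slot, block built, block verified)."""
    pending = deque()  # blocks built but not yet verified
    schedule = []
    for slot, block in enumerate(blocks):
        verified = None
        if pipelined and pending:
            verified = pending.popleft()  # verify LAST slot's block now
        # The block is built late in the slot (builders wait for MEV)...
        if pipelined:
            pending.append(block)         # ...and verified next slot
        else:
            verified = block              # ...and verified in the same slot
        schedule.append((slot, block, verified))
    return schedule

blocks = ["B0", "B1", "B2"]
same_slot = run_slots(blocks, pipelined=False)
pipelined = run_slots(blocks, pipelined=True)
```

In the pipelined schedule, B0's verification lands in slot 1, so the whole of slot 0 can go to building. That extra verification time is the lever the transcript points at for raising the gas limit without squeezing the end of the slot.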
One last conversation, Tim, before I let you go, just the conversation around interoperability and fragmentation of layer two's, is any part of that conversation actually in scope for all core devs?
And if so, what is in scope for the interop conversation for all core devs calls?
because I think that's mostly in the ecosystem standards thing,
so it's outside of all core devs calls.
But to what point is that actually a conversation
that is talked about in all core devs?
So there are some cool protocol things we could do.
The challenge is they are a lot of work,
and they make a lot of complex assumptions,
so they will take years to ship.
So yes, this is something we think about,
but it's not going to happen in like the next six months.
So I think if you're looking in the short term,
then going all out on like,
what can you do at the application layer
in terms of standards is the way to go.
But longer term, like, yeah, Justin has this idea around, you know, native roll-ups, right?
You can imagine this being a direction we go in, where we say, okay, there's, like, a common
precompile on Ethereum that rollups use, and because we know they're all using this,
they can interoperate better and get, kind of, you know, L1 guarantees around this.
The challenge is this is like an extremely complex piece of code that, you know, is custom to every roll-up now.
So we need to figure out like, okay, what is the right abstraction to have on L1?
How do we test this?
Rollups have very different, like, performance and security guarantees than the L1; you know,
they're kind of architected in, like, a different way.
And so it is going to be years of work to figure out what is the right set of abstractions
that kind of works for rollups and works for the L1.
But yeah, I believe pretty strongly we should do this.
Like I think if you think of Ethereum L1 as kind of like a utility provider for like roll-ups,
like we need better plumbing and infrastructure and stuff there.
And we should also be willing to say like, okay, after consulting with everything that existed,
we think this is the right approach
and nudging existing roll-ups to migrate to that.
I'm pretty bearish on the idea of Ethereum
creating, like, an enshrined rollup or something like this
to go and compete with what exists.
But it should be fine to say,
hey, these are like some standards,
say contracts on Ethereum or protocol features
that give you inter-op features that give you security,
but you're going to have to migrate to adopt them.
And obviously, the migration should be possible for roll-ups.
Like, it shouldn't be something that says, like,
oh, you have to completely abandon
your project and start a new rollup. But yeah, that entire process of just designing the right thing,
getting people to adopt it, that is not going to happen within six months. So in the short term,
I'm very bullish on just app-level and wallet-level standardization. Tim, this has been super
informative. I've learned quite a lot. So thank you for coming on and just helping me run through
some of these things. Is there anything that I haven't asked? Any, like, stone I've left unturned about
any of the near-term hard forks in Ethereum or anything adjacent to that? No, I think we covered a lot.
Yeah, can't think of anything right now. All right. Well, one last thing before
I let you go: Summer of Protocols. This is something that we did on Bankless, actually,
the first Summer of Protocols, if maybe listeners missed those episodes. Maybe you can talk to them
about what Summer of Protocols is because I'm reading your tweet here. Summer of Protocols is back
for 2025. Just give the listeners a little bit of context of what's going on here.
Yeah, so the idea with Summer of Protocols is that when I was working on Ethereum a couple
years ago, I realized there's not a lot of analogs for the thing I work on, right? Like,
if you're, like, a software engineer and you think about, like, writing good software,
there's a lot of frameworks out there for how you write good software.
If you're starting a company, there's a lot of frameworks for how you run a startup.
There's not much out there in terms of how to work with protocols.
And this comes up often when people talk about Ethereum, right?
They say, like, oh, Ethereum's like this tech platform or Ether's money or Ether's like a world computer.
And so there's like all these different aspects that represent part of it but are not kind of a full picture.
So in 2023, when we launched this, we wanted to figure out: can we actually understand protocols better?
And so we funded a bunch of researchers in many different domains to study protocols and try to see, okay, do they have some common learnings?
And it turns out that they did.
Last year, we wanted to test, like, does this actually work in production?
So we funded a bunch of people to, like, improve protocols in different domains.
We had some people working on, like, cryptography protocols, some people working on, like, wildfire management, some people working on plurality voting, to figure out, like, okay, are there, like, more applied ways of working with protocols that we can generalize?
And we also found there was a bunch of stuff. Now in 2025, we feel like we have all this knowledge, but it's very poorly organized. It's a bit chaotic. And so we want to try and make it understandable for the world. So what we're funding is a bunch of teachers and educators to try and take everything that exists and kind of condense it into different types of classes, online courses, books, and whatnot, to have a sort of unified explanation of how all of this works together. So if you're interested in this, we have our kickoff this
Friday, March 1st, but you can also go to summerofprotocols.com and get all the info.
And then all the research we've published over the past two years is just up there and free
for people to read. But yeah, it has been really exciting to try and think through like, okay,
how should we think about Ethereum better? And one unexpected side effect of the program is
a lot of like really smart people who don't necessarily care about blockchain or even like
some of them just straight up don't like crypto have been like really willing to engage with us.
And I found, like, talking about Ethereum problems through the lens of, like, I have a protocol and it has these problems has been really helpful to, like, learn from them and exchange with them rather than saying, like, okay, just like a crypto thing or like a crypto economic problem that we're trying to solve.
So, yeah, that's the idea behind Summer of Protocols.
It's run by Venkatesh Rao and Timber Shroff this year.
So, yeah, check it out.
Yeah, I'm listening to the episodes that we put out on Bankless about some of these researchers that went out and did their research about protocols,
and then learning about kind of the connection points
between seemingly very disparate ideas
as it relates to protocols.
And a protocol is already something so incredibly meta.
Like Ethereum isn't one thing.
It's actually a collection of things
being held up by all these different client teams.
It's something very meta.
And then now we're talking about learning about protocols.
So we're already getting even more meta than that.
And so like kind of learning from these perspectives
of these researchers, protocol researchers,
it felt kind of like touching the metal of the universe,
like something feels very close
to, like, the bytecode
of how this universe is working,
and so I thought it was very, very interesting.
I'd never got any sort of education
about anything related to that anywhere else
other than some of the episodes that we did.
Yeah, and maybe one last thing on that,
like, Venkat framed it really well.
It's like, if you think of the field of economics,
before we had this idea of supply and demand,
people still bought and sold stuff to each other, right?
So you could maybe walk around the world
and be like, oh, this person's selling cows
and this person is selling boats and whatnot.
And like, you could build an intuitive sense
of like, oh, more cows means like the cow price is lower.
But then, you know, you kind of have Adam Smith come in, like, okay, we can formalize
some of these concepts and say, like, this is a way to think about these things that are
already happening.
This is, I think, like, the cleanest possible example.
I don't know that we'll get something as concise for protocols, but that's a rough
idea saying, like, you know, protocols exist.
People obviously work on them.
They improve.
But they are kind of different in, like, the way they're instantiated.
Like, the best practices for software development
don't all apply to Ethereum. And the best practices for, like, legal practice
don't all apply to, like, international relations and, like, how you do, you know, multi-nation
protocols in terms of conflict. So I think trying to surface like what is a better way to
frame those things is the goal. And yeah, hopefully this cohort gets us a lot closer.
Tim, thank you for joining me on Bankless today. Yeah, thanks for having me.
Bankless Nation, you guys know the deal. Crypto is risky. You can lose what you put in.
But nonetheless, we are headed west. This is the frontier. It's not for everyone. But we are
glad you're with us on the Bankless journey.
Thanks a lot.
