Unchained - Cheaper Fees and No More Free Lunch for Layer 2s? Inside Ethereum's Fusaka Upgrade - Ep. 966
Episode Date: December 3, 2025

Ethereum for the first time ever has rolled out a second major upgrade within a year: Fusaka has gone live less than six months after Pectra. In this Unchained podcast episode, Offchain Labs Prysm Team Ethereum Core Developer Preston Van Loon joins Protocol Watch founder Christine D. Kim to unpack how Fusaka will make transactions cheaper, improve the UX for users, and impact layer 2 chain operators. They also discuss the relatively short time to deployment and how this is impacting client and layer 2 teams. Preston also explains why he is less nervous about Fusaka than he was about Pectra, and the indicators of success. Plus, what comes next after the hard fork.

Thank you to our sponsors! Uniswap | Mantle

Guests:
Christine D. Kim, Host of the Ready for Merge Podcast and writer of ACD After Hours
Preston Van Loon, Ethereum Core Developer working at Prysm by Offchain Labs

Previous appearances on Unchained: How Will ETH React to Ethereum’s Shanghai Upgrade?

Links:
Unchained: Ethereum Fusaka Upgrade Clears Final Test Before December Launch
Ethereum Gave Away Too Much for Too Long. Will Its Pivot Be Enough?

Timestamps:
🚀 0:00 Introduction
👀 4:39 How Fusaka is scaling Ethereum's data layer without imposing big hardware requirements
💡 11:44 How Fusaka is “a big stepping stone” to Ethereum's danksharding vision
💥 15:08 Why the Fusaka launch timeline is a significant milestone for Ethereum developers
17:12 How Fusaka will make signing transactions easier
👀 18:37 How Fusaka will impact layer 2 operators
🤔 22:49 Are layer 2 chains ready for Fusaka?
⁉️ 29:30 Can L2s benefit from PeerDAS without features like backfilling?
🤔 32:41 Did developers have enough time to prepare for Fusaka?
🫣 34:31 Do faster development timelines impact client diversity?
🧏 40:24 What should have been done differently with Fusaka preparations
📽 42:27 The best way to watch the Fusaka upgrade in real time
💡 44:20 Why Preston is less nervous about Fusaka than Pectra
🚦 46:01 Indicators of Fusaka success
📝 46:45 Preston's risk assessment for blob parameter only hard forks

Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
Hi everyone, Laura here.
I'm actually on vacation this week, so we have a special guest host stepping in, Christine Kim.
Christine has been covering Ethereum's roadmap for years, and she sat down with core developer Preston Van Loon to dig into the upcoming Fusaka upgrade.
What's inside it, what it means for users, and how it affects layer 2s and client teams.
It's a great conversation, and I'm excited for you all to hear it.
Enjoy.
Are you a builder who needs to add on-chain trading to your product? The Uniswap Trading API from Uniswap Labs offers plug-and-play access to some of the deepest liquidity in crypto. It's on-chain execution at an enterprise level. More liquidity, less complexity.
Visit hub.uniswap.org to learn more. Mantle is launching the Global Hackathon 2025 to accelerate the future of real-world assets, with a $150,000 prize pool, backing from a $4 billion treasury, and direct access to Bybit's 7 million plus users. This is the ultimate ecosystem for builders.
They historically move quite slow by design, right? Like, there's just a lot of moving parts here.
And the fact that we can get two in one year is really impressive because it means like people are finding
a rhythm and like getting through. And what I'm hearing from my peers, other core devs,
I'm not hearing excessive burnout. Like this isn't like, oh, we just worked twice as hard because we were already
you know, working 110%.
Hey, everyone.
Welcome to this special episode of Unchained.
I'm Christine D. Kim, the host of the Ready for Merge podcast and the writer of the ACD After
Hours and BTC Before Light Newsletters.
I'm very excited to host this episode of Unchained because we're going to be talking about
the Fusaka upgrade, which is coming up very, very soon.
And I'm joined by a very good friend of mine, Preston Van Loon, the founder of Prismatic Labs and a full-time client developer for the Ethereum consensus layer client Prysm.
So, hi, Preston.
Hey, happy to be here with you.
Yay, I'm glad that we're doing this episode together because, as I was just mentioning
before we started this recording, when I was ideating the show or this episode with the
Unchained team, we knew we wanted to talk about Fusaka and also give it a little bit more
of a focus on how Fusaka is going to impact L2s.
And uniquely, the Prysm client is maintained by Offchain Labs, which builds Arbitrum, an L2.
So I feel like, yeah, you're in a position where you're building the L1, but have very close access to a major L2.
Yeah, I would say that Arbitrum, or Offchain Labs, is our biggest customer.
So we care deeply about what's happening in L2s and how we're improving and impacting them.
So there's a lot to talk about here in Fusaka.
This is a really exciting upgrade.
Yeah.
Let's, I want to talk about the specific impact of Fusaka on L2s, but very generally,
let's talk about the kind of like biggest EIPs or biggest impacts coming up from the upgrade.
So Fusaka, as we were talking about before the recording, has about 12 EIPs in it.
Can you give an overview of like the main features, the biggest impact of this upgrade on various Ethereum stakeholders?
Like, give us the bird's-eye view of this major upgrade.
Yeah, I think that almost all these are very exciting.
There's some that are like more, I don't know, maintenance level EIPs.
But this is a very exciting EIP set because we have a really interesting headliner called PeerDAS, and the theme here is blobs, right?
We're trying to scale the data layer of Ethereum through blobs, and do it in a way that keeps it practical to run a node, to keep the decentralization going, and keep the home staker alive, and at the same time scale the data layer, scale the L1.
There's a little bit of both in this upgrade.
And if you like, we can dive into each of the EIPs or kind of touch on them a little bit.
We start with PeerDAS.
This is one-dimensional PeerDAS.
So the idea: in Deneb, two forks ago, we introduced this blob layer, right, where you can post arbitrary data.
It's called a blob because it's sort of undefined what it is.
It could be anything.
It could be a picture of a cat.
It could be calldata or the batch data from an L2,
which is more practically what we see.
We see some people playing games, like actual games, on the data layer.
There's something called Blobhouse, which is kind of interesting.
But really, PeerDAS is about supporting L2s, the people that want to build on the L1 and need the data availability layer that we provide.
And so when we introduced this, we said,
well, let's upgrade the blocks so that people can buy blobs. They're all the same size, 128 kilobytes, and you can buy up to six of them per block.
And then everybody who runs a node downloads the blocks and the blobs, downloads everything, retains everything for a period of time.
And when you want to start scaling this, you want to start increasing,
and when you're asking more from the network, say, if we went to 12,
now suddenly everyone has to double the amount, right?
And it just doesn't scale very well.
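To put rough numbers on that scaling problem, here's a hedged back-of-the-envelope sketch. The blob size and slot timing are approximate, and real usage sits below the maximum, so treat these as upper bounds:

```python
# Back-of-the-envelope cost of the pre-PeerDAS model, where every node
# downloads and retains every blob. Approximate figures only.

BLOB_SIZE_BYTES = 128 * 1024          # one blob is ~128 KB
SLOTS_PER_DAY = 24 * 60 * 60 // 12    # one slot every 12 seconds

def daily_blob_data_gb(max_blobs_per_block: int) -> float:
    """Upper bound on blob data per day if every block were full."""
    return max_blobs_per_block * BLOB_SIZE_BYTES * SLOTS_PER_DAY / 1e9

print(daily_blob_data_gb(6))   # ~5.7 GB/day at 6 blobs per block
print(daily_blob_data_gb(12))  # ~11.3 GB/day: doubling blobs doubles every node's load
```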
So there are techniques that we have in computer science where you don't need to store everything, right?
The novel invention in PeerDAS is to use erasure coding, which is a fancy way to say if you have a subset of the data, let's say 50% of it, you can reconstruct the other 50% of it.
This is technology that was really popular,
I think in the 90s with compact disc players.
If you were jogging with your disc player
and it skipped, what they would do is if it didn't read the disk
for a moment, it has enough data to continue playing the music.
So we can use a similar idea.
Instead of having all the data, everyone having all the data
all the time, you just have a little bit of it.
And for a normal validator, let's say, what I mean by normal is a home staker with one validator.
They don't even need to retain 50%.
They're only going to retain some part of that.
I think they're going to have eight data columns.
And there are either 64 or 128.
I can't remember.
But they don't have to retain all of it.
They're going to get a random number of those.
And they're going to sample.
So they're reasonably convinced that, okay, most of the data is there.
I'm convinced that it exists.
So it makes it easier to scale the blob layer, the data layer,
without having to impose these big hardware requirements, right?
You're not necessarily going to need more disks, more disk space.
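To make the erasure-coding and sampling idea concrete, here's a toy sketch. It is not the real protocol (PeerDAS uses Reed-Solomon extension with KZG commitments over a large prime field, and 128 columns); it only demonstrates the core property Preston describes: extend k chunks to 2k columns, and any k surviving columns recover the originals.

```python
# Toy demo of the erasure-coding property behind PeerDAS: treat k data chunks
# as evaluations of a degree-(k-1) polynomial, extend to 2k evaluations, and
# reconstruct the originals from ANY k survivors. Toy parameters only.

P = 2**31 - 1  # small prime modulus standing in for the real field

def lagrange_eval(points: list[tuple[int, int]], x: int) -> int:
    """Evaluate the unique polynomial through `points` at x, mod P."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

data = [42, 7, 99, 1234]              # k = 4 original chunks
k = len(data)
base = list(enumerate(data))          # chunk i lives at x = i

# "Extend" the data: 2k columns, of which the first k are the originals.
columns = [(x, lagrange_eval(base, x)) for x in range(2 * k)]

# Lose half the columns; keep any k of them (here columns 1, 4, 6, 7).
kept = [columns[i] for i in (1, 4, 6, 7)]

recovered = [lagrange_eval(kept, x) for x in range(k)]
assert recovered == data
print(recovered)  # [42, 7, 99, 1234] rebuilt from half the columns
```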
So right away, when Fusaka goes live, and it probably is already live if you're listening to this now, the blob count stays at six target, nine max, whatever it was in Pectra; it didn't change right away.
So all of a sudden, node operators are storing less data and achieving the same goal.
That's a huge win.
But right away, we can take advantage of that and say, well, now let's increase the target blob count and the max blob count, and kind of reach past where we were.
So we'll see, you know, the Fusaka fork happens on December 3rd, and then on December 9th,
we have something called a blob parameter-only upgrade, or BPO hard fork. And this is in the sense that there's not a client update that goes out. You know, for every hard fork, every upgrade, everyone usually has to get their software updated. But if you're on Fusaka now, you already have this built into your client. You don't need to update again. This will be the blob parameter-only upgrade, and that'll be on December 9th. It increases to 10 as the target and 15 as the maximum. And then again, about a month later, on January 7th, it'll increase again to a target of 14 and a maximum of 21. So that's a pretty substantial update to the data layer. We have a lot more capacity. That's very interesting for L2s.
So what does it exactly mean? For an L2 user, fees will go even lower, I imagine, which is kind of hard to imagine because fees are so low right now.
On mainnet, I saw the gas price being 0.1 gwei.
I've never seen it that low in my life.
And that was amazing to see.
But anyway, we're scaling, and that's why it's so low.
Actually, part of Fusaka was that the L1 gas limit went from 45 million to 60 million as a client default.
And because people are updating early, that already happened.
And fees went way down because there's now a lot more block space to use on that.
L1. That's really great. And then on top of that, we're scaling the data layer, so L2s benefit from that. It's pretty cool.
That's, I think, good context for our listeners. The Fusaka mainnet upgrade is happening on Wednesday, December 3rd, and we're recording this on Monday, December 1st, two days before. So depending on when you're getting this episode,
it could be that fees are even lower by the time you're listening to this episode
because of the scaling improvements that are coming through PeerDAS.
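For reference, here's the rollout schedule as quoted in this conversation, encoded as a small sketch. The dates and parameters are the ones stated in the episode, not independently confirmed:

```python
from datetime import date

# The rollout as described in the episode: Fusaka activates with Pectra's
# blob parameters unchanged, then two blob-parameter-only (BPO) forks raise them.
BPO_SCHEDULE = [
    (date(2025, 12, 3), 6, 9),    # Fusaka mainnet: target 6, max 9 (unchanged)
    (date(2025, 12, 9), 10, 15),  # BPO1: target 10, max 15
    (date(2026, 1, 7), 14, 21),   # BPO2: target 14, max 21
]

def blob_params(on: date) -> tuple[int, int]:
    """Return the (target, max) blobs per block in effect on a given date."""
    target, maximum = 0, 0
    for activation, t, m in BPO_SCHEDULE:
        if on >= activation:
            target, maximum = t, m
    return target, maximum

print(blob_params(date(2025, 12, 25)))  # (10, 15) over the holidays
```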
I want to double click on one of the things that you kind of mentioned, Preston. You said that PeerDAS, which is the headliner, the main feature in Pectra, or sorry, not Pectra, Fusaka, is one-dimensional.
Can you talk a little bit about some of the features that still have not yet been implemented for PeerDAS to be fully,
for that data availability sampling to be fully functional?
Because as I'm aware, some of those benefits around being able to scale blobs, being able to increase throughput without impacting the home staker,
that's kind of still the goal even after Fusaka.
Can you talk a little bit about what exactly, what kind of features are not actually enabled through the partial PeerDAS that's implemented through Fusaka and still need to be done after this upgrade goes live?
Yeah.
So what I mean by one-dimensional is that when we have a blob, we split it into columns, as I mentioned before, I think 128.
That's the one dimension: they're split into columns, and you retain a subset of that.
Multi-dimensional PeerDAS, or the full danksharding vision, is a lot more elaborate.
I feel like I couldn't do it justice to explain it correctly.
I'm afraid to misspeak.
But my understanding is that you can use data columns from other blobs to reconstruct the blob in question.
So you would be able to retain less data and be able to reconstruct more.
So that's kind of interesting.
But this is kind of like a building step towards that.
And, you know, when we're talking about going to like 21 blobs per block, I mean, we want hundreds at some point.
So this is, you know, there's still a lot of work to do here.
This is a, you know, a big stepping stone towards that.
And the technique we use with the blob parameter-only forks is that we can, you know, incrementally ramp up to that instead of going on day one to a significant jump in blob capacity.
We can do it incrementally.
It's nice for multiple reasons. Like, node operators have time to react if they're hitting some resource limits, which they shouldn't, since these increases are quite conservative.
And the fee markets don't have such a shock to them. Instead of capacity just tripling overnight, it'll be spread out; in this case it'll be over a month and a half, like six weeks. So that's pretty nice.
Yeah, some incremental upgrades. I mean, to that point, we were talking before this recording about how it's going to be the first time developers have shipped two major network upgrades in one year.
Technically, it's three because, as you said, there's the BPO-1 hard fork, which means you guys are shipping three upgrades in one year.
And by the number of EIPs that have been implemented, this is by far the biggest year yet for Ethereum development.
Pectra was the largest upgrade by far with 11 EIPs. I believe this one, Fusaka, is the same or similar, 12 EIPs.
How significant of a milestone do you think it is that Ethereum developers are shipping
this many features in this short of a time?
It's huge.
I mean, historically we've moved quite slow by design, right?
Like, there's just a lot of moving parts here.
And the fact that we can get two in one year is really impressive because it means like
people are finding a rhythm and like getting through.
And from what I'm hearing from my peers, other core devs, I'm not hearing excessive burnout.
Like this isn't like, oh, we just worked twice as hard because we were already, you know, working 110%.
But something in the way it was organized meant we were just able to do it.
On the Prysm team, we kind of have like an A and B team internally, where some people are working on H-star already, some people are working on Gloas, while we're wrapping up Fusaka.
you know, people are jumping ahead.
So kind of leapfrogging each other, like half the team is working on one fork and half on the next one.
So things are happening in parallel.
And I think that speaks to, you know, the Ethereum community and core devs wanting to move quickly.
Like we want to scale, you know, we like to ship things.
And seeing, you know, Pectra was the biggest fork like of all time.
And then we just were like, you know, let's do it again.
And we did it again in one year.
That's really impressive.
And, you know, this upgrade has just a lot of really exciting things.
Like, not only the blob parameter stuff, but there are EIPs for, like, UX that I'm really excited about.
Maybe we could talk about that.
There's an EIP that adds a precompile for the elliptic curve that is commonly used for passkeys.
So, you know, if you ever said, like, I wish that everybody had a hardware wallet: well, if you have a phone, an iPhone or Android, now you do.
Because there's a cryptography chip in there where you have your private key
baked into the phone.
So your phone is like the hardware wallet.
And now that hardware can sign Ethereum transactions natively.
That's huge.
Like that's going to unlock like sign your transaction with your face, your face ID.
You know, that's going to be super cool.
Or your thumbprint or whatever people are using for passkeys today.
You know, passwords are kind of going away, and we're kind of moving towards this model where you have this secure enclave and some kind of biometric. And seeing that for Ethereum is really cool.
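For the curious, here's a hedged sketch of what querying that precompile might look like from off-chain tooling. The precompile address and the 160-byte input layout (message hash, r, s, and the public key's x and y, each 32 bytes) follow my reading of the P-256 verification EIP (EIP-7951, I believe); double-check the final spec before relying on them.

```python
# Hedged sketch: querying the new secp256r1 (P-256) verification precompile
# with a plain eth_call via web3.py. The address and input layout are my
# reading of EIP-7951, not confirmed in the episode -- verify before use.
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("http://localhost:8545"))  # any Fusaka-era node

P256_VERIFY = "0x0000000000000000000000000000000000000100"  # assumed address

def p256_verify(msg_hash: bytes, r: int, s: int, x: int, y: int) -> bool:
    """True if (r, s) is a valid P-256 signature on msg_hash for pubkey (x, y),
    e.g. a signature produced by a phone's secure-enclave passkey."""
    payload = msg_hash + b"".join(v.to_bytes(32, "big") for v in (r, s, x, y))
    result = w3.eth.call({"to": P256_VERIFY, "data": payload})
    return len(result) == 32 and result[-1] == 1
```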
So that benefits L1 and L2. Everyone who's using the EVM, where this EIP is supported, will see a user experience benefit, for sure.
Any other particular impacts from the long list of EIPs in Fusaka that you think are important for L2 operators specifically to know?
Yeah, so there are a few things that impact them.
One of them being that, you know, as we're increasing the block size,
what I mean by size, like the gas limit for blocks, there becomes risk for, you know,
certain attacks, like just filling it with junk and stuff like that.
So folks have advocated for an EIP that limits the gas for any particular transaction.
The new limit will be 30 million gas.
And I don't think that will necessarily be an issue right away, but for some L2s,
I could imagine they use very expensive, like, batch processing or transaction settlement through calldata that could impact them.
I mean, there are ways around it, right, but that's something to call out.
It mostly affects how L2s operate; end users won't really notice or care how it gets done, as long as it's getting done.
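As a toy illustration of the operational change (the 30 million figure is the one quoted in the episode; the final EIP value may differ), a batch poster that used to send one giant settlement transaction can greedily split work into several transactions under the cap:

```python
# Toy sketch of what a per-transaction gas cap means for a batch poster:
# split work into several transactions that each stay under the cap.
TX_GAS_CAP = 30_000_000  # figure quoted in the episode; check the EIP

def split_batches(batch_gas_estimates: list[int]) -> list[list[int]]:
    """Greedily pack per-batch gas estimates into transactions under the cap."""
    txs: list[list[int]] = [[]]
    used = 0
    for gas in batch_gas_estimates:
        if gas > TX_GAS_CAP:
            raise ValueError("single batch exceeds the per-tx cap; shrink it")
        if used + gas > TX_GAS_CAP:
            txs.append([])
            used = 0
        txs[-1].append(gas)
        used += gas
    return txs

print(split_batches([12_000_000, 11_000_000, 9_000_000, 25_000_000]))
# -> [[12000000, 11000000], [9000000], [25000000]]  (three transactions)
```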
There are other things for L2s, like the deterministic proposer lookahead.
And what this EIP means is that on the consensus layer, we don't know exactly who's proposing in the next epoch until this epoch finishes.
We have a good idea, but there's like, you know, a little bit of randomness that can happen.
And so we can't make any decisions based on the knowledge until the first slot of an epoch.
And so when you have sequencers that want to work closely with L1 block producers,
you need to have that deterministic capability of predicting, well, what's going to happen in the next 12 minutes.
And that way you have a little bit of advanced notice.
Maybe there are some based sequencers, or someone who's working with validators to get their stuff committed in a particular order.
Imagine you wanted to do, you want like two L2s that like batch their things together
and so they can do like cross-chain intents or something like that.
people can get creative with the idea,
but having that gives a little bit more flexibility
on what you can build for L2s.
So that's interesting.
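As a sketch of how a sequencer might actually read that lookahead, here's a query against the standard beacon API. The proposer-duties route is the existing spec'd endpoint; the localhost URL is just a placeholder, and the EIP number (7917, I believe) is my attribution, not stated in the episode:

```python
# Hedged sketch: reading the proposer schedule via the standard beacon API.
import requests

BEACON = "http://localhost:5052"  # your consensus client's REST port (placeholder)

def proposers_for_epoch(epoch: int) -> list[dict]:
    """Return the proposer schedule (pubkey, validator_index, slot) for an epoch."""
    resp = requests.get(f"{BEACON}/eth/v1/validator/duties/proposer/{epoch}")
    resp.raise_for_status()
    return resp.json()["data"]

# With the deterministic lookahead, the next epoch's schedule is fixed in
# advance, so an L2 sequencer can plan around specific L1 proposers early.
for duty in proposers_for_epoch(123_456):
    print(duty["slot"], duty["validator_index"])
```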
Another impact for L2s is the fees for blobs.
Historically, we have made a huge fumble in the fee market for blobs, where typically, or for the longest time, they were being purchased for like one wei, like the bare minimum you can do.
And this was just like a naive assumption of, oh, we're going to find equilibrium in the fee market.
But one wei, or whatever it may be, that small number doesn't even reflect the computational cost to process a blob, right?
So we're kind of giving this away for free and not even charging what it costs the node operators to process a blob.
So now there's a new EIP that ensures that the minimum fees are at least enough to cover
the execution cost.
And then the fee markets are more predictable. You know, it can still be very, very cheap, right?
But it's not going to be free anymore, which just makes sense.
I don't think L2 fees will really notice this much, but the narrative of, like, L2s eating the L1's lunch kind of goes away a little bit
because, you know, at least they're going to be paying for what they're using, which is just fair.
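To illustrate the principle, here's a toy calculation. This is not the actual formula from the blob-fee EIP (EIP-7918, I believe); the overhead constant below is made up for the example. The idea is just that the blob price gets floored at something tied to the execution base fee:

```python
# Toy illustration (NOT the exact EIP formula) of a blob-fee floor: the price
# of a blob should at least cover the execution-side cost of processing it,
# instead of drifting down to 1 wei.
GAS_PER_BLOB = 131_072  # blob gas units per blob

def effective_blob_fee(blob_base_fee: int, exec_base_fee: int,
                       blob_exec_overhead_gas: int = 50_000) -> int:
    """Floor the per-blob-gas price so a blob pays at least as much as the
    execution gas it imposes. blob_exec_overhead_gas is a made-up stand-in
    for the real constant in the EIP."""
    floor = exec_base_fee * blob_exec_overhead_gas // GAS_PER_BLOB
    return max(blob_base_fee, floor)

# At a 1 wei blob fee and a 10 gwei execution base fee, the floor kicks in:
print(effective_blob_fee(1, 10 * 10**9))  # ~3.8 gwei per unit of blob gas, not 1 wei
```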
That's quite a lot of different EIPs that L2s do need to be aware of.
And as I understand, because I've been tracking like the developer calls, there have been moments
in the lead-up to this mainnet upgrade for Fusaka where L2s were kind of caught off guard by these different EIPs and their impact.
Specifically on the topic of how PeerDAS,
the main headliner feature is going to impact them.
And I think even before this call or this recording,
which again,
we're recording like just two days before the upgrade.
I think I saw someone from Offchain Labs asking in the Discord whether there would be kind of a limit in the Geth client around how many blobs you can attach for your transaction to even be valid, and kind of functionality there changing that was unexpected.
Do you think L2s are prepared?
Like, are you worried at all that there's going to be any L2s that, yeah,
unexpectedly find that maybe they haven't updated their KZG proofs for blob transactions and they should have, but they didn't?
And these sort of unexpected ways in which their operations may not work after the upgrade?
Yeah, this is a good point.
It's a new question, too.
The blob transaction, like the transaction you do to purchase blobs,
it does change, right?
So if you haven't supported that, it's not going to work.
I think that, you know, the way that we go about conducting these upgrades
is that we update the test nets pretty well in advance, right?
So I think, like, Sepolia was updated, like, over a month ago, maybe two months ago.
And I heard that that's, you know, when people found out for the first time, like, oh, my thing is going to break.
And I need to fix it right now because it's going to break in minutes and, you know, a short amount of time.
That's kind of the purpose of test nets.
Like, it certainly could be better communicated, saying: here are the breaking changes coming up.
I think the EF blog post did a pretty good job of outlining these.
There are things changing with, like, opcode gas costs.
So there are little tweaks, and it's kind of hard to communicate every scenario in which this is a breaking change.
People do really creative things, and you can't predict all of it.
For the major L2s,
I'm confident that they are well aware of what's happening
and that when Fusaka activates,
it's not going to be like, oh, your favorite L2
just stopped posting to the L1.
I don't expect that to happen.
Maybe it could, but I believe that the people running these operations
are paying some attention because we've been talking about it for a few months and it's been
in the test net for a little while.
And if you haven't been running your chain on a testnet for the last six weeks, then maybe you don't know, but people should know by now, right?
That's something.
Hey, founders and developers.
If you're looking to bring on-chain trading to your product, wallet, or platform,
check out the new Uniswap Trading API from Uniswap Labs.
It's your plug-and-play gateway to global on-chain liquidity.
No deep crypto experience required, and no need to manage complex integrations or ongoing maintenance.
With the Uniswap Trading API, you'll get enterprise-grade on-chain execution,
combining both on-chain and off-chain sources for the most competitive prices.
Simply put, more liquidity, less complexity.
And this isn't just any API.
It connects directly to the Uniswap Protocol, which has securely processed over $3.3 trillion in total volume,
with zero hacks. So stop worrying about liquidity infrastructure and focus on building your product.
Get access to the same liquidity that powers billions and swaps through one powerful API.
Visit hub.uniswap.org to learn more.
Mantle has entered a new phase as the distribution layer connecting TradFi and on-chain liquidity.
To accelerate this vision, the Mantle Global Hackathon, 2025, is inviting developers to build scalable,
RWA and DeFi products. Why build on Mantle? It's an ecosystem built for builders. You get direct access to Bybit's 7 million plus users for potential listing exposure, support from the $4 billion Mantle treasury, and mentorship from top VCs like Spartan and Animoca Brands. With six tracks prioritizing RWAs and RealFi, and a $150,000 prize pool plus grants, this is your chance
to deploy on a high-performance modular L2.
Register now.
The link is in the show notes.
Okay, so L2s are prepared for the upgrade.
They had over a month to prepare on those testnets, which is good.
One other concern is: do you think that L2s are going to be able to use PeerDAS to its full capability, given the lack of features around PeerDAS as it relates to things like backfilling?
I recently learned about this, actually from a member of your own team, Preston, that
backfilling is a really important feature that will allow nodes to retrieve data from blobs
if they accidentally go out of sync with the network.
And this is a feature that is not ready in all consensus layer clients, if I understand
correctly. Yeah, yeah, we think it's a really important feature. So backfill, what it means is, you know, the typical way people bring a node, a new node, online is they sync from a checkpoint, usually the last finalized checkpoint, because that's really what you care about. You care about the unfinalized stuff, stuff that just recently happened. So when you start from the checkpoint, you don't have anything that happened before it. And syncing from genesis is no longer viable. There are actually attacks you can do to trick people about what's happening with the state of the world.
So syncing from genesis is not recommended.
It also takes forever.
Actually, I'm running an experiment to see how long it takes, and I'm in like batch number three.
So not recommended.
But L2s, they need to retain these blobs, this data, usually forever.
They want to have all of the blobs since blobs existed, and they want to have all the data columns since PeerDAS existed.
They want everything.
So we're thinking about bringing a new node online.
What do you do?
I mean, there are techniques, you know, make your own backups and whatnot.
But how do you get it directly from the network?
Well, you start from a checkpoint sync, and then you do an operation called backfill, which is going backwards from that checkpoint and filling in the data that you're missing.
In particular, you want blocks, and you want blobs and data columns.
This feature, we've been working on at Prysm for several months now.
It's been under heavy review.
It's been one of the biggest features we've ever implemented.
And just over the holiday weekend, we were still putting in the time to get it working, and I'm happy to say that it works now, and that's going to be in the next release, 7.1 for Prysm.
I'm sure other clients will have it soon, if not already.
It's pretty important.
But there's another part to this too.
As I mentioned earlier, you need 50% of data columns
to reconstruct a blob, and a default client only
retains 8 out of 128.
So we have this other feature we're releasing, we're calling a semi-supernode.
And what it means is you'll retain exactly 50%.
You don't need 100%. You could, but you'd just kind of have redundant data in that sense.
You could be a supernode, which is just: I'm going to retain everything forever, I want it all.
Maybe some of those people exist, but you actually only need 50%.
So for L2s, the recommended configuration is you checkpoint sync with backfill, and you should be running this semi-supernode flag if running Prysm, so that you retain enough of the data columns that you can query any blob that you want that you've already synced. That's pretty important when they want to recall the data. If you only had the eight data columns, you'd have to then go ask the network, and the default for most clients is: I'm only going to retain the data columns for the minimum amount of time that I need to, which is 4,096 epochs.
So if you want historical data forever,
you need to have something there.
In our case, that also turns off, I believe that would turn off, any pruning. Because as a home staker, you really kind of want to push the limits of what's the minimum amount that I need, so you can actually delete old blobs and old blocks and old things after some point. But for the people that want it, the semi-supernode or supernode flags will be available soon, and you can retain all that data forever.
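Here's a conceptual sketch of the two knobs in play. The names are hypothetical, not Prysm's actual flags or internals; they just restate the custody fractions and the backfill direction described above (for scale, the 4,096-epoch minimum retention works out to roughly 18 days):

```python
# Conceptual sketch -- hypothetical names, not Prysm's actual flags or
# internals -- of the two knobs discussed: how many of the 128 data columns
# a node custodies, and backfill walking backwards from the checkpoint.
NUM_COLUMNS = 128

CONFIGS = {
    "default": 8,                         # home staker: a small sampled subset
    "semi-supernode": NUM_COLUMNS // 2,   # exactly 50%: enough to rebuild any blob
    "supernode": NUM_COLUMNS,             # keep every column
}

def can_reconstruct_locally(columns_held: int) -> bool:
    """Erasure coding lets you rebuild a blob from any 50% of its columns."""
    return columns_held >= NUM_COLUMNS // 2

def backfill(checkpoint_slot: int, oldest_wanted_slot: int):
    """Walk backwards from the checkpoint, filling in blocks and columns.
    A real client asks peers that actually custody each column."""
    for slot in range(checkpoint_slot, oldest_wanted_slot - 1, -1):
        yield slot  # stand-in for "fetch block + data columns for this slot"

for name, held in CONFIGS.items():
    print(name, "can reconstruct locally:", can_reconstruct_locally(held))
```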
That's really cool to hear that those features are coming.
And just to confirm, so these features are hard fork independent.
Clients can just implement these features asynchronously from one another.
Yeah, all the building blocks will be there in Fusaka, and we're just stitching them together to say, oh, if I start backfilling.
Backfill already exists for blocks and blobs, but not for data columns.
So the tricky part is you need to go find the data, because the way it'll work is you need to find a peer that actually has that data column, because the peers you have may not have it.
And so you're kind of sampling everybody to find it.
And yeah, that's pretty much ready.
We've been testing it for weeks, and I'm finally happy with it.
So we're going to ship it soon.
Do you wish, in hindsight, looking back, that you guys had had more time with Fusaka, like more time to prepare these features, test these features?
Like, I think that was a big topic of discussion and debate
throughout these last couple of months.
What's your take, Preston?
Well, more time comes at a cost.
So if I could have time for free, I would take it.
Like, we were certainly a little stressed at times.
And even right now, everything's done, but you still kind of get a little nervous every time there's a hard fork.
You've been testing it forever and you feel really good, but you're still a little nervous there.
So a little more time would have been nice, okay?
I guess I expected that this was going to ship in January
or like Q1.
I thought it was a little silly to do anything in the holiday season.
Like, in the U.S., we just had Thanksgiving last week, and now we're going to hard fork.
And then the end of year holidays are coming up and people are taking time off to rest.
So, yeah, I would have liked more time, but what is the cost, right?
Like, Ethereum needs to scale, and it needs to scale now.
We have a lot of really exciting activity happening.
People are, like, finally looking at Ethereum as, like, a legitimate place for real finance.
So I hope that my peers were not feeling too stressed out.
I empathize with you.
This is another huge fork in a year in which we already had one.
More time would have been nice, but we also need to ship and ship fast.
I was under the same impression as you, Preston.
I totally thought this was going to be a Q1 upgrade.
And then it just was not, which again, like you said, is not a bad thing,
so long as the upgrade goes smoothly and delivers on the impacts and the features that it's supposed to deliver on.
One other kind of meta question around the shorter timelines for upgrades.
I did notice there were a couple of clients, at least four in the last seven days, that had to put out releases pretty last minute, close to the fork date.
There was some discussion among developers of, you know, how safe is it really to ask node operators to be upgrading to a critical fix just a couple of days before the actual hard fork day, as in the case of the Nimbus client.
Yeah, I mean, for clients that are shorter on resources, you know, they perhaps don't have as many developers as the Prysm team to be working on things in parallel.
Do you think the shorter timelines for upgrades mean less client diversity?
Like, smaller clients just will not be able to keep up, or will perhaps be, yeah, less tested, more buggy than the more well-resourced clients that are able to keep up with this faster development timeline?
Well, I would say that a release between the announcement of final releases and the mainnet upgrade time is pretty common.
In fact, we would have issued one last week if we had backfill ready.
There are some fixes in there, nothing super urgent.
There are some fixes in there, nothing like team urgent.
usually the stuff is found in testing, but you know, we're constantly finding things that
like to improve this log message is confusing or there's no action to take, trying to like
make it easier on, like, oh, maybe there was a bug with like something important because all
features are important or we shouldn't have them.
There's a some bug of something important, but it's not like critical of the safety of the chain.
So those like really urgent ones, like hot fix or like if you found like a bug that would really, really mess stuff up, you need to get out right away.
And whether or not that's safe, it's kind of hard to say. You know, ideally you would want your client team to issue releases quickly, and when they find bugs, fix them.
Bugs happen, you know, whether your team is large or small, and team size really does not have much bearing on bugs, right?
In fact, you know, you can have too many cooks in the kitchen, which can be a bug risk, where no one's really sure what's going on because everyone has sort of a small piece of the puzzle they've been working on. And then someone leaves the team, and you're like, oh, I don't know how P2P works, or I don't know how the database works.
You kind of have risk there.
When you have a smaller team, I feel like those folks can be more streamlined, and they kind of have a better understanding of everything, looking more closely and being a bit more methodical.
So it's not black and white, like a bigger team can go faster.
It's not always the case.
In fact, you know, over the years we've always kind of fluctuated between, like, six and ten people, and it's not a big difference between six and ten.
We have more capacity to get things done, but so much of it is, like, collaborative.
Like, if we had 20 people, I don't think we could get it done twice as fast, you know.
It does help that we are able to split up a little bit
and split the work and kind of get ahead.
We kind of, I think on our team,
or in my personal experience,
feel a lot of pressure about being left behind.
Like the scenario you described of like,
oh,
this client,
minority or otherwise is not going to have this feature in time.
What if we ship without them?
And that's just like a terrible feeling, right?
Like you're trying to get your stuff done as fast as you can.
And if people are like,
oh,
let's just leave without them and see what happens.
So I feel that like kind of like pressure of like,
I need to get it done.
And when you have that kind of pressure,
sometimes you can have bugs,
bugs happen.
But I don't think that a minority client is inherently more risky
than a majority client or a more well-funded client.
It just kind of depends, you know.
It just really depends.
That's a good point, though, also: a good point about how the minority clients aren't necessarily the ones that move slower than majority clients.
But the concern around being left behind is also valid. Some of the decisions on the mainnet timeline were made even though certain clients were not ready with their mainnet releases this time around.
Looking back, is there anything at all that you think should have been done differently
with Fusaka's preparations, main net preparations?
I don't know.
I actually think this one was a pretty good one.
I think this one went pretty well.
There are always things we could have done differently.
Like, we have things in our team where we keep making the same mistakes. A mistake being, well, our problem is that we'll make a big change, like I'll just implement Fusaka all at once, right?
And then you ask someone to review it, and they're like, dude, this is like 10,000 lines of code. I don't want to review this.
So if we had more time, I think people would take a little bit more of it to split things up and to make smaller changes. And then, you know, those can go out more quickly, piecemeal, get it done piecemeal.
And have more time for documentation.
Although like really this fork felt like it was really well done.
Like we learned a lot from that.
Manu from our team, he was our PeerDAS champion.
And right away he wrote this document, this wiki page. It's called peerdas.net.
And I reference it all the time because I'm like, how does PeerDAS work again? Like, I have to keep recalling that information.
And the fact that he took the time up front to write this all down made review, implementation, and testing so much easier this time, because everyone kind of understood what's happening.
And, you know, while Fusaka had the same number of EIPs as Electra, I feel that the number of consensus changes was smaller.
Really, there's one big one, which is PeerDAS, and then, like, the deterministic proposer lookahead and some small things.
Whereas Electra had, like, consolidations and other big things that involved some refactoring that was really big and painful; it just touched way more of the protocol. PeerDAS is like: let's just work on blobs.
It just felt a lot easier this time.
I don't know.
And maybe that's why we're able to get it done in December and not in January.
That's surprising. I didn't know that it touched fewer parts of the consensus layer and was perhaps easier to ship than the Electra upgrade.
Because from my perspective, a non-technical view, I'm like, oh, it's the same; the number of EIPs is similar. And still, like you said, PeerDAS is a major shift.
In the last couple minutes of our time together, I want to talk about what people should be
looking out for in terms of mainnet activation. So we talked about its impact on L2s. We've talked
about other EIPs in Fusaka, as well as the preparation process. And thank you for juggling my hard
questions around the timeline for it.
But yeah, for our listeners that are going to be trying to, you know, watch it happen live,
what would you say is the best way to watch the Fusaka upgrade happen in real time?
Run your own node.
And if that's not for you, there are live streams, I think.
Like, should EF or the Ethereum YouTube channel is going to have something,
surely something on Twitter spaces,
spaces where we want to call it.
Those are going to be places to be.
Usually there's a call
and someone sings
a song sometimes. It's pretty fun.
I really do enjoy
the actual fork when it happens.
Although it is a little stressful
and we're always a little nervous,
it's still a fun time to get
people together and see folks
on stream and
talk about what it was.
And then there's the fact that we switched to a time-based system with proof of stake.
We can schedule these things very reliably instead of waiting for a block difficulty.
So it becomes a lot more fun that way.
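That scheduling really is just arithmetic on the slot clock. A quick sketch (the genesis timestamp is the beacon chain's mainnet launch; the epoch plugged in below is purely illustrative):

```python
# Proof-of-stake forks activate at an epoch number, and wall-clock time is a
# pure function of the epoch -- which is why the stream can start on time.
from datetime import datetime, timedelta, timezone

GENESIS = datetime(2020, 12, 1, 12, 0, 23, tzinfo=timezone.utc)  # mainnet genesis
SECONDS_PER_SLOT, SLOTS_PER_EPOCH = 12, 32

def epoch_start(epoch: int) -> datetime:
    """UTC time at which a given epoch (and any fork scheduled for it) begins."""
    return GENESIS + timedelta(seconds=epoch * SLOTS_PER_EPOCH * SECONDS_PER_SLOT)

print(epoch_start(411_392))  # plug in a fork's activation epoch (illustrative)
```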
We'll definitely link the live stream in the show notes of this episode.
I'll send over a couple of show notes to include for this episode.
I mean, on the topic of your nervousness around Fusaka: in terms of the technical risk, the riskiness of Fusaka compared to prior upgrades, how would you rank it? Are you more nervous this time, less, similar?
For this one, I'm less nervous in the sense that it touched a lot less of the software. Electra had changes to attestations and the beacon state and a lot of different things, a lot of operations. This was a little bit simpler in that sense. I think that our friends
at EthPandaOps, these guys are doing incredible work, testing and building confidence.
So the more updates I see from them, I feel really good that, you know, our client is working well with other clients and they're testing it and they're giving us feedback constantly.
So the, you know, the systems that we have are constantly improving.
So I feel pretty good.
And, you know, with the timeline that we have, we have months of testing. You know, Fusaka has been live somewhere on a testnet for months already, and nothing bad has happened.
So going to Mainnet will feel very similar.
Usually these events, the upgrade is very boring because nothing happens.
And that's the way it's supposed to be.
So, yeah, always nervous because there's, you know, could be a bug hiding somewhere.
But we actually do, I think, a pretty good job of finding those well in advance and keeping the bugs out.
So feeling good.
Good.
I'm really hoping for a very uneventful evening on Wednesday myself.
It's going to be like 5 p.m. local time for me, which is great because the merge, I think, was at like 4 a.m.
Yeah.
Which was awful.
Would you, I mean, obviously for me, some of the things I'm looking for is like network
finalization after the upgrade.
Are there any other kind of important indicators of a successful upgrade that you're
going to be watching out for either immediately after the upgrade or like a couple weeks
or months after to know that it was a hit?
Yeah.
Well, we'll want to see right away that the new blob transactions are coming in.
People are buying blobs.
They're getting propagated through the network.
And they get saved to disk.
That's going to be really key.
That's the main feature.
And then, of course, seeing that data finalize is kind of like the ultimate test, right?
If we fork to Fusaka and there are no blobs at all, and it finalizes, that's kind of not quite the success metric.
You need both.
We need to see it finalize with the data that we're looking for. I expect it will be perfectly normal. Maybe, you know, maybe there's an L2 that didn't update, so there's slightly less volume of blobs coming through, but there will still be some, and we'll see it finalize. And it'll be interesting to see
in the coming months when we do the blob parameter-only updates. Like, that's another exciting hard fork that will happen. And seeing the blob fee markets, you know, stabilize around that, it's going to also be interesting to see how people are pricing blobs when those updates come.
Yeah, those are good flags for our listeners that want to keep tracking the upgrade,
even after it goes live.
One last question on the blob parameter-only hard forks that are going to be coming up after Fusaka.
How, like, on our toes should the Ethereum ecosystem be for those forks? Like, if Fusaka goes well, do you think developers are going to be able to kind of rest easy over the holidays in between the blob parameter hard forks? Talk us through a little bit of the risk assessment of the BPOs that are coming up.
Yeah, I think if you're a node operator, like a validator,
it's going to be a non-event for you.
You'll want to maybe check in on your hardware usage the next day,
kind of like monitoring it.
The thing is that when they increase the blob capacity,
it's not going to like fail spectacularly.
You're going to slowly see your disk usage increase.
So you kind of want to pay attention to that.
Although the way I understand the math that we have now
is that these updates are still like at par with what you were operating today.
So you shouldn't need to, you know, go out and buy a hard drive right now.
I think you're going to be fine.
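A rough sketch of that disk math, using approximate figures (~128 KB per blob, a default node custodying 8 of 128 columns, the minimum 4,096-epoch retention window of roughly 18 days). Real column data is erasure-extended, so treat this as order-of-magnitude only:

```python
# Rough disk math a node operator might sanity-check after a BPO fork.
# Approximate figures; column data is erasure-extended in practice, so
# treat this as an order-of-magnitude estimate only.
BLOB_BYTES = 128 * 1024
CUSTODY_FRACTION = 8 / 128        # default node custodies 8 of 128 columns
RETENTION_SLOTS = 4096 * 32       # minimum retention window (~18 days)

def retained_gb(target_blobs_per_block: int) -> float:
    """Approximate steady-state disk used for custodied blob columns."""
    return target_blobs_per_block * BLOB_BYTES * CUSTODY_FRACTION * RETENTION_SLOTS / 1e9

for target in (6, 10, 14):  # Fusaka, BPO1, BPO2 targets
    print(target, "blobs/block ->", round(retained_gb(target), 1), "GB")
```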
L2 operators will want to be monitoring those updates just to see the exciting drops in blob fees.
You know, they'll plummet right away when the capacity goes up significantly.
That'll be interesting.
Yeah, I mean, you're not going to need to update your client.
So, you know, if things are working smoothly for you and you're not in need of these exciting features I discussed, like the backfill, then you may not have to update your node for the rest of the year, and you can just kind of coast through the end of 2025 into next year.
Until the next upgrade, Glamsterdam, which I'm sure we'll talk about when the time comes.
Yeah.
Well, thanks so much, Preston, for walking through the entire Fusaka upgrade with me on this
episode of Unchained.
Yeah.
Thanks again for having me.
It's been really fun.
Yes.
And I hope everyone who is watching this guest episode of Unchained also found it very informative and helpful in getting you prepared for what to look out for and expect from the next major Ethereum upgrade, Fusaka, which will soon be a thing of the past.
And thank you to the Unchained team for letting me take over for this episode.
I wanted to note that if you want more regular deep dives on protocol development for Ethereum, I have a weekly podcast, Ready for Merge, and I also have a bunch of newsletters on my Substack, ChristineDKim.substack.com. There is also an interview with Preston on the Substack that just
goes into his journey as a core developer. So you can find all of that in the show notes of
today's episode. We'll also include links to the Fusaka live stream and the Ethereum
Foundation blog posts for Fusaka. If you are a node operator on Ethereum and still have not upgraded, there will be links for you in today's episode to, yeah, get more information and upgrade.
But yeah, so thank you, everyone for listening.
And thank you to Unchained for letting me host.
Bye, everyone.
Bye, and see you.
Unchained is produced by Laura Shin with help from Matt Pilchard, Juan Aranovich, Margaret Curia, and Pam Majumdar.
Thanks for listening.
