Bankless - 188 - The Next Ethereum Upgrade: Blobspace 101 with Domothy
Episode Date: September 18, 2023
Dom, aka Domothy, is a researcher at the Ethereum Foundation who is working on the research and development of some key Ethereum protocol upgrades like EIP4844, danksharding, and MEV Burn. We last had... Dom on Bankless about that last one, MEV Burn, and this time we’re bringing him around to fully understand an incoming new property of Ethereum that EIP4844 is going to introduce… a new resource market… called BLOBspace… kind of like Blockspace… but for Blobs! What are blobs? What is Blobspace? How is it different from blockspace and blockspace markets? What will it do for Ethereum and its rollups? These are the questions we explore with Dom today. ------ ✨ DEBRIEF | Ryan & David unpacking the episode: https://www.bankless.com/debrief-blobspace You know how we say blockchains sell blocks? Well, soon Ethereum’s going to be selling more than just blocks. It’s going to be selling blobs too. We’re just a few months out from the biggest Ethereum release since the merge and no one’s fully mapped out the implications. But it’s going to be huge. - Ethereum’s getting a new product to sell… it’s called Blobspace - The cost of transacting on L2s is about to drop toward zero… - The economics of ETH gas and the burn are about to change forever… Blobspace, EIP4844, proto-danksharding…that’s what the geeks call this new ETH feature upgrade. This is everything you need to know about Blobspace with Ethereum Researcher Domothy in this absolute banger of an episode. ------ 🎁 Check your wallet with our brand new tool: Claimables https://bankless.cc/GetClaimables ------ 📣 AAVE V3 is Here! 
http://app.aave.com/ ------ BANKLESS SPONSOR TOOLS: 🐙KRAKEN | MOST-TRUSTED CRYPTO EXCHANGE https://k.xyz/bankless-pod-q2 🦊METAMASK PORTFOLIO | MANAGE YOUR WEB3 EVERYTHING https://bankless.cc/MetaMask ⚖️ ARBITRUM | SCALING ETHEREUM https://bankless.cc/Arbitrum 🛞MANTLE | MODULAR LAYER 2 NETWORK https://bankless.cc/Mantle 🦄UNISWAP | ON-CHAIN MARKETPLACE https://bankless.cc/uniswap 🗣️TOKU | CRYPTO EMPLOYMENT SOLUTION https://bankless.cc/Toku ----- TIMESTAMPS 0:00 Intro 7:20 Why We’re Doing This Episode 8:45 Blobspace 9:40 History of Blobspace 11:30 Sharding & Dead Ends 14:00 Execution Sharding 16:50 Layers of a Blockchain 21:00 Ethereum Rollups 22:50 Vitalik’s Thought Process 26:10 Ethereum Development Hidden Genius 30:00 Current State of Ethereum 34:00 What is a Blob? 39:25 Blobspace Explained 44:20 Validator’s Perspective 47:00 Data Availability vs. Storage 56:31 User’s Perspective 1:00:30 How a Blob Becomes a Blob 1:04:00 Properties of Full Danksharding 1:05:20 New Math & Tech Behind Blobs? 1:12:00 L2 Blobspace & Scaling Factors 1:15:40 Blobspace Economics 1:20:00 Block vs. Blobspace Balancing 1:23:20 L2s Competing For Blobspace 1:32:10 Ethereum Subsidiaries 1:39:00 What’s Next? 1:41:30 Closing & Disclaimers ----- RESOURCES Domothy https://twitter.com/domothy Domothy article on Blobspace https://domothy.com/blobspace/ ----- Not financial or tax advice. See our investment disclosures here: https://www.bankless.com/disclosures
Transcript
Welcome to bankless, where we explore the frontier of internet money and internet finance.
This is how to get started, how to get better, and how to front run the opportunity.
This is Ryan Sean Adams, and I'm here with David Hoffman, and we're here to help you become more bankless.
You know how we say blockchains sell blocks?
Well, soon, Ethereum's going to be selling more than just blocks.
It's going to be selling blobs, too.
That's right, blobs.
So we're just a few months out from the biggest Ethereum release since the merge,
and I think no one has fully mapped out the implications of this, but it's going to be huge.
So Ethereum is getting a new product to sell. It's called blob space, and that is in addition to block space.
The cost of transactions on layer 2s is about to drop towards zero.
The economics of ETH gas and the burn are about to change forever.
We're calling this upgrade the blob space upgrade: EIP 4844,
proto-danksharding. That's what the geeks are calling this new Ethereum feature upgrade.
And we want to cover today everything that you need to know about blob space in this coming
Ethereum upgrade with Ethereum researcher Domothy. And this is an absolute banger of an episode.
I couldn't be more proud to present this to the bankless nation. A few takeaways here.
Number one, we go through what is blob space? Number two, we go through the history, how we actually
got here, this roll-up-centric roadmap. And number three, we go through the economics. What
does this mean for Ethereum's economics, for ETH value accrual, for ETH the asset. David, why was this
episode significant to you? I think if there's any sector of conversation that you and I really
just love, it is the intersection of cryptography and economics. Yeah. Numbers and economic
manifestations on top of these protocols. Yeah. That's our love language. This episode is that.
So, I mean, we've talked about EIP 4844. We've talked about Proto-Danksharding. Those are the same
things. We've defined it a handful of times in a number of different capacities. We've never done
the aggressive headfirst dive down the rabbit hole and come out the other side of economics.
So we have scaled data availability at a technical level. That is a protocol
improvement. But how does that connect to the market side of Ethereum? The two marketplaces,
or rather the one marketplace, that is now being fractured into two. Block space and blob space are
now two different independent markets that are each contained inside of an Ethereum block.
What does that mean for ether? What does that mean for the marketplaces that arise around these
things? How does the equilibrium of this supply and demand of each push and pull on each other?
What does this do for layer two scalability? What does this do for economic use cases on top of
layer two? There's so much
good stuff at the bottom of this conversation. We're going to start with the basics, the ones that
bankless listeners probably know all about 4844, just to lay the groundwork, but then we're going to
poke out the other end of the rabbit hole into the economic side of this conversation, which I don't
know if many people have even had before other than back channels and Harvard DMs and speculation.
Yeah, going through the implications is perhaps my favorite part. And this is one of my favorite types
of episodes that we do because it's both frontier and imminent. I mean, we're just,
talking months away from this upgrade. So it's right around the corner. It's very relevant
to our here and now lives. This is not sci-fi, actually. This is not sci-fi Ethereum stuff.
This is the near-term stuff. This is the stuff we get to look forward to. So guys, we're
going to get right to the episode with Domothy. But first we disclose. Nothing big to disclose today.
Both David and I hold Ether. You already know that. We are long-term investors. We're not
journalists. We don't do paid content. There's always a link to all bankless disclosures in the show
notes. All right, we're going to get right to the episode with Domothy. But before we do, we want to
thank our friend and sponsor Kraken, which is our number one recommended exchange for 2023.
Go check them out. Kraken Pro has easily become the best crypto trading platform in the industry.
The place I use to check the charts and the crypto prices, even when I'm not looking to
place a trade. On Kraken Pro, you'll have access to advanced charting tools, real-time market
data, and lightning fast trade execution, all inside their spiffy new modular interface. Kraken's new
customizable modular layout lets you tailor your
trading experience to suit your needs. Pick and choose your favorite modules and place them
anywhere you want on your screen. With Kraken Pro, you have that power. Whether you are a
seasoned pro or just starting out, join thousands of traders who trust Kraken Pro for their
crypto trading needs. Visit pro.kraken.com to get started today. MetaMask Portfolio is your
one stop shop to manage your crypto assets and to tap into DeFi all in one place. And the most
important part of that experience, buying crypto, obviously. MetaMask Portfolio's buy feature enables you to
purchase crypto easily without going through centralized exchanges. Designed with you in mind,
you can fund your wallet directly in just a few clicks with convenience and simplicity. What happens
when you press the buy button? Rather than being limited to a single payment provider,
Metamask brings together a bunch of vetted, trustworthy providers to present you with
customized quotes for your crypto purchase. Once you've funded your wallet, you'll be able to plug
into DeFi with all the money verbs like swapping, bridging, and staking. But first things first,
you need skin in the game. Head over to metamask.io slash portfolio to buy crypto, the easy way.
Arbitrum is accelerating the Web3 landscape with a suite of secure Ethereum scaling solutions.
Hundreds of projects have already deployed on Arbitrum One with flourishing DeFi and
NFT ecosystems.
Arbitrum Nova is quickly becoming a Web3 gaming hub and social apps like Reddit are also calling
Arbitrum home.
And now Arbitrum Orbit allows you to use Arbitrum's secure scaling technology to build
your own layer 3, giving you access to interoperable, customizable permissions with
dedicated throughput.
Whether you are a developer, enterprise, or a user, Arbitrum Orbit
lets you take your project to new heights.
All of these technologies leverage the security and decentralization of Ethereum
and provide a builder experience that's intuitive, familiar, and fully EVM compatible.
Faster transaction speeds and significantly lower gas fees.
So visit Arbitrum.io, where you can join the community,
dive into the developer docs, bridge your assets,
and start building your first app with Arbitrum.
Experience Web3 development the way it was always meant to be:
secure, fast, cheap, and friction-free.
Bankless Nation, we are excited to introduce you once again to Dom, also known as Domothy.
He is a researcher at the Ethereum Foundation.
He's working on research and development of some key Ethereum upgrades that are coming down
the pipe, including EIP 4844.
That's the subject of today.
Also, full danksharding and also MEV Burn.
We last had Dom on the podcast, in fact, to talk about MEV Burn.
And this time, we're bringing him on to describe this new
property that Ethereum will get post EIP 4844. And that is blob space. Yep, you heard that right.
Not block space, blob space. If you've not heard about blob space, you don't know what this means.
This is the episode for you. We're going to take you through the 101 of blob space and get all the way to the 400s level.
Dom, welcome to bankless.
Yeah, I'm happy to be here. Thanks for having me on. All right. So we're going to get into what blobs are,
blob space, how it's different from block space. And I think we want to do that in kind of
three different sections here. The first is history. We're going to talk about how we got here
and why roll-ups are really going first. And then we'll get into the technical of what blob
space actually is and then we can conclude with the economics, what all of this means. I just want
to give a quick TLDR at the very beginning of why we're doing this episode, why it's important.
And I think there's probably three reasons. Number one, blob space is a new resource on Ethereum.
Okay? And I think we think that this will be as important a resource as block space. That's key insight,
number one. Number two, blob space makes roll-ups really, really cheap and scalable. And that's going to
change everything we know about the Ethereum ecosystem. And number three, blob space is coming,
like possibly this year. Some estimates are possibly November. Of course, we don't have firm dates
from Ethereum core developers, but that might be a consensus bet. And we,
once it does, it's going to reshape just about everything economically, structurally, about
Ethereum. So that's why we're doing this episode right now and why it's timely. Dom, did I say
anything incorrect there? Is that a decent summary? Yeah, it's pretty much all correct. Before we get
into the history, Dom, can I just ask a very simple question? To understand blob space, is blob space just
block space for layer twos? Pretty much, yes. All the data that layer twos need to commit on chain
are going to go into these blobs, which is the new resource, as Ryan said, on layer one.
And the layer one doesn't know what's inside these blobs. It's just there to prove that layer two
committed it so then no one can cheat. So that's all it is. Blob space is just block space for layer
twos. And right now, layer twos are actually using Ethereum block space instead of blob space.
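What Dom describes can be sketched in a few lines: layer one stores only a fixed-size opaque blob plus a commitment to it, and never interprets or executes the contents. This is a toy illustration, using SHA-256 as a stand-in for the real KZG polynomial commitments of EIP-4844; every name here is illustrative, not Ethereum's actual API.

```python
import hashlib
from dataclasses import dataclass

# Real EIP-4844 blobs are 4096 field elements of 32 bytes each (~128 KB),
# committed with KZG commitments; SHA-256 here is only a stand-in.
BLOB_SIZE = 4096 * 32

@dataclass
class BlobTx:
    blob: bytes        # opaque rollup batch data; L1 doesn't interpret it
    commitment: bytes  # the only thing consensus really cares about

def make_blob_tx(rollup_batch: bytes) -> BlobTx:
    # Pad the rollup's batch data up to the fixed blob size.
    blob = rollup_batch.ljust(BLOB_SIZE, b"\x00")
    return BlobTx(blob=blob, commitment=hashlib.sha256(blob).digest())

def is_available(tx: BlobTx) -> bool:
    # A node checks that the data it downloaded matches the commitment
    # consensus agreed on, so the rollup can't swap in different data later.
    return hashlib.sha256(tx.blob).digest() == tx.commitment

tx = make_blob_tx(b"rollup batch: transfers, swaps, ...")
print(is_available(tx))  # True
```

The key design point survives the simplification: layer one can prove the layer two committed to the data without ever needing to understand it.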
And what we're doing with this new upgrade, EIP 4844, is we're partitioning off this new
resource called blob space, and we're making that cheap and very available for roll-ups. And so now they
can start consuming more blob space than they do block space. Is that about right? Yes. So, Dom, in order
to fully understand how we got here, how we got to blob space, I think it's worthwhile going back
into memory lane to understand the fullness of the Ethereum roadmap, because it came to a very
logical conclusion of blobs and blob space. So maybe we can go back in time, because at one point in
time, Ethereum's roll-up-centric roadmap was not a thing. We had this thing called execution sharding,
which we never actually got, but which we're kind of getting now.
Can you take us back to wherever in the history of Ethereum's roadmap is appropriate
to really understand the full context of blob space?
Yeah, so in the research space, it was always assumed that the solution to scaling a blockchain
was going to be some form of sharding in one way or another.
And back when we were at Proof-Work, it didn't really work to split the blockchain
into 64 or 2024 or whatever power of two number of mini-blockchains.
because each blockchain would have a very small fraction of the overall security.
So the plan was always proof of stake first.
And then we do this magical split into mini-blockchains that run in parallel,
where each validator is like randomly shuffled.
That was basically the general idea for execution sharding: to flip the trilemma of scalability on its head,
where instead of being limited by the lowest requirement node on the network,
now you would have more nodes equal more scalability.
which is, as we're going to see, what we're going to get with EIP 4844 and full
danksharding. So that's pretty much the research goal: to get to sharding. And then we had
more research discoveries along the way, but that was the earliest design to get sharding.
So, Dom, when you say sharding, that's basically just like splitting Ethereum into different pieces,
right? Like sharding it. Or any database. Yeah. It's a concept that
already exists in computer science for database management. Basically, it's sharded as in a lot of
smaller shards of the blockchain get much more scalability in parallel. So that was the goal,
but it turns out it's pretty hard to do that safely with execution sharding. And it was a very
debated area of research until roll-ups came. Yeah, I was going to ask you, and this is,
by the way, we're going down memory lane. So this is somewhere around, you know, 2019, I think.
Just about.
Yeah, like that was around the timeline where we sort of discovered that, oh, sharding in this way,
full execution layer sharding, is going to be really hard, and it might overly complicate the protocol,
and it might take us years to get there, and we might never be able to get there.
And so there was a lot of, I felt anyway, as not an Ethereum researcher, but like an Ethereum,
I don't know, advocate, user, investor, it felt a little bit sad
and hopeless back in 2019. Like we were never going to make this technology that we hoped would
be possible. Scaling blockchains was never going to happen because there were all of these
dead ends. Can you describe what the dead ends actually were? Why was this so difficult?
It was a very big upgrade that was planned. If you remember around 2019, that was the ETH 2.0
slogan where the goal was to have one big upgrade with proof of stake, with sharding, with everything along.
We were going to go from like the Stone Age of Ethereum to sci-fi Ethereum in one single upgrade.
We were going to get all the upgrades that we've gotten in the last like four years at once.
Yeah, so one big jump. And it turns out that's hard to do. There's a lot of complexity involved.
And it was split up into different phases. And then we said, okay, first we're going to launch a beacon chain,
then we're going to figure out how to actually merge it with the current execution layer.
And then we're going to do phase one, which is just data sharding.
So no execution, just all these smaller blockchains that are just going to contain data.
And then we're going to figure out how to do execution sharding.
So it was a lot of let's-figure-it-out-as-we-go, but also in a safe way, so we don't do something we regret
later and then break the whole blockchain, because there's a lot of economic activity going on
on it.
I want to keep on going and defining execution sharding, just because once you understand execution
sharding, the current roadmap of Ethereum, the roll-up-centric roadmap of Ethereum, is placed
into even better context, I think.
And so, from your article, Dom, that you wrote: execution
sharding was the original plan for sharding Ethereum. And I did like how you said that we were going
to get to sharding one way or another. We just didn't know how we were going to do it. We knew sharding
was the answer. We thought originally the Ethereum roadmap was like, okay, execution sharding.
Turns out that was a dead end and we picked a different path. Still ended up getting to sharding,
a different flavor of sharding, which we'll get to. But in the beginning, you wrote this line in your
article about execution sharding and the technical details about it. And you said, sharding is the
shuffling of validators, the Ethereum beacon chain layer one validators that we know today,
shuffling validators randomly across distinct shards of the blockchain, each shard essentially
being its own mini-blockchain running in parallel to the beacon chain, which sounds a little
bit like what we have today with roll-ups. But the difference here is that the shards of Ethereum
are actually a part of the layer one protocol. Something that has been in my mind lately is like
the relationship that a chain has with its own infrastructure.
And so execution sharding to me is like state-sponsored shards, as in the protocol, the layer one
protocol actually determines what the shards are.
And that stands in contrast to the roll-up centric roadmap that we have today.
But really, to me, execution sharding is: at first we started with 64 shards, with a planned upgrade to
148 shards.
But all 64 of these shards were going to be operated, managed,
and produced by the Ethereum layer one protocol.
And that's where we started. Am I articulating this correctly?
Yeah, exactly. So I would say that the way we're getting execution sharding, this way is more indirect with roll-ups and data sharding.
But it's kind of like a cheat code from a research perspective, because Ethereum layer one has much fewer things to do and worry about.
And then the rest is offloaded to roll-ups, which is, in my view, better than the original plan, like you said, state-sponsored shards where everything is the same,
with the same blockchain, same EVM, same tradeoffs about everything, which was like imposed on
users. And now instead of that, you can have roll-ups competing against each other to get the best
environment, the better trade-offs. Like, if you personally prefer like super speed over super
security, then you can go on a different roll-up and you have more choices and there's more
innovation and competition at layer two. There's so much to unlock here.
I know we're trying to explain blob space at the 101 level, but there's
so much kind of, I guess, stacked knowledge that feels almost necessary to explain going into this episode. We have a ton of resources at bankless to explain these various topics. But one I just want to touch upon is the three different layers of a blockchain in kind of this modular world that Ethereum is in. And this will be a recap for some bankless listeners: there's the consensus layer, there's the data availability layer, and then there's the execution layer.
The consensus layer defines what's true.
With Ethereum mainnet, by the way, all three of these layers are kind of combined into the same thing.
The consensus layer is arguably what a blockchain should really be focused on doing,
figuring out what's true in the most decentralized, corruption-resistant way possible.
What's true being the order of the blocks?
Yes.
Exactly.
The order of the blocks, exactly, and the processing of those blocks.
The next layer out is the data availability layer.
And that, you know, my rough approximation of what that is,
what's happened. So you've got what's true consensus, what's happened as kind of like the data layer
of a blockchain like Ethereum. And then you have the outside layer, which is kind of where all the
activity is. That's where all the action is. That's what's happening right now. That's execution,
as we define it. And the original Ethereum, 1.0 let's call it, combined all of those three things
on main chain in just one environment, couldn't be parallelized. And now what we're doing with
roll-ups, the roll-up-centric roadmap with Ethereum, is we are, I guess, sharding out. I'm resistant
to using that word, but maybe increasingly we will kind of use that word, sharding out execution
from the main chain into these roll-ups. But the roll-ups, in order for them to be still secured
fully with the similar security guarantees as Ethereum mainnet, have to post their data,
that is what's happened, back to the Ethereum mainnet, right?
in order to get kind of the consensus benefits and the data benefits.
And when they do that, it costs money.
It costs right now block space.
And it costs kind of a lot of money relative to what they're doing.
So EIP 4844 and what we're calling proto-danksharding,
the reason for this whole discussion is the economics change post this update
in a very roll-up favorable way.
But anyway, important for us to know that when we're talking about
consensus and data layer and execution,
all three of those are kind of separate layers
in this new Ethereum paradigm that we're moving towards.
Dom, David, do you guys have anything to add there?
I would say you got it pretty much correct.
I'll just insist more on a certain point
is that data availability right now is more implicit.
And it boils down to trustless verification.
We want everyone to be able to verify the chain by themselves
and not have to have a trust-me-bro third party in the middle,
which right now is the bottleneck, right?
You need to be able to verify everything,
which implicitly means you need to have the data available to you
to check the state transitions and execute the transactions
so that anyone who sends you money,
then you can verify that it's legit,
and they really did send you that money,
and it's not like on some other worthless fork or something.
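Dom's point, that verifying a chain implicitly requires having its data, can be made concrete with a toy sketch: to check a claimed post-state you must re-execute every transaction, which is impossible if the transaction data was withheld. All names here, and the hash-based "root", are illustrative stand-ins, not Ethereum's real state-transition function or Merkle-Patricia root.

```python
import hashlib

def apply_tx(state: dict, tx: tuple) -> dict:
    # A trivial balance-transfer state transition: (sender, receiver, amount).
    sender, receiver, amount = tx
    new = dict(state)
    if new.get(sender, 0) < amount:
        raise ValueError("invalid transaction")
    new[sender] -= amount
    new[receiver] = new.get(receiver, 0) + amount
    return new

def state_root(state: dict) -> str:
    # Stand-in for a Merkle root: a hash of the sorted balance map.
    return hashlib.sha256(repr(sorted(state.items())).encode()).hexdigest()

def verify_block(pre_state: dict, txs: list, claimed_root: str) -> bool:
    # Re-execute every transaction; this loop cannot run if the
    # transaction data is unavailable, which is the whole point.
    state = pre_state
    for tx in txs:
        state = apply_tx(state, tx)
    return state_root(state) == claimed_root

pre = {"alice": 10, "bob": 0}
txs = [("alice", "bob", 3)]
print(verify_block(pre, txs, state_root({"alice": 7, "bob": 3})))  # True
```

If the data behind `txs` were withheld, you could not distinguish an honest claimed root from a fraudulent one, which is exactly the trust-me-bro situation Dom describes.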
So let's talk about that for a minute.
So the roll-ups post to the Ethereum data availability layer
so that it's not a trust-me-bro type of situation.
Exactly.
What does that mean for a user inside of a roll-up?
Does it mean they always have a way to withdraw their assets from the roll-ups?
What kind of certainty, security does that give them?
Yeah, so at its core, we say that roll-ups inherit the security of L1,
which is a very quick tagline for roll-ups.
And the way it works is you have the layer one contract that bridges asset between Ethereum
and roll-ups.
And ideally, this smart contract can and should, in the future,
be fully immutable and not upgradable, which is why layer one is going to enforce some
constraint on what these layer two blockchains can commit to layer one so that they can't steal
assets. And if they censor you, then you can still rely on the layer one contract to enforce,
like withdrawing your asset or forcing a transaction to go through on layer two.
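A minimal sketch of the bridge idea Dom describes: the layer one contract only releases funds against a state the layer two actually committed, so the operator cannot invent balances. Real rollup bridges verify Merkle inclusion proofs plus fraud or validity proofs; the direct hash check below is a simplified stand-in, and every class and method name is hypothetical.

```python
import hashlib

def root_of(balances: dict) -> str:
    # Stand-in commitment to an L2 balance map (real bridges use Merkle roots).
    return hashlib.sha256(repr(sorted(balances.items())).encode()).hexdigest()

class L1Bridge:
    """Toy model of the L1 contract that escrows assets bridged to an L2."""

    def __init__(self):
        self.escrow = 0
        self.committed_root = None

    def deposit(self, amount: int):
        self.escrow += amount

    def commit(self, root: str):
        # Called when the L2 posts its state commitment to layer one.
        self.committed_root = root

    def withdraw(self, user: str, amount: int, claimed_balances: dict) -> bool:
        # The "proof": the claimed balance map must hash to the committed root,
        # so the L2 operator can't release funds against a state it never posted.
        if root_of(claimed_balances) != self.committed_root:
            return False
        if claimed_balances.get(user, 0) < amount:
            return False  # can't exit more than the committed balance
        self.escrow -= amount
        return True

bridge = L1Bridge()
bridge.deposit(100)
bridge.commit(root_of({"alice": 60, "bob": 40}))
print(bridge.withdraw("alice", 60, {"alice": 60, "bob": 40}))  # True
print(bridge.withdraw("mallory", 50, {"mallory": 50}))         # False
```

The forced-exit guarantee falls out of this shape: as long as the committed state and its data are available, a user can always construct the withdrawal themselves, with no cooperation from the L2 operator.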
This is why, by the way, I think Dankrad said recently that his definition of an Ethereum
roll-up is a roll-up that hosts its data on Ethereum. And if it doesn't do that, then it's
something else. It's not an Ethereum roll-up. Maybe we use the term validium or something else for
that. Do you subscribe to that way of defining an Ethereum roll-up, Dom? He actually said this about
layer two, not roll-ups. The roll-up term is very specific. That is what he means. He says layer twos
are roll-ups. And there is no layer-two thing that isn't a roll-up, according to Dankrad. And he means that
as in, like you mentioned, validiums that have data availability
off-chain, not enforced by Ethereum. And once you leave this Ethereum bubble of security,
then you have more trust assumptions regarding the data availability, which leads to a sort
of semi-trust-mebrose situation. If you want to exit your assets, then you need to get the data
from somewhere. And if it's not Ethereum itself, it's going to be someone else, then you have to
rely on them to give you that data. Not only do you have to rely on them, but you also have to rely on
the mechanism, right? So you could post your data on Bitcoin
if you wanted to, and Bitcoin's super trustless, but then you need to trust whoever's facilitating
that relationship. Typically, we call that a bridge. Just wanted to add that nuance there.
There's another quote in your article, Dom, from Vitalik, that I'm hoping you can kind of put
us into the shoes of. Tall order, I know. The quote that you put in there is: it seems very
plausible to me that when full execution sharding finally comes, essentially no one will care about it.
everyone will have already adapted to a roll-up-centric world, whether we like it or not.
And by that point, it will be easier to continue down that path than try to bring everyone back to the base chain for no clear benefit and a 20-to-100-X reduction in scalability.
Basically, we could do the complex thing and scale layer one execution, but all that would really achieve is that roll-up sequencers would be like, oh, cool, more data, and then barely touch the execution that we took so long to shard.
This is Vitalik saying that Ethereum layer one was always destined to only ever be for settlements, even if we sharded layer one execution anyways.
Can you kind of just put us through the thought process of Vitalik?
Why did we know that, even if we did do execution sharding of the layer one, we would still ultimately end up at the roll-up-centric roadmap anyways?
The idea is that roll-ups can afford to make sacrifices that layer one can't, because layer one enforces constraints on layer two with the bridge contract
that I mentioned earlier. So we just can't achieve scale on the order of
magnitude that roll-ups can, even if we shard execution at layer one. And the point of the Vitalik quote
is that we're going to spend so much time doing execution sharding, and it's not going to matter
because roll-ups are still going to be much more scalable. And even if we do scale execution,
that also means scaling data, because like I said earlier, to check execution, you need to
have the data. So scaling execution implies scaling data, but not the other way around. So to get back
into context, scaling layer one just means it scales layer two exponentially. So whether we want it or not,
roll-ups are here and they're going to be much more scalable than layer one. So the pragmatic thing to
do is to just go down that path and work with roll-ups rather than like try to be some kind of
layer one maximalist that says, no, you stay on my monolithic chain that I spent so much time sharding,
and everyone's going to say, well, it's so much faster on layer two.
Isn't this related to the dynamic I was talking about earlier
where execution sharding is like the state-sponsored version of sharding
where every single shard is totally homogenous?
They're all the same size.
They all go at the same speed.
And then what you're saying is like, well,
and Vitalik is saying in this quote,
is like even if we have that world of execution sharding,
they're still too rigid.
The shards are still too homogenous.
We want more pluralism, a core value of Ethereum.
We want more pluralism in our roll-ups.
And so even if we do execution sharding,
what we get out of that is still too rigid.
We can still get more expressivity,
more optionality for roll-ups anyways
on top of execution sharding.
So we might as well just lean into that.
Am I connecting the right dots here?
Yeah, it's related.
But the quote from Vitalik was really about scalability,
because even an EVM roll-up
is going to be much faster than a sharded layer one.
One thing I find interesting about this roll-up-centric approach,
which hasn't been mentioned, but I think kind of implied with what you were saying, David,
is the original version of Ethereum was very top-down central planning, right?
It's kind of like the Ethereum Protocol is, you know, expanding main chain in all of these different directions.
And that, of course, has some benefits, but we've discussed many of the tradeoffs,
versus the roll-up-centric roadmap is very bottom-up, like, free market, developing its
vision going off in many different streams at once.
The Ethereum Foundation is not developing roll-ups.
Yeah, and there's something interesting with the analog that we so often use of, like, you know,
nations and, you know, how they're built.
And, you know, the first model is, how much should a government actually do, right?
It's kind of like an open question in different societies, I feel like have different answers
to this question.
But the extreme answer almost never works in any organized society, which is, like,
the extreme answer is, the government should do freaking
everything, right? Like make your clothes and run all the companies and like do all the banking and like do
everything. Okay. And like I think the most well-functioning societies are those that do just the
right amount of, you know, involvement. The Goldilocks government. Yeah. It's kind of like this
Goldilocks zone. And of course, you know, there's probably an Overton window of like what sort of
works there. But this was a theorem trying to like figure out how much work kind of the central government
should effectively do here.
And where it's left off is we're just going to be,
we're going to focus on settlement.
We're going to focus on this consensus layer
and then the outer ring kind of giving a spot for settlement
for cheap data availability.
And we're going to let the free market,
the private market,
all of these experiments run in parallel in the roll-up world.
And I feel like this is almost like a hidden genius
that I didn't see at the time.
You know, 2019, the kind of the pivot
from ETH 2.0 felt to me like a crushing blow. I almost felt a little bit like Fred Wilson,
the notable VC, who said at that time, Ethereum has failed to execute. I felt a little bit of that.
Like, oh, okay, so we're giving up on the dream because it's too hard. And all of this, like,
research about sharding was for naught. And now I look back at it in 2023. I'm like,
holy shit, that was genius. I can't believe we pulled that off. And I don't even know if all of this
was intentional or like, you know, there was foresight or we just kind of got lucky with how this
experiment has played out. But the amount of innovation I see in the roll-up world and all of these
experiments being pursued in parallel. And by the way, funded. Funded adequately. I'm putting it
mildly, but like funded well by like VCs and token incentives and all of these different things
have really like accelerated Ethereum development to warp speed. Anyway, I don't know if there's
anything there, Dom, that you can kind of glom onto and respond to, but that's certainly an
observation that I've had recently. Yeah, it's a very astute observation. But in terms of being
intentional or not, I would say, it was kind of forced upon Ethereum researchers, because
many points we've already touched on: execution sharding is complex. And the way we're doing it,
with danksharding plus roll-ups, it's kind of a cheat code, but a cheat code that's going to happen
whether we want it or not. And then we just lean into that. And as you said, it turns out that it's
like one of the best ways to scale a blockchain if we want everyone in the world to
have access to a scalable, trustless environment. Okay, so that's a little bit of the history
and how we got here. And I think, I hope we explained, gave you a tour de force on how we got here
to a roll-up-centric roadmap. Now, take us to the current state, if you will, Dom. So right now,
we do have many roll-ups. And right now, these roll-ups, these Ethereum roll-ups, many of them,
do use Ethereum, I guess all of them, if you take Dankrad's explanation, do use Ethereum for
data availability. That means they post kind of the fraud-proof type data back on Ethereum
Mainnet, and they consume block space in doing that. It was a few days ago I checked, so
these numbers will be somewhat inaccurate. But I think layer twos, according to ultrasound.money,
consume about, you know, say 200 to 500 ETH worth of block space on the daily. All right?
And this is them paying for data availability back on Ethereum. So tell us about the current state.
What's wrong with that?
That sounds pretty good.
We've got roll-up-centric roadmap.
We've got layer two experiments.
The fees are fairly cheap across layer two's, I think.
I mean, I don't know what you guys have seen recently,
but on the order of cents, you know, for most of them.
What's wrong with our current state?
What's wrong with it is that a few cents is still too much
for scaling blockchains worldwide.
Like Vitalik said years and years ago,
the internet of money should not cost five cents or 50 cents.
I forget the exact quote.
And it's something that people make
fun of him for, because of layer one gas fees. So the current state of things is like you said,
we do have roll-ups and they do use layer one's data availability layer, which is like an implicit
layer, as we'll see later. And it's still very expensive because of two things.
One: the cheapest way to commit data on chain right now is calldata, which costs
about 16 units of gas for every byte that they commit on chain. It goes into the gas fee market,
which can be very expensive, as we see when, like, NFT drops happen and everyone's using the chain.
That makes it more expensive for roll-ups.
As gas goes up,
the cost of putting one byte on chain is still 16 gas, but the price of that gas goes up too.
And it's also limited.
The block sizes have to stay limited because of the problem of scaling a blockchain.
But the cheapest nodes have to be able to verify the chain, and that's why blocks have to be small.
And there's no cheaper way to put data on chain right now for roll-ups.
It's too expensive, it's limited, and we've got this weird kind of resource issue.
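To put rough numbers on that, here's a toy sketch of what posting rollup data as calldata costs. The 16-gas-per-byte figure is the one mentioned above; the batch size and gas prices are purely illustrative:

```python
# Rough sketch of rollup data-availability cost via calldata.
# Only the 16 gas/byte figure comes from the discussion above;
# batch size and gas prices are made-up illustrative numbers.

GAS_PER_CALLDATA_BYTE = 16

def calldata_cost_eth(num_bytes: int, gas_price_gwei: float) -> float:
    """ETH cost of posting `num_bytes` of rollup data as calldata."""
    gas = num_bytes * GAS_PER_CALLDATA_BYTE
    return gas * gas_price_gwei * 1e-9  # gwei -> ETH

# A 100 KB batch at 30 gwei vs. 200 gwei (e.g. during an NFT drop):
cheap = calldata_cost_eth(100_000, 30)
spike = calldata_cost_eth(100_000, 200)
print(f"{cheap:.3f} ETH at 30 gwei, {spike:.3f} ETH at 200 gwei")
```

Note how the same batch of bytes gets several times more expensive purely because unrelated execution demand pushed the gas price up: that is the resource-coupling problem in miniature.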
Coupling. Yeah, coupling. Yeah, it's all coupled. Resource coupling issue. That's a great word for it, David. And so
what happens is, you know, 100,000 users on a roll-up are competing with like 200, you know,
NFT bidders on main chain, and they're all competing economically for the same block space. And, you know,
it drives the prices up for the 100,000 users. So there's this resource coupling
problem as well. Those are the three problems with the current state. Yes. So the resource coupling problem
is kind of a weird thing economically, but there's a technical reason, historical reason why
Ethereum came to couple everything into a single unit of gas. But it's weird when you think about it
because these 200 NFT bidders are using execution mainly to send money and settle an NFT trade,
whereas these roll-ups are using data. So it's two completely different resources that shouldn't be
linked together. They should each have their own supply-and-demand market. But when
one goes up, the other goes up as well. So that's kind of the problem of the current state with roll-ups.
So not only are these resources coupled, but what you're saying, Dom, is that these resources are
inappropriately coupled, as in they don't have to be coupled. One is using data, the other is
primarily using execution. And these are two different resources that just because everything is
contained inside of block space, that these things are coupled and inappropriately so.
Yes, that's exactly it.
Okay, so Dom, we're going to get to what is about to be my favorite question that is ever about to be asked on the bank list podcast. Are you ready for it?
Yeah.
Dom, what's a blob?
So you want the technical answer? Because we already hand-waved it.
A blob is just a piece of data that roll-ups can put onto main chain.
Yeah, I think it's time to get technical. Okay, so I'll ask it again. Technically, what's a blob?
Technically, a blob is 4,096 field elements that are just under
32 bytes each. So that's the playground that roll-ups have to put data in, and they have to abide by
this format. What is the difference between a block and a blob? The block contains basically everything
about execution, and I don't know how to explain this. It's like it contains the transactions, and then
you execute the transactions and you see that I sent you one ETH, and that changes the state of Ethereum
layer one. It's like I have one less ETH and you have one more, because we executed
that transaction on chain, and everyone agrees that this transaction happened before and after
some other transactions.
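For concreteness, the blob dimensions Dom just gave work out like this. The 4,096-elements and 32-bytes figures come from the episode; the arithmetic is just a quick sketch:

```python
# Blob dimensions from the discussion: 4,096 field elements,
# each just under 32 bytes of usable data.
FIELD_ELEMENTS_PER_BLOB = 4096
BYTES_PER_FIELD_ELEMENT = 32

blob_bytes = FIELD_ELEMENTS_PER_BLOB * BYTES_PER_FIELD_ELEMENT
print(blob_bytes, blob_bytes // 1024)  # 131072 bytes, i.e. 128 KiB per blob
```

So each blob is roughly 128 KiB of data riding alongside the block, which dwarfs the few tens of kilobytes a typical execution-layer block carries.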
Wait, so we have blocks.
Yeah.
And then we have blobs.
Yes.
And then we have, like, data of transactions.
Transaction data goes into block space.
Blobs go into blob space.
Yes.
And everything is contained in a block, correct?
Yes.
And so the new thing is this new blob space.
So we have block space.
And I think people might get, and I'm getting confused, but I'm trying to parse it apart:
we have blocks and we have block space and we have blob space.
But blob space and block space are spiritually equivalent and both of them are going into blocks.
And so there's this weird dynamic where blobs and blocks
have, like, almost the same name.
But blob space and block space are shoulder to shoulder with each other,
equally contained by a block.
Yeah, so we're getting into metaphor land.
On the consensus layer, what we say is that these blobs are in some sort of sidecar alongside the block.
So the block can stay small and then there's the big blobs next to it that are not exactly contained in the block,
but the block contains a reference to the data, which is what roll-ups are going to use to do all sorts of zero-knowledge magic and that sort of stuff.
I think one of the first learnings here, the first takeaways in this section,
because it's going to take us a few minutes to really understand what the heck a blob is and what the heck blob space is.
But you know that thing that we say on bank lists so often is blockchain sell blocks?
Well, I think, David, we have to amend that.
Oh, no.
Blockchains sell blocks and blobs.
This is a new resource, I think, is the key insight, that Ethereum is kind of going to market with, if you will, after EIP-4844, which again could happen this year.
It's not only selling block space to the market, it's also selling blob space.
is a product tailor-made for roll-ups. What does it do for roll-ups? It gives them a space to park
all of their beautiful fraud-proof data. So beautiful. A data availability layer, so beautiful, and
incredibly cheap. So it's a much more efficient resource to get done what roll-ups need to get
done, which is posting these proofs into Ethereum block space, I guess. But it does it.
Ethereum is now in the blob business, baby.
Dom, you said that a block references a blob, but that a blob isn't in a block.
Is that correct?
And can you elaborate?
Yeah, that's the metaphor.
The blob is the sidecar that's alongside the block.
And then basically one key insight is that, like I said earlier, if I send one ETH to you on layer one,
every single node has to compute the transaction to verify my signature, access the state of your account and, like, subtract a number from me,
add a number for your account. And that's the very expensive thing. That's kind of the bottleneck
that doesn't scale very well. And the way that roll-ups scale is that I send you one ETH
on a roll-up, and that's cool. The roll-up is going to batch that alongside a lot of other transactions
onto layer one inside of a blob. And now, layer one doesn't care about the data inside that blob.
That's the roll-up's business, right? So I send you one ETH and that doesn't impact layer one nodes.
They're just going to see the data, say, okay, the data's there, it's cool.
Everything that's happening on the roll-up is legit.
But there's no expensive computation at layer one happening for layer two transactions
other than this very small transaction to verify, like, a proof or update the Optimism state root
every time a sequencer commits a transaction.
So that's why the biggest resource they need is data, and it's how we scale the whole blockchain.
Right.
Okay.
So the way that I understand, the metaphor I give to block space, is that block space is like a container of data. And if I send Ryan some ether, that's a very small bit of data that I'll throw into block space, start to fill it up. If I send Ryan an NFT, that's an even bigger bit of data. If I mint 13 NFTs at once, that's an even bigger amount of data. And that fills up block space. Block space is like this bucket that you fill with different sizes of transactions. Eventually, it becomes full. Is blob space like that? As in, like, there's a container
that is filled, or does it operate with different properties?
It's similar, but it's a decoupled market, too.
So there's like this other different bucket that only gets filled with data.
The layer one doesn't care.
It's very agnostic about who posts what into these blobs.
All it cares about is getting these blobs out there for whoever needs them to commit to them
and download the blob content.
This is what layer one enforces.
Okay.
So post-EIP-4844, when we introduce blob space,
the data size of aggregate Ethereum blocks will be blob space plus block space, correct?
Yes.
Okay.
Let's look at this through the different lenses of kind of actors and stakeholders in the Ethereum ecosystem.
So I want to go through validators in a second, but let's first start with roll-ups.
So why is blob space a better product than block space for roll-ups?
Is it just because it's cheaper?
or what other properties does it have that makes it appealing and better for roll-ups to consume?
From the point of view of a roll-up sequencer, it really boils down to being much cheaper and much more plentiful.
Layer 1 is going to do more technical stuff to scale this blob space, but from the point of view of a roll-up, it doesn't really matter.
It's just data for them. That's all they need, and that's all they care about.
And how do they sort of activate it? Is there anything that they need to do on their side? Is it basically like, so all of these
proofs that they were posting, and buying block space in order to prove, they just switch over?
So it's just like rather than using, I don't know, electricity for your furnace, you're using now natural gas.
You just use a different resource because natural gas is so cheap.
But like, you know, the system works the same way. You still get your heat.
That's what they're doing here, yes?
Yeah, exactly. So after 4844, they'll have to update to support these blobs and stop posting everything on layer 1 in the expensive calldata section of
blocks. Is there anything that they'll lose by posting, like, doing this via blob space versus block space?
Or is it just basically kind of an equivalent substitutionary good? Like, it's just as good for
roll-ups. It's just as good and even better for them because it's cheaper and more plentiful.
Can I put a transaction in blob space? Can I, like, send Ryan some ether in a blob? Or, like,
what prevents me as a layer one user from consuming blob space? Nothing prevents you. It's a
permissionless system. So it's tailored for roll-ups, but of course you can put any data inside
layer one blobs as you please, but it's up to you if you want to pay for that. It'd be more expensive.
Maybe a bad idea. I'm predicting that there will be some NFT project where the JPEGs are
inside blobs at first, because blobs are going to be pretty cheap and plentiful. Interesting.
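As a sketch of what "putting a JPEG in a blob" might look like: each 32-byte field element has to stay below the field modulus (which is why elements are "just under" 32 bytes), so one safe approach is to use only 31 bytes of each element. This is a hypothetical packing scheme for illustration, not the format any particular project uses:

```python
# Hypothetical sketch: pack arbitrary bytes into one blob's field elements.
# Using 31 bytes per 32-byte element keeps each value below the field
# modulus (hence "just under 32 bytes each" in the discussion).
FIELD_ELEMENTS_PER_BLOB = 4096
USABLE_BYTES = 31

def pack_into_blob(data: bytes) -> list[bytes]:
    """Spread `data` (e.g. JPEG bytes) across 4,096 elements, zero-padded."""
    max_len = FIELD_ELEMENTS_PER_BLOB * USABLE_BYTES  # 126,976 bytes
    if len(data) > max_len:
        raise ValueError("data too large for one blob")
    elements = []
    for i in range(FIELD_ELEMENTS_PER_BLOB):
        chunk = data[i * USABLE_BYTES:(i + 1) * USABLE_BYTES]
        # Lead with a zero byte so the 32-byte value stays below the modulus.
        elements.append(b"\x00" + chunk.ljust(USABLE_BYTES, b"\x00"))
    return elements

blob = pack_into_blob(b"degen jpeg bytes go here")
print(len(blob), len(blob[0]))  # 4096 elements, 32 bytes each
```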
Wait, okay, so there's a phenomenon in the NFT world about like NFTs that are on chain,
like Cryptopunks and MFers and a few other NFT projects.
What's the famous one? The generative one, Autoglyphs, are like the big on-chain art.
Does blob space just simply allow for more, and perhaps as one use case of blob space,
of which I'm sure there are infinite, is it like we can just put more NFTs, more JPEGs on chain?
Yeah, I'm predicting there will be a lot of degen stuff happening with blob space.
Because it's going to be very cheap at first.
Oh, no, we don't want that, though.
We want the blob space for our roll-up transactions. We don't want to create another degen market. Is that
not the point of resource decoupling?
Yeah, it's tough, because if they want to put data on chain, we can't stop them, right? They could just pretend to be a roll-up. Yeah. Yeah, we can't be top-down about who gets to use what, from a credible neutrality standpoint at layer one. And I don't think there's a way to enforce that only roll-up sequencers use blob space. But probably the properties of blob space are such that they are most conducive to roll-ups. Yeah. So one example, Dom, isn't there like an expiry on the blob space? And this brings us to kind of the, we'll talk about the expiry, but this brings us to the second lens I wanted to talk about, which is from a validator
or staker's perspective. Like, I'm running a node and I'm staking. Does this, the introduction
of blob space change anything for me? Does it increase the requirements of the validating machine
that I have to run? Do I now have to store, in addition to all of this block space, do I now have to
store blob space? Maybe those questions are related. Yeah, so with 4844, it's going to go up a little bit.
You have to download and serve these blobs, all the data that's inside the blob, but you don't have to
store it forever. So that's what you mentioned. Blobs will expire after right now the specs say
about 18 days, I believe. And after that, it becomes like, it was attested that it was available
during those 18 days. But if it expires, then you're not going to be able to get it from a node
if the node pruned it. So you have to get it from some other source. And then you'll be able to
see that it was available on chain during that time. So this might be a reason why it's not as
conducive as block space is to an on-chain NFT, although, I don't know, there may be ways around this.
But why is this 18 days not a problem? So why isn't that a problem for layer twos?
It's like if the data goes away after 18 days, aren't I losing the property?
Putting data away from blockchains is blasphemy. How dare you, Dom?
Yes. Yes. How could you?
So this is kind of the weird thing with the name data availability, because it's not data
storage, and it's not continued data availability forever. So it's kind of the opposite of
it. There's a big problem with storing everything by every node forever: it doesn't really scale
if you pay once to get your NFT JPEG on chain and then it's stored by everyone forever
for all of eternity with the data just going up and up only forever. So instead, we use blob expiry
to just ensure that the data was available once it was published and available to be downloaded
by everyone. And this is also, from the lens of roll-ups: if
you're 100 years from now and you want to sync the Optimism state from scratch, you'll have to
get the blob data from somewhere. But then once you have the data, nobody can lie to you and say,
oh, 100 years ago, I had 1,000 ETH that came from nowhere, and then here's the data. You can check that
data and you can not just check the integrity of that data, but check that its availability was
enforced by Ethereum long enough for anyone to snitch on an invalid transaction on optimism.
So you'll have these properties of being able to verify
the data yourself. It's just that Ethereum is not the one that's going to serve you that data
forever for free. Okay, so data availability in contrast to data storage. Availability as in
Ethereum is making this commitment to the world around it, that it is enforcing that data
is sufficiently available for the surrounding universe to be able to take and do what they
need with that data in order to do whatever they want to do. And we've chosen 18 days as some
time length that data is being made available to individuals, to node operators, to other
layer two infrastructure providers, to other protocols like file coin or data availability
solutions like Celestia, anyone to take the data that Ethereum has made available to the
world, hence data availability for 18 days, and then that data has gone from Ethereum to some
more long-term storage
solution, which there
are so many of, it's not even
worthwhile for me to list them off, but we can
start with data availability solutions
like Celestia, EigenLayer data
availability, something else.
Roll-ups themselves. Your hard drives, like roll-up
operators themselves. Yeah. And so we're just
making this statement that, like, so long
as we can say that Ethereum enforces
data availability of all
blob data for 18
days, then the
universe is going to be able to do
what they need to do with that data in order to have a fully trustless chain of data that goes
back to Genesis. Is this correct? Yes. Yes. So the goal is to really have lightweight
nodes be able to verify everything, including the availability of what's been published in a scalable
way. And also another thing to point out is that this blob data is not like part of the state
where you need to read and write to it many, many times a second. You can just throw it on cheap hard
drives. And if you're like a hobbyist or any stakeholder in a specific roll-up, you can just
download the data you need from the roll-up you're using. And then you can just store it for
pretty cheap on like hard drives and data storage is just a thing that's continuously getting
cheaper and cheaper over time. Right. Like notably like terabytes of data are like under $100
right now. So maybe we can theorize that in the future there will be some like software
application that you run on your desktop computer and you link it to like your main addresses
and then it follows your addresses around some pre-selected layer twos like optimism and
ZKSync and Arbitrum and then it looks at your addresses and then automatically downloads all
of the blob data for your addresses that you've specified and then it just stores that on
your personal computer and then you can be the one that verifies the history of blob space
in addition to the many other data storage solutions
that may also be downloading and saving the same data.
Is this perhaps a version of the future?
Yes, and also roll-ups can have their own designs
to incentivize storage of their own data
if it's something that's very important
to that particular roll-up in its community.
So the design space is infinite.
So in addition to pushing execution out to the free market
and making layer two teams optimized for execution,
we're also just pushing data storage out to the free market
as well. Yes. So one metaphor that I've heard Vitalik use is that Ethereum is supposed to be more
of like a billboard where you can have information in real time about what's on the billboard,
but once it's, like, erased, all you can do is verify that this information was on the
billboard, available to everyone to see, back when it was published. Yeah, that's kind of the image
I get from Ethereum as well. It's just, it's kind of like a, you know, present moment of time
type of consensus mechanism. It's not trying to store the full consensus, like truth of the universe.
The full state of the universe, yeah. Inside of this computer. Yeah, it's more like kind of like a hurricane or it just kind of like...
The tip of the chain. Yeah, it twirls around. And so that's this distinction between data availability and data storage. So Ethereum is trying to be a data availability computer, but not a data storage computer. And so this 18 days is also significant because there does have to be some reasonable time period here, right? Like, it can't be like 15 minutes
or like two hours or even 24 hours
because there are things like with roll-ups, right?
We have kind of like optimistic roll-ups,
kind of a seven-day proof challenge type window, right?
And that's why it has to be, you know, greater than like seven days
or just, you know, 18 days feels like an okay time horizon.
But it shouldn't be like six months because that gets into data storage
and it shouldn't be like, you know, an hour
because that's not enough time for all of the data storage
and fraud-proof type solutions to kind of react.
Is that accurate as well?
Yes.
So it basically just has to be long enough for anyone who needs that data to download it.
So that's why I said earlier, it's kind of a cheat code for execution sharding,
because then you have these roll-ups posting blobs.
And if it's a roll-up you don't care about, then you just don't download the blob.
You're going to see that it's there. You're going to, as a layer-one validator
and layer one node, you're going to participate in securing that roll-up.
But even if you don't care about it, you don't download that blob.
And that's it. It's like the original plan with these mini blockchains where you would only care about the blockchain that has your funds, like the one shard that has your funds, and you don't care about the others. But this is even better because then the security is not split around these mini blockchains the same way it would have been with the original sharding plan.
Just to tie off this question, you said the requirements for a validator do increase a little bit. Like, what are we talking? With proto-danksharding, I did the math in my blog post with the current
size of a blob and the target number of blobs every block.
We're looking at about 50 gigabytes of extra storage by every node.
But this is for 4844.
After full danksharding, it's going to go down somehow.
Blobs are going to be bigger, and the requirements are going to be lower for every node,
which is the magic of sharding.
So the goal of Ethereum in general is to be able to run a validator on consumer-grade hardware, right?
And we meet that goal right now.
Like you can run a validator on a Raspberry Pi.
And what you're saying is this blob space change increases the hardware requirements by about 50 gigs of hard drive space.
So doable, I think, you know, on most consumer hard drive rigs.
It's not nothing, but I mean, 50 gigs, like you get, you know, thumb drives for, I don't know, 10 bucks that are 50 gigs these days.
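Dom's ~50 GB figure can be reproduced with back-of-the-envelope math, assuming a target of 3 blobs per block, 12-second slots, 128 KiB blobs, and the ~18-day expiry discussed earlier (the exact spec parameters may differ):

```python
# Back-of-the-envelope check of the "~50 GB extra storage" figure.
# Assumed parameters: 3 target blobs per block, 128 KiB per blob,
# 12-second slots, ~18-day expiry window.
TARGET_BLOBS_PER_BLOCK = 3
BLOB_BYTES = 4096 * 32          # 128 KiB per blob
SLOT_SECONDS = 12
EXPIRY_DAYS = 18

slots = EXPIRY_DAYS * 24 * 3600 // SLOT_SECONDS          # slots in 18 days
extra_bytes = slots * TARGET_BLOBS_PER_BLOCK * BLOB_BYTES
print(f"{extra_bytes / 1e9:.1f} GB")
```

That lands right around 51 GB of blob data held at any one time, pruned as blobs age past the expiry window, which matches the "about 50 gigabytes" estimate above.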
Mantle, formerly known as BitDAO, is the first DAO-led web3 ecosystem, all built on top of Mantle's first core product, the Mantle Network, a
brand new high-performance Ethereum Layer 2 built using the OP Stack, but uses EigenLayer's data
availability solution instead of the expensive Ethereum Layer 1. Not only does this reduce Mantle
network's gas fees by 80%, but it also reduces gas fee volatility, providing a more stable
foundation for Mantle's applications. The Mantle treasury is one of the biggest DAO-owned
treasuries, which is seeding an ecosystem of projects from all around the Web3 space for Mantle.
Mantle already has sub-communities from around Web3 onboarded, like Game 7 for Web3 gaming,
and Bybit for TVL and liquidity and on-ramps.
So if you want to build on the Mantle network,
Mantle is offering a grants program
that provides milestone-based funding
to promising projects
that help expand, secure, and decentralize Mantle.
If you want to get started working with the first DAO-led
layer-2 ecosystem,
check out Mantle at mantle.xyz
and follow them on Twitter at 0xMantle.
You know Uniswap.
It's the world's largest decentralized exchange
with over $1.4 trillion in trading volume.
You know this because we talk about it endlessly on Bankless.
It's Uniswap, but Uniswap is becoming so much more.
Uniswap Labs just released the Uniswap Mobile Wallet for iOS, the newest, easiest way to trade tokens on the go.
With the Uniswap wallet, you can easily create or import a new wallet, buy crypto on any available exchange with your debit card, with extremely low fiat on-ramp fees, and you can seamlessly swap on mainnet, Polygon, Arbitrum, and Optimism.
On the Uniswap mobile wallet, you can store and display your beautiful NFTs, and you can also explore Web3 with the in-app search features, market leaderboards, and price charts.
use WalletConnect to connect to any Web3 application. So you can now go directly to DeFi with
the Uniswap mobile wallet, safe, simple custody from the most trusted team in DeFi. Download the
Uniswap wallet today on iOS. There is a link in the show notes. Celo is the mobile-first,
EVM compatible, carbon negative blockchain built for the real world. And now something big is happening.
Introducing the Celo Layer 2. It's a game-changing proposal that's going to bring Celo's rapidly
growing ecosystem home to Ethereum. Vitalik has shared his excitement for the Celo Layer 2 on the
Celo forum, and so has Ben Jones from Optimism. But why? The Celo layer two will bring huge
advantages, like a decentralized sequencer, off-chain data availability, and one block
finality. What does all that mean? Rock solid security, a trustless bridge to Ethereum, and more
real-world use cases for Ethereum without compromise. And real-world adoption is happening.
Active addresses on Celo have grown over 500% in the last six months. With the Celo layer two,
gas fees will stay low and you can even pay for gas using ERC-20 tokens. But
Celo is a community-governed protocol.
This means that Celo needs you to weigh in and make your voice heard.
Join the conversation in the Celo Forum.
Follow @CeloOrg on Twitter and visit celo.org to shape the future of Ethereum.
Dom, there are two other lenses I just want to put this through.
One is developers, which I think a lot of the development, a lot of the app building will kind of migrate to layer twos.
And probably the obvious answer, the good news for them is, you know, your application can support more transactions per second because they'll be a whole lot cheaper.
Go have fun, right? That's probably the answer for developers. How about users? Is there anything
that they, like how will they experience this new blob space world, this post 4844? I know David's
going to try to send me some ETH using blob space somehow. I don't know if he'll be able to do that,
but maybe. Not really. The typical user, how will they be able to do it? The typical user,
hopefully all the technical stuff is abstracted away from them. It's more of the heavy users who really
insist on having trustless verification of everything to secure their funds themselves with their own keys.
And this is what BlobSpace allows them to do. They just keep that data, only the data relevant to them about their funds, where it's stored on the state, how to move it, stuff like that.
So the overall trustlessness and permissionlessness are not sacrificed from a blockchain point of view.
Yeah, I guess this will just accelerate the migration from Mainnet to roll-ups as well. I'm sure we could probably predict that this
economic change will result in that.
And you quickly mentioned developers.
I would also like to add that right now,
developing on layer one is kind of a weird thing, because block space is so limited
that as a developer, you have to do all sorts of fancy tricks to use less gas for the same
transaction.
But on layer two, you're going to have so much more block space on layer two blockchains
and roll-ups that you're not going to have to worry about these fancy tricks to use less
gas, because there's just going to be way more bandwidth on layer two as a developer.
Dom, you just said as a developer
that, you know, block space, and you use that term block space.
Yeah, for layer two.
For layer twos, okay.
So layer twos, basically, are almost like a value-added reseller of blob space.
Yes.
So they take this blob space.
It's now a lot cheaper and subsidized by Ethereum.
And they convert that to Blockspace.
Yes.
And their own Layer 2, and then they sell that to devs and applications.
Oh, layer 2s turn blob space into Blockspace.
Yes.
That's the tagline of the roll-up-centric world.
It's much easier
to scale data availability on layer one. And roll-ups are going to take that data and convert it to
scalable execution. So you scale one, you scale the other, much easier. Yeah, it's just like,
you know, unrefined oil that you then process and turn into like a petrol or something like this.
Amazing. That's a great metaphor. Yeah. So as a developer on layer two, you have much more freedom to do
whatever you want, however you want. And it's something that kind of goes under the radar with layer two
execution is that you can only batch the state transitions onto layer one. So if I do a very,
very complex execution at layer two that sends one ETH to, like, a thousand people and then,
I don't know, loops around and buys a bunch of NFTs and trades them somewhere, buys collateral,
like you can think of this hyper-complex transaction. And then that goes batches onto layer one,
only the state transition. So people don't need to verify the whole execution of the transaction,
they only need to verify the actual outcome of the transaction.
And that's much more scalable.
And it's pretty cool how more complex transactions at Layer 2 become cheaper compared to the same transaction at Layer 1.
It's not the same ratio of scalability for each transaction.
Like a more complex transaction has way more savings.
So as developers on Layer 2, there's way more use cases and way more freedom that become
available to do all sorts of things that we aren't even conceiving of right now.
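A toy comparison (with made-up, illustrative numbers) of why complex transactions save so much on layer two: layer one pays per execution step re-run by every node, while the rollup only posts the resulting state diff as data:

```python
# Toy illustration with hypothetical numbers: L1 cost scales with
# execution work, while a rollup's DA cost scales with the state diff.
L1_GAS_PER_TRANSFER = 21_000       # rough gas for one simple ETH transfer
DIFF_BYTES_PER_ACCOUNT = 32 + 16   # address + new balance, roughly

def l1_gas(num_recipients: int) -> int:
    """Gas if every transfer is executed individually on layer one."""
    return num_recipients * L1_GAS_PER_TRANSFER

def l2_diff_bytes(num_recipients: int) -> int:
    """Data posted by the rollup: one sender entry plus one per recipient."""
    return (1 + num_recipients) * DIFF_BYTES_PER_ACCOUNT

print(l1_gas(1000))         # 21,000,000 gas: roughly a whole L1 block
print(l2_diff_bytes(1000))  # ~48 KB: a fraction of one 128 KiB blob
```

The more looping and internal complexity a transaction has, the bigger this gap gets, since none of that intermediate computation shows up in the state diff at all.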
Computers are so cool.
Okay, Dom, let's do our best to get into probably what's about to be the most technical part of this conversation, which is actually how does a blob become a blob?
There's some like crazy math involved here.
There's things like polynomial commitments that make me scared.
How does a blob become a blob?
How do we start this conversation?
It's all polynomial magic.
Oh, no.
Basically, you have these 4,096 elements that contain the data of your blob.
Sorry, what's an element?
A field element. It's basically a number, but inside of modular arithmetic. I don't know how technical
you want me to be about this polynomial stuff. I have a feeling we're getting to the firmware level
of, you know, technicality here, but please continue. Yeah, this is the stuff that everyone on layer
two doesn't need to care about, including developers and roll-up sequencers. All they do is send the data
to layer one, and then layer one transforms it into this big polynomial equation that, like, if you
have a bunch of data, you put them inside these things.
and then you interpolate into a polynomial.
So it's kind of like, you know how two points make a line?
Then you can just treat your two-point data set as a line equation.
And then suddenly this line is extended to infinity on both sides.
So that's what we're doing.
But with 4,096 numbers instead of just two,
so it's going to be a high-degree polynomial.
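The "two points make a line" intuition can be sketched in a few lines of Python. This is only a toy illustration using exact rational arithmetic; real blobs interpolate 4,096 field elements under modular arithmetic and commit to the polynomial with KZG, none of which is shown here.

```python
from fractions import Fraction

def interpolate(points):
    """Return a function evaluating the unique polynomial of degree
    < len(points) that passes through every (x, y) in points (Lagrange)."""
    def p(x):
        total = Fraction(0)
        for i, (xi, yi) in enumerate(points):
            term = Fraction(yi)
            for xj, _ in points[:i] + points[i + 1:]:
                term *= Fraction(x - xj, xi - xj)
            total += term
        return total
    return p

# Two data points "become" a line that extends everywhere:
line = interpolate([(0, 3), (1, 5)])   # the line y = 2x + 3
print(line(10))                        # 23 -- a point we never supplied

# Same idea with more points, just a higher-degree polynomial
# (a blob does this with 4,096 field elements instead of 4):
curve = interpolate([(0, 1), (1, 0), (2, 3), (3, 10)])
print(curve(4))                        # 21 -- evaluable anywhere on the curve
```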
I think at the end of this technical conversation,
the mic drop punchline is that this is what sharding is.
And then there's a bunch of middle-ground, question-mark math steps that I don't understand.
But the end result is like, oh, the execution sharding that we originally planned for in the roadmap.
Now we have data sharding, and it's done with these polynomial commitments.
And ultimately, you have more data that is actually contained by a much smaller amount of data. Like you said with extending the line,
the smaller amount of data contains references to every other bit of data that is in the blob.
And once you, quote unquote, extend the line, you can, like, unfold the packet of data to create the full amount of data that is the blob.
That's my summary of this.
Do you want to add any more of that?
There's more polynomial magic to reconstruct the whole data from just a portion of it, but also to check its availability, which is the crux of danksharding.
You'd want nodes to be able to check that data availability, like that the data was actually published by a block producer or
a roll-up sequencer. You can check that a roll-up sequencer was legit and posted the data on chain,
but without downloading that data yourself. So you're not suffering that burden from trying to
enforce the availability of data. This is what the polynomial magic enables. So really quick here,
Dom, so you just mentioned data availability sampling, I believe. This is a future upgrade that is
beyond EIP 4844. Yes. So EIP 4844, another synonym for that is
proto-danksharding. And the proto means it's first, right? First danksharding.
No, it comes from protolambda.
Oh, yeah, protolambda. That's right. Never mind. I've forgotten about that. So protolambda,
but it also implies, like, it's first. We do that first. And then the data availability
sampling you were just discussing, which also uses some magic polynomial math, that comes later
with full danksharding. And the benefit there, with full danksharding later, TBD, we don't
have any dates on this. Like, think years, not months, okay?
And the benefit that that gives us with full danksharding and with data availability sampling, full danksharding is what?
The validator clients just hold less data, so that 50 gigs that you were talking about earlier, that drops down?
Or what properties does full danksharding give us?
Yeah, so 4844 sets the stage for full danksharding with all the polynomial stuff.
But also, like I said earlier, Ethereum right now has kind of a data availability layer in the form of block space and calldata.
But it's not scalable, right?
Because every node needs to download everything,
which creates the bottleneck.
So to answer your question,
the data availability sampling aspect of full danksharding
will enable this data availability layer
to become much more explicit,
where even a full majority,
like a supermajority of validators
can't fool you into believing that an unavailable block is available.
And this is what your node is going to sample,
just a tiny amount of data to be sure
with like a one in a trillion probability
that the data is actually there.
Okay, so Dom, just again,
high level, not in the weeds,
because there's so many things we could explain here,
but high level, this branch of magic polynomial math,
which enables proto-danksharding and full danksharding, okay?
This feels like a free lunch to me.
It's like, wow, we've just like, here you go with some math.
We just put some math on it, and we, you know, we get scalability.
What kind of, like, is this new stuff?
is this branch related to
cryptography at all? Because we're used to
exploring on Bankless, of course,
you know, ZK-SNARKs and this whole
new branch, this relatively new, decade-old
branch of cryptography
that's completely revolutionized everything
about blockchains and how
we scale them, what we do on them. But this,
you're talking polynomials.
I mean, I learned some polynomial stuff in
algebra, right, in high school. Is this
a new branch of math, or is this stuff
that we've used all over the place
and are just now applying to
Ethereum and blockchains just because we figured out a clever way to do it.
Yeah, I would say it's a combination of many things that we already knew inside
cryptography, so like erasure coding and data reconstruction and polynomial commitments.
And this is basically, it's new in the way that Bitcoin was new, right?
Because Satoshi did not invent proof of work.
He did not invent cryptography or cryptographic signatures.
Like, the novel thing was combining these things together and getting consensus.
So it's kind of like that from a research
perspective. I'm not saying that data
availability sampling is as novel as Bitcoin
was in 2009.
But this stuff like data availability sampling
and erasure coding, that sort of thing, I mean,
that's existed for a while with just like hard drives,
right? Am I right? Like the old style
of hard drives, when you put them in RAID arrays
and that sort of thing, you use some
of this? Yeah, you're talking about redundancy.
Yeah, redundancy, exactly. Yeah.
It's an old concept. I know RAID doesn't
really do it with polynomials, but it's kind of
the same idea where if you lose
a portion of the data, you can reconstruct it.
So of course, this part is not new or novel, but applied to blockchain,
we can combine all these elements together that we already knew into this data availability sampling to solve the problem of data availability in a scalable way.
Because right now, it's not scalable.
To confirm that a block is available to the network, your node has to download the whole block.
And that's why it's kind of implicit where it's like, of course, I need the block to verify the signatures and the transactions and everything.
So that's like an implicit step:
you need to download the block.
But now we're really thinking about it.
How do we solve the problem of checking if the block and the blobs are all available
without having to download the whole blocks and all the blobs?
And this is what full danksharding will solve.
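The reconstruct-from-a-portion idea can be sketched with the same interpolation trick. This is a toy, self-contained illustration (not the actual KZG / finite-field machinery): treat the data as evaluations of a polynomial, publish extra evaluations, and recover everything from any large-enough subset.

```python
from fractions import Fraction

def interpolate(points):
    # Lagrange interpolation: the unique degree < len(points) polynomial
    # through all the given (x, y) points, returned as an evaluator.
    def p(x):
        total = Fraction(0)
        for i, (xi, yi) in enumerate(points):
            term = Fraction(yi)
            for xj, _ in points[:i] + points[i + 1:]:
                term *= Fraction(x - xj, xi - xj)
            total += term
        return total
    return p

data = [9, 2, 7, 4]                          # 4 original values, at x = 0..3
poly = interpolate(list(enumerate(data)))
extended = [(x, poly(x)) for x in range(8)]  # publish 8 points: 2x redundancy

# Half the points are lost; any 4 survivors still pin down the polynomial:
survivors = [extended[i] for i in (1, 3, 4, 6)]
recovered = interpolate(survivors)
print([int(recovered(x)) for x in range(4)])  # [9, 2, 7, 4] -- data is back
```

Sampling then becomes possible because of this redundancy: if more than half the extended points were withheld, the data would be unrecoverable, so random samples would quickly hit a hole.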
Dom, are you able to kind of put numbers on the scalability that proto-danksharding
and then full danksharding enable?
Like, how much more scalability?
Is that a valid question?
Can that be measured?
It's very speculative, I would say, right now.
Like, people are throwing numbers around like 10 to 100X with just 4844.
And then with danksharding, we're talking,
I know Vitalik throws out like 100,000 TPS, but it's a bunch of speculative and weird metrics
because a transaction can be anything.
And the more complex ones get more scalability, like I said earlier.
So I don't really have any numbers.
But with 4844, we're going to get those numbers.
And it's something I'm very excited for.
Right.
It's impossible to really measure these things.
Like a transaction is a different thing depending on different contexts.
But I think the point that I was trying to get out of you is that there's a number of zeros.
Yeah.
It's a number of zeros of scalability increases depending on how you measure it.
And there's this one little part about full danksharding that I think is actually just really emblematic of the balance that Ethereum takes between hardline trustlessness and pragmatism.
And that is the number that you said earlier.
I don't know, maybe you just threw it out there, but it is spiritually like aligned with this number.
It's like one trillionth.
And in full dank sharding, we do this.
availability sampling mechanism where you sample a data and if you are somebody who's trying to
produce a fraudulent block or hide data there is at most after one sample a 50% chance that that is a
fraudulent block yeah and then you take another sample of the data and you cut that in half and then you
cut it in half so it goes down to 25% 12 and a half percent 8.275 percent whatever and you do that like
i think 30 times yeah and you get to one one billionth odds yeah that this block is
is fraudulent. And at some point, we, like Ethereum in dang charting, pick this numbers.
Like, okay, we'll do 30 samples. And then that will give us a one in one billionth chance that
this block is fraudulent. And we will accept that probability. We are not so crazy hardliners
that we won't trade off multiple orders of magnitude of scale increases when we are giving up
a one-one billionth probability that this single block in the blockchain has fraudulent data.
That is a tradeoff that Ethereum is making.
Multiple orders of magnitude of scalability increase for a one-in-a-billion probability
that somebody hid data inside of a block.
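The arithmetic being walked through here is just repeated halving. As a sketch, assuming each sample is independent and that hiding data leaves at most a 1-in-2 chance per sample of looking fine:

```python
def chance_of_being_fooled(samples: int) -> float:
    # If a block's data is actually unavailable/unrecoverable, each random
    # sample has at most a 50% chance of hitting a chunk that is there.
    return 0.5 ** samples

print(chance_of_being_fooled(1))    # 0.5
print(chance_of_being_fooled(3))    # 0.125
print(chance_of_being_fooled(30))   # ~9.3e-10, about one in a billion
# The "one in a trillion" figure needs only around 40 samples:
print(chance_of_being_fooled(40))   # ~9.1e-13
```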
I think that's just like such an elegant way of articulating Ethereum's values.
And the way that cryptography allows us to do cool things
by compressing the bad and magnifying the good.
I just think it's so elegant.
Yeah, so it ties in with the concept of weak subjectivity, right?
Because with blob expiry, we're kind of losing that aspect that a caveman can just go hibernate in a cave for a thousand years and then come back out and verify the whole blockchain from genesis.
That's something that we already kind of sacrificed with proof of stake, with the weak subjectivity.
But it's such a small trade-off compared to the scalability it enables and everything.
And yeah, compared to what you said, I would say we're probably going to do more than 30 samples with full danksharding.
So it's going to be an even lower probability that you believe
an invalid block is valid and available.
But also, this is the probability that your node personally gets fooled.
Like, there is no way you can fool the entire network, right, with all these probabilities.
Even a supermajority can't convince the network that unavailable blobs are available
and then do nasty stuff at layer two.
You know, David gave a shout out to computers earlier.
I just want to shout out math right now because...
Yeah, this is great.
The statistics, the math behind this is absolutely fantastic.
I do want to just ask a general
question here, Dom, and order of magnitude is fine, right? But just, this will lead us into the next
section. So we talked about the history, we talked about the technical, and I think we've,
we have a sufficient definition of what blob space is. At least I feel like I know blob space now.
I know what it is. Now we want to, you know, talk about the economics of blob space and this decoupling
of the market. But before we do, I do want to get some rough approximation of what David was saying
based on kind of like the scaling factor, right? So if I go to L2BEAT, slash scaling,
slash activity, there's this chart here that shows me the current scaling factor of layer
twos, and it gives an approximation.
And it says layer twos are operating at about 5x mainnet activity right now.
And that's useful for me.
And this is 5x mainnet.
And right now, this world, they're consuming a very small amount of block space, aren't they,
per day of Ethereum, right?
And they could be consuming a lot more block space.
And if we had the activities to support it, layer twos could be,
you know, much higher than a scaling factor of five. I don't know if that's like 10 or 50 or
100. I'm not really sure. But then when they have this new blob space resource available to them,
then they have this whole other scaling factor. And as you and David were saying, it totally
depends on what they do inside of that blob space, right? So they're going to take the blob space.
They're going to resell it as block space. And the block space that they resell could be very complicated
transactions, or it could be very simple transactions. But rough order of magnitude here, right? If we have
blob space, and it's almost like, let's say it's 80% used, this blob space is now used, post-EIP
4844, how much of a scaling factor are we talking for, like, typical DeFi-type
transactions? Are we going to be able to turn this 5x into a 50x? Are we talking about like a 500x
Ethereum mainnet? And again, just rough
or like approximations here.
Well, the amount of scalability
really depends on the users. So like
roll-ups could update to 4844
and then have the same amount of users,
and the same 5-point-something X on
L2BEAT would stay the same. The difference would be
that it would be much cheaper for each layer
two's users, to the point where it's practically free.
Right. So like you said,
80% of the blob space is used,
then the price of blobs goes to zero, right?
Because it's anything below 100%.
That's right.
The price, it's like EIP-1559.
If we're above 100% of gas target usage, then the price goes up and down to manage congestion.
So the prediction would be that at first, blobs are going to be mostly empty and only
used scarcely by roll-ups because they don't have the actual user base to fill them up.
So we're going to see roll-ups operate practically for free with just a tiny amount of execution gas at layer 1.
And some other expenses for roll-ups, whatever expenses they have.
But then that layer two transaction can be subsidized,
and then you can have a world where layer two is basically free for a while
until there's enough users to fill that blob space and congest it
and make it go up in price.
So that's kind of the economics of 4844 in a speculative prediction of mine.
And that's one of the things I'm really excited to see is raw data and how it's actually used.
So, Dom, let's spend more time
kind of defining the economics of this. And this is a quote from your article. And we've alluded to
this earlier in the episode, but you said this. Another fun aspect of EIP 4844 is the introduction
of the two-dimensional fee market, meaning execution and blobs will be priced separately,
according to the individual demand for each. The price of execution is simply the gas fees
we know today, with EIP-1559 and all that good stuff. So can you explain what you mean here?
So this is a two-dimensional fee market.
There's like two types of gas.
Is that one way to think about this?
Yes.
And then you mentioned the hallowed EIP-1559.
So how is that related?
What does that imply about the gas market for blob space?
So it's a two-dimensional market in that these two resources are going to be priced separately.
So one concrete aspect is the famous NFT-drop-at-layer-one example: like, if it happens, then the roll-ups are shielded from that,
because the price of blobs that they commit on chain
is going to stay the same relative to the demand of other roll-ups,
whereas Layer 1 execution gas can be very expensive.
So that's a cool aspect that Layer 2 users are shielded
from whatever degen activity is happening at Layer 1.
Right, we'll use the word decoupling now.
These were originally coupled markets; now we are decoupling the markets.
Exactly.
So the high-level overview of 1559 that I'm sure listeners know very well
is that there's a target of gas use per block,
and if it goes above that target, the price of gas goes up for the next block, and then it goes down if it's below the target.
And the goal is to manage congestion, right? If there's too much demand for gas, the price goes up, and it makes a more efficient auction than we had before.
And then that base fee gets burned because you don't want validators to be able to manipulate it to their own benefit to have, like, high gas fees and reap the rewards.
And we get the same kind of market for blobs, but it's a completely different market.
So right now the specs say that we target three blobs per block, but we can handle six.
And if it's six, then the price is going to go up for the next block.
And it's that same idea.
Like the price goes up exponentially if the blocks are always filled with six blobs.
And by the way, these numbers might change because it's still a discussion in progress.
And that's the blob market.
We're going to have a separate EIP-1559,
with another base fee for the blob gas being used, which also gets burned for the same
reasons. Okay, so both gas fees to pay for block space, aka the status quo, that's burned. We know this.
Blob space fees are also burned in the same mechanism. It's basically one-to-one parity with block space, correct?
Yes, just the price function is a bit different for blob space, but that's technical stuff.
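That price function can be sketched with a simplified update rule. The constants here are illustrative only; the actual EIP-4844 spec tracks "excess blob gas" and prices it with a `fake_exponential` helper, but the shape of the behavior is the same: above the target of three blobs the fee ratchets up, below it the fee decays.

```python
TARGET_BLOBS = 3   # target blobs per block, per the draft numbers discussed
MAX_BLOBS = 6      # hard cap per block in the same draft

def next_blob_basefee(basefee: float, blobs_in_block: int) -> float:
    # Simplified 1559-style multiplicative update, ~12.5% max step per block.
    delta = (blobs_in_block - TARGET_BLOBS) / TARGET_BLOBS
    return basefee * (1 + 0.125 * delta)

fee = 100.0
for _ in range(10):                    # ten consecutive max-full blocks
    fee = next_blob_basefee(fee, MAX_BLOBS)
print(round(fee, 1))                   # 324.7 -- exponential growth

# And sustained empty blocks push the fee back toward (effectively) zero:
fee = 100.0
for _ in range(50):
    fee = next_blob_basefee(fee, 0)
print(round(fee, 3))                   # 0.126 -- price decays toward free
```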
But it's the same concept. Yeah, you gave the number three. Three is the target, so we target three blobs.
Blobs are all uniform size, right? One blob is always the same size as another blob. Yes. But if a roll-up has more data,
they can post two blobs in the same transaction if they want.
So like the data requirements for roll-up,
they can use as many blobs as they want.
They just have to pay for it, like the rest of other roll-ups.
Okay, so post-4844 in Ethereum,
a block will have space for three blobs.
So when we talk about blob space,
we're actually just talking about three slots for blobs,
but with tolerance to flex up and down.
Yeah, so that's where the 50-gigabyte number comes from.
Like, if there were six blobs every single block,
then that number would be 100
gigabytes, but that's completely unsustainable.
If you have six blobs for every block for 18 days, then the price of blobs just goes
up exponentially.
And at some point, there's just not enough eth in the world to pay for those blobs with that
high base fee.
So the target is going to be, on average, three blobs per block.
But that's to manage congestion.
But if there is no congestion, then you have like one blob or zero blob in a block.
Then that price goes down to basically zero.
So that's where, at first, when there's
no congestion, blobs will be practically free.
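As a back-of-the-envelope check on the "~50 gigs" figure from earlier, a sketch assuming 128 KiB blobs (4,096 field elements of 32 bytes each), 12-second slots, and the roughly 18-day retention window mentioned here; these are the draft numbers being discussed, not final:

```python
BLOB_SIZE_BYTES = 128 * 1024         # one blob = 4096 field elements * 32 bytes
BLOCKS_PER_DAY = 24 * 60 * 60 // 12  # one block per 12-second slot
RETENTION_DAYS = 18                  # blobs are pruned after ~18 days

def retained_gb(avg_blobs_per_block: int) -> float:
    # Total blob data a node keeps around before pruning kicks in.
    return (avg_blobs_per_block * BLOB_SIZE_BYTES
            * BLOCKS_PER_DAY * RETENTION_DAYS / 1e9)

print(round(retained_gb(3)))   # 51  -- the "~50 gigs" at the 3-blob target
print(round(retained_gb(6)))   # 102 -- the "100 gigabytes" worst case
```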
So one perspective about this is, oh, we have a brand new resource called blob space and it also
burns ETH. There's another mechanism, a brand new mechanism to burn ETH. Yay, we're going to burn
more ETH. But actually, I don't think that's true because the existence of blob space is
likely going to pull away demand for block space, because why are we doing this in the first place?
We're trying to make roll-ups cheaper. And so there will be a tension, a balance between block space
and blob space. They won't be formally coupled in the protocol technically, because we are decoupling
blob space from block space, but they will be coupled because if it is so much cheaper to do whatever you're
doing on layer one on a layer two, then transaction demand is going to flow out of the layer one
towards the layer two and then rebalance the demand between blob space and block space.
So these are like informally coupled. If it is so much cheaper to buy blob space than it is to
buy block space, then these things will probably balance out a little bit. But in aggregate,
I think total burn will come down because we are encouraging, incentivizing layer two activity,
which is fundamentally cheaper than layer one activity. This is my intuition. Is this right,
Dom? Yeah, in aggregate, it's going to go down at first. And then as layer twos hopefully gain more
adoption and more users, then it's going to go back up eventually, with way more users doing like
half a penny or thousandths of a penny per transaction on layer two, so that in aggregate, if that's
enough to fill the blobs, then the roll-ups have enough fees from these many, many
thousandths-of-a-penny transactions that they can afford to pay the high gas prices for blobs once that's
congested. But right now, I don't believe blob space is going to be congested at first with that
level of activity, which is why it's very alleviating. Just three blobs per block is going to be
very alleviating for roll-ups' needs. But we're going to lose that 200 or 300 ETH a day that Ryan said
earlier. Like, that's probably going to go way down. So we're going to... To zero. That'll drop towards
zero, right? We're going to burn slightly less ETH at first, but it's not really a problem,
because if we're getting much more adoption from that and scalability, then that's a win.
Right. The line here was like, we're just going to make it up in volume. So burn is going to
drop toward zero because we're reducing fees to layer twos, but then, like,
layer two fees also drop toward zero. And then all of a sudden, we're like, we can take off the
brakes of layer two economic demand and pick it up in volume. And so this is Ethereum opening up
its layer twos to the long tail of economic use cases that can approach, you know, point
zero zero zero zero zero one pennies per transaction because we've enabled blob space.
Yes. And like I said earlier, roll-ups can have their own incentives and pay for these
fractions of a penny for each user. And then you have a layer two experience where it's completely
free, and you can just do whatever you want, and more complex dapps can now be viable because there's
basically no fees. And that sort of activity, at first, is going to be incentivized by these
effectively zero-fee transactions. That's going to drive more users onto layer two, and that's going to
fill up the blobs eventually. And then at some point, it's going to start burning more.
All of this has been a fascinating conversation, very interesting, but one of the least explored
areas, right? So I think some people listening will just, you know, maybe early in the podcast
are up to this point, be like, WTF, there's going to be a whole new resource market? Like,
how does that impact demand? You know, there are these ideas earlier, and some people still think
this, that layer twos will compete against Ethereum and, you know, kind of the layer one, right?
This is a whole new variable, I guess I'm saying, that I'm not sure many have looked at when it
comes to like how do you try to predict the future price of ETH and the future demand for
block space. What's effectively happening here, though, is we get, you know, a new furnace, right?
So we've had the furnace one of just block space demand and that burns some ETH.
Then we have the second furnace firing up. In the short run, the cost of all of the layer two
block space consumption will drop down towards zero. In the longer run, we may end up burning more
ETH, and we probably will, but that will take some time. I don't know if that'll take months or that'll
take years. In fact, it's probably impossible for us all to predict here. But one of the side effects,
I think, is it depends, you know, kind of how you model out the value of ETH and where that sort of comes
from, right? Because there's certainly some value of ETH accrual that is related and correlated to
kind of burn. How much block space is consumed? How much blob space is consumed? How much ETH is
burnt, that sort of level. But then you also have to think about in the layer two world,
all of these new defy applications or in general applications that open up based on cheap
block space, how much ETH will they actually consume as what we've termed before economic
bandwidth, or like ETH-based collateral? I mean, you look at an app like friend.tech on Base,
and it's purely denominated in ether, right? And so that has been a net accretion
point for ether value. And so the question, I think, becomes with these new layer two's opening up,
how much more do they start to use Ethereum's monetary properties and eth is money, right? And
eth is collateral and eth is a store of value and all of these things. And does that compensate
for the short-term reduction in burn? It's a very complicated kind of economic model,
but those are the puts and takes of this. One question I have for you, though, Dom, is you mentioned
this use case of layer 2s
will no longer have to compete against
the degen NFT drop on
Mainnet. What would a world look
like where other layer 2s though are
competing against other layer 2s
for blob space?
So, you know, that would have
some contention and resources.
So can you imagine a world where we're not at
80% of blob space consumed,
but we're doing the full, you know,
three blobs per block thing, and we're getting
into 4 and we're getting into 5 and we're getting
into 6. Does this mean like
an Arbitrum layer two competes against an Optimism layer two. I'm not even sure how to imagine
this world. What does that look like? That's pretty much exactly what you said. They're going to
compete with each other. And you can think of roll-ups taking small pennies from a lot of users for
transaction fees on layer two. And then that's enough to pay for layer one blobs. And whatever
is left over is the roll-up's profit, right? So if blob prices just go up, then the layer twos,
like, they're going to lose money if the blobs are too expensive for them. So the most efficient
roll-up, the one that can have more activity, can batch more transactions, can compress better,
they're going to make more use of that more expensive blob. So that's sort of the world we're
heading to, where if blob space becomes congested, the roll-up that uses it most efficiently
is going to be able to provide cheaper transactions on their layer two, which is going to drive users in.
Like you say, Arbitrum versus Optimism, they're going to fight each other over who can use that blob space better.
So all of these roll-ups are in this race to convert this raw material of blob space from Ethereum into the most valuable product it can be, in the form of layer two block space.
And some roll-up ecosystems will be more successful than others at doing that.
Whatever that means, whether that's kind of the network effect of the roll-up.
It means the same thing as for the Ethereum layer one.
After all, the Ethereum layer one block space is the most valuable block space in the world because people do
NFT drops on there. People do DeFi stuff on there, network effects stuff. There's a lot of liquidity.
People pay more for Ethereum layer one block space. And layer two block space will be judged on
the same merits. Like, how valuable layer two block space is will be a function of how much demand there is,
which is going to be a function of how much aggregate economic activity there is on every respective
layer two. Yes. And it's also on a spectrum where
like a roll-up that aims to be ultra-secure,
they're going to want to commit a blob every single block,
because from the perspective of a layer two roll-up,
once a transaction is batched onto a layer one blob,
it becomes finalized from the roll-up's perspective.
Like the security is now offloaded to Ethereum.
So if you're a roll-up whose community really values high security,
then it might be worth it for them to post a blob every single block,
even if it's not completely full or stuff like that.
So that would make the roll-up more expensive.
But a roll-up that doesn't care too much about security,
like a roll-up that has, like, game assets or something
where it's not too valuable.
And they can afford to just wait out a few blocks
and have the price of blobs go down.
And now they commit a lot of data compressed.
Roll-ups don't have to post a blob every single block.
So that's another aspect of, like, individual trade-offs that roll-ups will make.
Yeah, this is something I wanted to unpack: there's this new variable
in blob space resources,
which is exactly what you've said,
how frequently does a layer two
choose to commit its data root to the layer one?
And with proto-danksharding,
you said that there is space for three roll-ups
per block to submit their blobs.
Well, for three blobs.
Three blobs, excuse me.
Three blobs per block on the layer one.
So, like, three total roll-ups can submit
a total of three blobs per block.
That's a low number.
That is a very low number.
I mean, that goes up with full danksharding,
but only three blobs per block,
every 12 seconds. And so this is also going to be a vector, a variable that layer two's tinker with.
If you submit your blob every single block to the Ethereum layer one, you are maximally secure,
but that might not be the optimum balance of security versus expensiveness or efficiency
that your users on your particular layer two demand. Maybe like the typical optimistic roll-up,
which is already operating on fraud proofs, maybe they only need one
blob every, I don't know, 10 minutes or something, and all of a sudden, going from 12 seconds to 10 minutes is massively more efficient.
And so this isn't really anything that's part of the core Ethereum protocol, but that is a variable to consider when we talk about layer 2 economics, which is how often do they commit their blob to Ethereum layer 1 blob space.
Yeah. I'll just add something for the listeners. Like, if you're a layer 2 user and the layer 2 posts every 10 minutes, you don't have to wait those 10 minutes.
Like your transaction is instantly confirmed by a sequencer and then you can instantly do other stuff.
But every 10 minutes would be the full commitment and batching on chain, where your transaction on layer two now becomes completely settled and finalized and the sequencer can't change it.
So that's sort of the tradeoff like we have today or with Bitcoin where you wait more blocks if you want more security.
Like, if you're on a roll-up and you want ultimate security, then you're going to wait until the transaction is committed and
batched on chain before considering it finalized. But if you don't really care, you're just doing
small transactions, and the sequencer is going to be the one in charge of the security in transit
until it gets onto the layer one. One word you used earlier in this conversation, Dom, I want
to return to, is subsidy. That's kind of an interesting word because, you know, like we like
analogies on Bankless. This sort of feels like, almost like a government
subsidy of a particular sector that the government wants to grow, right? Like, say solar, there is going to be
a government subsidy in the form of tax credit or tax reduction for a particular industry or set of
resources. Because why? Because let's say some government wants to subsidize green energy in the form of solar.
And you can agree or disagree with that policy. Let's not get caught up in the weeds here. But this is
effectively the Ethereum protocol subsidizing in a way the roll-up roadmap, would you use these terms?
Maybe the net effect is going to be the propagation of roll-ups because the resource is now
much cheaper. How do you think that analogy holds? I would just argue against a tiny thing you said
about it's not the layer one subsidizing. The layer one just says, here's the blobs, do whatever you want
with it. And if they're so cheap, then it's up to the roll-ups to decide if they want to subsidize
and run at a loss, just to attract more users.
It's like we've seen this war with dapps,
like subsidizing liquidity or giving token incentives.
Like, roll-up sequencers can do the same thing and say,
here's free transactions, here's token incentives,
come use our roll-ups, come, like, increase the network effects,
just so that in the long run they have these users
and compete against each other for network effects.
But this is, in the end, it's positive for Ethereum
because it's all getting settled on Ethereum.
And like, it's really up to
the roll-ups to decide if running at a loss for a couple of months is worth it. But while the
blobs are very cheap and not congested, then that's probably what a lot of them are going to try
doing. But the broader point here, Dom, is that the Ethereum protocol in general and Ethereum
researchers are choosing to implement this change in order to make roll-ups far cheaper.
Yeah. And this is towards this goal of like what Vitalik said is like a blockchain transaction
should be fractions of a penny, not even like pennies, right?
It's all towards that goal, but it is an intentional decision that the protocol is making, right?
Yeah, it's the roll-up-centric roadmap. It basically makes roll-ups first-class citizens on the layer one chain,
but it's not really a subsidy as such. It's just a decoupled market for more efficient usage of resources
that can then be scaled up. I think you might actually be able to take the alternative perspective on this, guys,
which is like, Ryan, the way that you articulated that is that the protocol designers of Ethereum
want a specific outcome. And I think that is actually totally true and exactly what happened.
That's why we call it the roll-up-centric roadmap to Ethereum. Yet also, what we are doing
is merely decoupling data from execution into two different spaces in a block. And then what happens
as a result of that decoupling is that it just so happens that roll-ups are cheaper. But simply doing the act
of decoupling is all that 4844 is doing. We are just separating data from execution. It just so happens
that that makes roll-ups cheaper. And so I think there is an argument that this is actually decently,
credibly neutral. Like, we are not picking winners and losers here. We are not saying that roll-ups are
the winners, even though, again, it is kind of like why we're doing this. We are just merely decoupling
data from execution and letting the free market build on top of this permissionless protocol.
And it just so happens that decoupling data from execution is more efficient and allows
for more total net economic activity
to be produced through crypto economics.
Do you like that approach, Dom?
Yeah, it's fine.
It's a good analogy.
We don't really pick and choose the winners and losers.
It's really, here's the resources available by layer one.
Go make use of them as efficiently as you want.
And that's what the market is going to converge onto, right?
The roll-up that has the most efficient usage of blobspace
will offer the cheapest fees and thus attract the most users.
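The "decoupled market" Dom describes is concrete in EIP-4844: blob gas gets its own EIP-1559-style base fee, adjusted exponentially around a per-block target, independent of regular gas. A minimal sketch of that mechanism, using the fee formula and constants from the EIP as specified around the time of recording (they could change before shipping):

```python
# Sketch of the EIP-4844 blob fee market (constants from the spec at the
# time of recording; subject to change before mainnet).
MIN_BLOB_BASE_FEE = 1                 # wei per unit of blob gas
BLOB_BASE_FEE_UPDATE_FRACTION = 3338477
GAS_PER_BLOB = 2**17                  # 131,072 blob gas per blob (~128 KiB)
TARGET_BLOBS_PER_BLOCK = 3
MAX_BLOBS_PER_BLOCK = 6

def fake_exponential(factor: int, numerator: int, denominator: int) -> int:
    """Integer approximation of factor * e^(numerator/denominator),
    via a Taylor series, as defined in EIP-4844."""
    i = 1
    output = 0
    numerator_accum = factor * denominator
    while numerator_accum > 0:
        output += numerator_accum
        numerator_accum = (numerator_accum * numerator) // (denominator * i)
        i += 1
    return output // denominator

def next_excess_blob_gas(parent_excess: int, blobs_used: int) -> int:
    """Excess blob gas accumulates when blocks run above the 3-blob target
    and drains when they run below it (floored at zero)."""
    target = TARGET_BLOBS_PER_BLOCK * GAS_PER_BLOB
    return max(parent_excess + blobs_used * GAS_PER_BLOB - target, 0)

def blob_base_fee(excess_blob_gas: int) -> int:
    """Blob base fee rises exponentially with accumulated excess blob gas,
    entirely independent of the regular execution gas base fee."""
    return fake_exponential(MIN_BLOB_BASE_FEE, excess_blob_gas,
                            BLOB_BASE_FEE_UPDATE_FRACTION)
```

With no sustained demand above the target, the fee sits at the 1-wei floor, which is why blob costs are expected to start near zero; only sustained congestion above three blobs per block pushes the price up.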
And in the long run, I know that's something Justin Drake likes to ponder: enshrining a roll-up on the layer one.
At which point, we're using the full danksharding and 4844 infrastructure to scale up layer one execution.
But that's kind of in the far future, where we enshrine a zkEVM.
And then we can have much higher gas limits and have a roll-up at layer one that's going to scale up.
That's, Dom, is that, that's 2019, isn't it?
Are we full circle to 2019 and we have like sharded execution?
We're finally going to be able to do that.
With an enshrined roll-up, yes, that would effectively be scaling up execution,
but in a much, much better way than we initially planned.
So we're going full circle with roll-ups and then getting that into layer one in the end.
But that's kind of in the far future in crypto time.
We're just going to have roll-ups.
Doing the full circle and returning to execution sharding
in the distant future of Ethereum's roadmap
is like one of the weirdest, most fascinating
things about this universe's timeline.
Books will be written about this.
I agree.
This would make 2019 me so happy
to hear that this is actually happening.
We found the more natural route
to execution sharding.
It's a much better outcome
and it's a much better way to get there too
because at layer one,
we don't have to do as many upgrades
and as much research,
because we can just let the market innovate
and find the best designs,
the most secure zkEVM, and stuff like that.
And then we just enshrine it.
We just yoink it from the layer two that developed it, innovated it, competed for it. And then,
assuming it's open source, of course, we enshrine it into layer one and reap the benefits.
And then we get back the coveted execution scaling. You sly dogs you. I don't know how you made this
happen. It's pretty clever. This is just the beauty of the harmony of Ethereum that I think really
exemplifies why I and so many others are just attracted to the protocol.
There's a beautiful harmony between the free market and the protocol, right?
Like, top-down rules mixed with bottom-up markets, and then that is Ethereum.
And we got there in a roundabout way.
It's the philosophy of addition by subtraction.
Like, whatever the community can do, we just delegate and there's no top-down approach, really.
Beautiful, beautiful.
So just to really drive home the roadmap on this: we have 4844,
optimistically maybe coming in November, pessimistically coming sometime in 2024,
with full danksharding sometime after that.
And then you just kind of left us with what will definitely be future content,
which is enshrining a roll-up in the layer one.
What does come next?
So let's say we're post-full-dank sharding.
Is that it when it comes to data availability?
Or what are the next steps after this?
It's not really a sequential roadmap, as you know.
So it really depends where we'll be with the other items,
like enshrined PBS and stuff,
by the time we have full danksharding.
But it'll definitely be one more step towards the endgame as described by Vitalik.
So let's just talk through blob space and how this kind of pans out with respect to the roll-ups.
So we'll have EIP 4844, right?
And at that point in time, blob space and, like, layer two data availability and transaction costs will drop toward zero.
Okay.
And then over the months, we know apps will take off. If you have a really cheap resource like blob space, guaranteed it gets used.
If we have cheap block space, it gets used on Ethereum. That is the rule here.
And so by the time that starts to ramp up, maybe we are consuming those three, four, five
blobs per block. We start to get congestion. And so rather than, you know, the 2022 people
complaining about expensive block space, everyone's complaining that the price of blob space is too
damn high, Dom. And then we have another ace up the sleeve, which is full danksharding.
Again, we don't know the exact timeline here. But once the blob
space fills up, full danksharding gives us even more blob space availability, right? And if
things work out from a timing perspective, maybe we meet the market need at the right time when
things are just starting to get congested again. Is that roughly the idea here? Yeah,
that sounds about right. It will be a crazy increase in blob space with full danksharding,
but we definitely have some things to figure out before we get there, like with regards to networking
and proper sampling techniques and everything.
So this is why 4844 is a perfect stepping stone for roll-ups,
because they don't have to wait as long.
They can just use the blob space today
and then not even have to upgrade once we have full danksharding.
It'll just be done behind the scenes for them.
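To put rough numbers on the jump Dom describes, here is a back-of-the-envelope data-availability throughput comparison. The 4844 blob size and 3-blob target come from the EIP; the 64-blob full danksharding target is a commonly cited figure, not a finalized spec, and may well change:

```python
# Back-of-the-envelope DA throughput. EIP-4844 numbers are from the spec;
# the full danksharding blob count is an often-cited estimate, not final.
BLOB_SIZE_BYTES = 128 * 1024   # 4096 field elements * 32 bytes each
SLOT_SECONDS = 12              # one block per slot

def da_throughput_bytes_per_sec(target_blobs_per_block: int) -> float:
    """Sustained data-availability bandwidth at a given blob target."""
    return target_blobs_per_block * BLOB_SIZE_BYTES / SLOT_SECONDS

eip4844 = da_throughput_bytes_per_sec(3)        # 4844 target: 3 blobs
danksharding = da_throughput_bytes_per_sec(64)  # estimated future target

print(f"EIP-4844 target:         {eip4844 / 1024:.0f} KiB/s")
print(f"Full danksharding (est): {danksharding / 1024:.0f} KiB/s")
```

That is roughly 32 KiB/s of blob data at the 4844 target, with full danksharding aiming for well over an order of magnitude more, which is the "crazy increase" referenced above.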
Wow.
Well, this has been an epic episode and a tour de force,
I think, blob space 101 of everything we needed to know.
And I was trying to keep track, David,
of how many times we said the word blob on this podcast.
I don't know.
But you definitely failed.
I don't know how well you did.
Well, you know what?
I think some Bankless listener will be able to tell us how many times the word blob was said on this podcast.
And if you DM us, you know, maybe there'll be something special at the end of that show.
They could tell me any number.
I can't believe you'd actually count.
Just Control-F the transcript.
Dom, thank you so much for walking us through this today.
It's been, thanks for having me.
Fantastic. Thank you.
There is an article as well.
If you like written material, Domothy wrote an article about blob space; we'll include a link to that in the show notes.
Got to let you know, as always, crypto is risky.
You could lose what you put in, but we are headed west towards blob space, the frontier.
It's not for everyone, but we're glad you're with us on the bankless journey.
Thanks a lot.
