Unchained - The Chopping Block: Data Availability & Why It’s Important - Ep. 599
Episode Date: January 25, 2024

Welcome to The Chopping Block – where crypto insiders Haseeb Qureshi, Tom Schmidt, Tarun Chitra, and Robert Leshner chop it up about the latest news. This week kicks off with a crucial question: are the latest trends in crypto ETFs signaling a major shift in the investment landscape? We delve into the market's nuanced response to these ETFs and what it means for investors. How important is client diversity for Ethereum's stability and future growth? The squad engages in a lively debate on this topic. With the advent of Proto-Danksharding, how might Ethereum's scalability be impacted, and what are the implications for the blockchain ecosystem? We further examine the user experience across blockchain platforms, particularly comparing Solana and Ethereum in terms of their user interfaces and transaction dynamics. Looking to the future, what breakthroughs and challenges can we anticipate in blockchain technology? Join us for an in-depth exploration of these key questions and their profound impact on the world of cryptocurrency.

Listen to the episode on Apple Podcasts, Spotify, Overcast, Podcast Addict, Pocket Casts, Pandora, Castbox, Google Podcasts, TuneIn, Amazon Music, or on your favorite podcast platform.

Show highlights:
🔹 ETF Market Analysis: Dissecting the impact of GBTC and other ETFs on the crypto market.
🔹 Client Diversity in Ethereum: Debating the pros and cons of multiple clients for network resilience.
🔹 Proto-Danksharding Effects: Assessing its potential to lower rollup costs and enhance scalability.
🔹 User Experience in Crypto: Exploring how fees and speeds affect user interactions on various platforms.
🔹 Solana vs. Ethereum UX: Comparing their user interfaces, focusing on transaction costs and latency.
🔹 Blockchain's Future Trends: Delving into predictions and emerging innovations in the blockchain world.
🔹 Ethereum's Protocol Evolution: Discussing the roadmap and future developments in Ethereum.
🔹 Scalability Solutions: Evaluating different approaches to scaling blockchains effectively.
🔹 Layer 2 Dynamics: Analyzing the growth and challenges of Layer 2 solutions on Ethereum.

Hosts:
Haseeb Qureshi, managing partner at Dragonfly
Robert Leshner, founder of Compound
Tom Schmidt, general partner at Dragonfly
Tarun Chitra, managing partner at Robot Ventures

Disclosures

Links:
Vitalik's post about cypherpunk values: https://vitalik.eth.limo/general/2023/12/28/cypherpunk.html
Polymarket for gas price per blob after EIP-4844: https://polymarket.com/event/gas-price-per-blob-1-month-after-eip-4844?tid=1706149992794
Tweet thread from DCinvestor: https://twitter.com/iamDCinvestor/status/1749410364666606075
Ethereum's Proto-Danksharding and EIP-4844: https://www.eip4844.com/
Ethereum 2.0 Clients Dashboard: https://clientdiversity.org/#distribution
Solana's Firedancer: https://github.com/firedancer-io

Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
You know, if we zoom out and you're listening to this show and you're like,
who the fuck cares about this data storage thing?
Like, why is ephemeral storage versus long term?
Who cares?
All you need to think about is this allows you to lower, you know, fees in some certain ways.
Not a dividend.
It's a tale of two pawn.
Now, your losses are on someone else's balance.
Generally speaking, airdrops are kind of pointless anyways.
I'm in the trading firms who are very involved.
I like that ETH is the ultimate policy.
DeFi protocols are the antidote to this problem.
Hello, everyone. Welcome to The Chopping Block. Every couple weeks, the four of us get together and give the industry insider's perspective on the crypto topics of the day. So quick intros: first you've got Tom, the DeFi maven and master of memes. Hello, everyone. Next we've got Robert, the crypto connoisseur and tsar of Superstate. GM, everybody. Then we've got Tarun, the gigabrain and grand poobah at Gauntlet.
Yo. Finally, I'm Haseeb, the head hype man at Dragonfly. So we are early stage investors in crypto, but I want to caveat that nothing we say here is investment advice, legal advice.
or even life advice. Please see choppingblock.xyz for more disclosures.
Okay, so we finally got into the post-ETF hangover.
I think as much as we're tired of the ETF,
it looks like the market is even more tired of the ETF
because the market has just been basically puking
for the last, I don't know, three, four days.
BTC is down about 20% from the high,
so it was at 49K on the high side.
It was threatening to touch 50K.
And as the ETF has been live,
it's just been kind of slowly drooping downward. Today, as of press time, it's something on the order of about 40K, kind of circling up and down.
It hit a low of, I think it dipped below 38.
So a lot of the downshift in BTC has been blamed on GBTC.
So if you recall from the previous shows, we were talking about the net inflows to the other
ETF products.
I think BlackRock finally hit a billion in net inflows and almost all the other products
have been net inflows.
but that has not been able to counter the outflows coming from GBTC.
So initially, I think last week we tallied up the GBTC outflows, I think a day or two
after the ETFs had launched.
And it got into about 500 million of outflows.
And we were like, wow, that actually doesn't seem like that much given the 20-some-odd billion they have.
However, it looks like people were eventually waking up and realizing like, oh, shit, I'm paying
all these fees in GBTC.
It's time for me to get out.
And there's now an estimated something on the order of four billion dollars in total GBTC outflows, and it looks like that's not being balanced with inflows from the other ETFs. So a lot of these people are not just rotating out of GBTC into other ETFs; they're just selling and getting the hell out. This seems to be a big drag on the overall market. Apparently the FTX estate has sold over a billion dollars of GBTC shares. And so it looks like, if this is to be expected, there may just be more puking to go as GBTC continues its outflows, with many of these people who have higher cost bases rotating out of GBTC and just getting out of their positions.
Well, I'll be the first one to actually take a slight disagreeing position on this one.
So I don't think you can blame GBTC because the total net flows across the entire ETF and
ETP complex for all of the Bitcoin products is actually positive.
So, GBTC, you know, we can look at the exact data, you know, as of this morning was about, you know, $3.4 billion of outflows or so.
And there was about $4 billion across every single product of inflows.
And so, you know, in aggregate, the flows were, you know, call it ballpark close to zero over the, you know, over the last week.
No net money has been moving in or out of Bitcoin in the aggregate through these products. It is basically flat.
And the price is down dramatically, even with roughly flat flows.
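A quick back-of-the-envelope version of the flow math being argued here, using the rough figures quoted in the conversation (about $3.4B of GBTC outflows against about $4B of inflows across every product), not exact fund data:

```python
# Rough sanity check of the aggregate-flow argument: GBTC outflows vs.
# total inflows across all spot Bitcoin ETF/ETP products. Figures are
# the approximate numbers quoted in the discussion, in billions of USD.
gbtc_outflows_bn = 3.4   # ~$3.4B out of GBTC
total_inflows_bn = 4.0   # ~$4B into all products combined

net_flow_bn = total_inflows_bn - gbtc_outflows_bn
print(f"Net aggregate flow: ~${net_flow_bn:.1f}B")  # ballpark close to zero
```

Which is why, in aggregate, the flows look roughly flat even while GBTC alone looks dramatic.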
And this is just, in my opinion, a result of the fact that there's hype leaving the market.
And what we're not seeing are the flows outside of the exchange traded complex.
We're seeing only the exchange traded complex.
There's clearly net sales action in spot markets elsewhere.
And you can blame GBTC, as that's the product that's losing the most money,
but it just seems to be moving into other products, into, you know, iShares, moving into Bitwise, moving into, you know, all of the other products. It's really just a transfer from GBTC.
Yeah, if GBTC didn't exist at all, or if the SEC had declined, you know, Grayscale's application
while approving all of the others, we might see a significantly larger net flow in the ETFs.
Well, if you look at the flows basis, my understanding was that, like day one, day two,
it was quite positive.
Right. And basically it's been turning negative to balance out and end up zeroing out.
And I think this was the opposite of what people were expecting, which is that
GBTC maybe pukes day one, but then everything else starts, like, really gaining and we end up in
a net positive flow position. But it seems like we went from positive to negative,
which ended up net flat. Yeah. I mean, the bad day was the day where GBTC lost a billion dollars. And I think it completely transformed the narrative around the ETFs into one in which there were more assets at stake to leave the products than there were to enter them in the aggregate.
I think that's really transformed the narrative across the market and has led to a declining
price, even though the flows in the aggregate since launch are positive.
But the expectation was that they'd be even more positive.
Yeah, it's always a question of relative to expectations.
Yeah.
I mean, a billion dollars net sounds like a lot, but like for an asset that trades, you know,
$40 billion a day, like over several days is really pretty insignificant.
And yeah, there are a lot of forced sellers, like the FTX estate and things like that. But I also think there's a little bit of reflexivity to GBTC too, where if you think, hey, people who are at a loss are the ones who are selling, then if you add any sort of incremental, you know, negative sell pressure, more people are going to be at a loss and more inclined to rotate. So I don't think it's exactly a super efficient market. And I don't know what that sort of, you know, cost basis breakdown looks like for GBTC holders.
But I mean, people are sort of optimistic that, you know, volumes on GBTC are going down over time.
And so, you know, hopefully we're sort of reaching the end, but who knows?
Yeah, that seems plausible to me.
And I guess the other thing is, I think this market is probably going to continue being spooked
until we see the GBTC outflows stop and, like, it finally finds the equilibrium of, okay, these
were all the sellers.
They're done.
Now we go back to net inflows because all the other, you know, all the other ETFs are only going
to accumulate capital over the year as the, you know, financial advisors and allocators finally start
getting some of their clients into this product,
which I don't think we
expected to be all at once, right? I think it's kind of
a slow grind over the course of the year.
Tarun looks so bored right now.
Tarun hates the Bitcoin ETFs.
It's just like.
Tarun, wake up, wake up.
Please tell me. Please kill me if I have to hear.
This will be the last show where we talk about
the Bitcoin ETFs for at least one show.
I don't know if that's true.
We'll see. I hope that's true.
All right, all right.
We've made our sacrifice to the ETF gods.
I think we can move on now.
There was one much more interesting crypto-native story,
which was around a bug in one of the Ethereum clients called Nethermind.
So I think it's maybe worth doing a bit of backstory
just so people understand what the story is about.
So Ethereum, back in the day, Ethereum had one client.
And when we say client, we mean the program that people run
in order to validate the Ethereum network.
Back in the day, Ethereum basically just had Geth,
which was the Go Ethereum client
written in the programming language Go,
which basically existed from the very beginning of time,
essentially when Ethereum first came into existence.
When Ethereum went to Ethereum 2.0,
they split up into two clients.
One is called the execution client,
and one is the consensus client.
The consensus client runs basically the staking
and the consensus layer,
the what was called the beacon chain,
and the execution client actually runs the code on Ethereum.
And right now, there's been this big hullabaloo around what's called client diversity,
which is basically how much concentration is there in which client everybody is running
to actually validate the Ethereum network or to run execution on the Ethereum network.
On the consensus layer for Ethereum, there's actually very good client diversity.
There are many different clients like, you know, Prysmatic Labs and, you know,
Nethermind and whatever, all these different people have clients.
And there's a good distribution where I think it's like 40, 30, 20 or whatever.
It's pretty good.
The execution clients, however, are a very, very different story.
On the execution clients, more than, what is it, like 90%, or like 85% or something?
78% are Geth.
78%, I see.
Okay.
So about 80% of the clients that run execution on Ethereum are running Geth.
So now Geth, again, it's the oldest.
It's also the one that most other networks that forked the EVM use. So most of the rollups, you know, Avalanche, you know, Fantom, all these other guys, as far as I know, pretty much everybody runs Geth. It's just the most battle-tested and it's the
most used. Now, the problem with this, and so this came to a head on January 21st, there was a
critical bug in Nethermind, in their Ethereum client, which powers roughly 8% of validators.
And now, Nethermind is a minority client. It's not the most used client. But this got people
worried that, like, hey, what if this bug, instead of being in Nethermind, was in Geth?
Ethereum was able to continue running because Nethermind was such a small portion of the Ethereum validator set.
But if this bug was in Geth, then this would halt, this would completely halt the network.
And it would mean that Ethereum would stop producing blocks and it would just get stuck
until somebody would go in and fix the Geth bug.
And there would be no realistic way that people could rotate clients in that period of time,
rather than just waiting for the bug to somehow get fixed.
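For listeners wondering why these market-share numbers matter so much, here's a minimal sketch using Ethereum's standard proof-of-stake thresholds (a bug taking out more than 1/3 of validators stalls finality; a buggy client running more than 2/3 could even get a bad block finalized). The shares and wording below are illustrative, not live network data:

```python
# Illustrative classification of how bad a client bug is, based on the
# share of validators running that client. Thresholds follow Ethereum's
# standard 1/3 (liveness) and 2/3 (finality) consensus conditions.
def bug_impact(share: float) -> str:
    """Blast radius of a crash bug in a client with this validator share."""
    if share > 2 / 3:
        return "supermajority client: a bad block could even be finalized"
    if share > 1 / 3:
        return "finality stalls until the bug is patched"
    return "chain keeps finalizing; only the affected validators drop off"

print(bug_impact(0.08))  # a Nethermind-sized minority client
print(bug_impact(0.78))  # a Geth-sized supermajority client
```

This is roughly the asymmetry the discussion turns on: the Nethermind bug was survivable, while the same bug in Geth would not have been.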
And so this has led to a big conversation in Ethereum
about should we work to intentionally increase client diversity
and move away from the monopoly that Geth has over execution in Ethereum.
So a lot of people going back and forth.
One thing to also note is that this is pretty weird
in the sense that almost no other blockchain has client diversity at all.
So if you look at Bitcoin, Bitcoin Core,
is the Bitcoin spec. There is no other client that has any meaningful market share. You look at other
chains like Solana or NEAR or whatever. None of them have any other client. There's only the canonical
client effectively. So Ethereum is unique in having other clients to begin with, but it's increasingly
being talked about as, hey, maybe we need to push as consumers of staking services or, you know,
the exchanges, push them to start adopting other clients to enforce client diversity. So what do you guys think about this whole client diversity debacle and debate, and where do you feel like
this is going? Do you think that client diversity can happen or is this a pipe dream?
Well, I personally don't think that, you know, client diversity is necessary. What you need is
operator diversity, geographic diversity, you know, resilience and strength through numbers.
It doesn't matter if you have 12 different clients running the network. It's about the end of the
day how many different validators are there and, you know, having diversity,
and resilience from how wide your validation is.
You know, I think it's almost safer to have one completely, you know,
battle-hardened client that everybody's focused on as opposed to assuming that, like,
through having multiple clients, like, you know, it doesn't matter if there's an issue.
You know, yes, Bitcoin Core moves slowly, and yes, Geth moves slowly as well.
But implementing clients in the first place is unbelievably complicated.
Like the effort that it would take to create, you know, I think multiple new clients,
hypothetically, would be astounding.
Implementing, you know, the Ethereum, you know, spec, so to speak, is not easy.
It's like not trivial.
The odds that you get it wrong, I think are significantly higher from a new client that's
originated from scratch than, you know, an existing one.
You know, when I entered, you know, the Ethereum world in like 2016, 2017, this was actually a pretty big conversation back then.
And, you know, I feel like the conversations died out over the years.
And it was, oh, well, we need to implement this in multiple languages.
You know, Geth was Go.
How do we implement this in Rust?
How do we implement this in Java?
How do we do this in like different programming languages?
Because the idea is that one language itself wasn't dependable.
I mean, I think that's a risk that can be taken.
You know, and I think, you know, it's almost more reasonable to have the entire community get behind Geth, make it, you know, strong and perfect, than to try to spin up new clients, which is a Herculean effort. And the amount of effort that goes into getting it right, I almost think, you know, potentially it can't even be done safely at this point.
Yeah, I would mainly echo that, although I think, like, if you look at the history of mission-critical open-source software.
So like compilers, Linux, operating systems,
you know, you generally do see this thing
where there's like some core component that is conserved.
There's very, very, very few sort of like core kernels.
Like there's all these people who make research kernels in academia.
But there's not really like something that's used in millions of production servers
that differs that much from the core Linux kernel.
Obviously, you have modules and stuff in the operating system that are different.
But there's also the same thing with compilers.
It took so long for there to be a competitor to GCC in LLVM.
And there's sort of this question of why you want to do it.
There's usually a specific reason when people take a mission-critical piece of software and want to rewrite it.
Linux did it because it got over licensing and patenting issues from Bell Labs and Unix
and then Windows eventually.
That makes sense, right?
That's a reason to make another operating system.
It's cheaper.
You know, open source AI, it's the same thing.
It's like getting around sort of the licensing agreements and stuff.
But if you look at other mission-critical software, it's very rare that there's a reason to do it that's like, hey, we want to make sure that there's kumbaya for every possible
programming language, right?
Like, there's a reason almost all compilers still just use the C++ or C compilers at the bottom of the stack and then write stuff around them as opposed to writing their own
from first principles.
This is very easy to make a mistake, and the mistake is catastrophic and very hard to find.
On the other hand, I do think there is some benefit.
So, sorry, another reason that people will make mission-critical things that are not just a closed-source competitor is extensibility.
So like you may, in order to make a system more modular, so you have like the core component and then things that people can extend, you may have to rewrite the main thing.
Because, you know, it was initially written as this monolithic single code base.
And so you can't really break it apart into pieces.
And there's a reason to make another one.
So in the compiler case for LLVM,
that was sort of the big deal, right?
It separated the front end and back end more cleanly.
And so people could build compilers for other programming languages using it.
And so it kind of had this nice little feedback loop.
I don't really see an argument for that here either.
It's not like adding other clients makes you more modular.
It does.
Adding other clients does give you some new functionality.
You can double-check particular implementations of very core cryptography.
You can make sure that, like, hey, multiple people have seen it in different languages and come to the same thing.
So, like, everyone's agreeing on the math correctly.
But I think there's sort of, yeah, there's kind of always been this war, like Robert was saying.
I think it's only become kind of more interesting with staking derivatives and also with the fact that there are tons of
forks of these clients for different purposes that exist and maintaining the forks,
the forks you could think of as like the modular components, right? Like I have the main thing.
I can add these other pieces and use it for my use case. So I guess long story short,
I think client diversity is generally a bad idea. I think it is a good idea from a as an
academic exercise of proving, hey, there's like these faults that you can correct. But I don't
think it's necessarily a good production software exercise because like I said, I think there's only
really two reasons you rewrite mission critical software. One is licensing, getting over licenses,
and the other is kind of this modularity thing. And that neither of those applied in this scenario.
I thought that the Linux kernel analogy is actually really apt. The difference is that, like, the Linux kernel does not update nearly as frequently as something like Geth, right? Geth is expected to be this sort of core that then people, you know, sort of, sort of...
I think Linus Torvalds from the 90s would be offended by what you just said.
No, I think, if anything, you know, that's the whole point: they're very protective, and he's an asshole about the kernel, so that it is solid and robust.
Yeah, yeah, I just meant in the 90s it was updating like-
Oh, sure.
Everyone does crazy shit in their 20s, so.
But I think it really comes out in like an incentive issue, right?
If you look at the clients today, and actually, to your point around, around Geth,
you remember back in the day, there was also Parity, right?
And the idea with Parity is that it would be more, you know, sort of for-profit, you know, Red Hat/Fedora kind of style.
And like, therefore there was sort of a profit motive to it.
There's maybe a story around monetization.
These days, you know, all of the clients, right, are basically sort of research or grant-driven
development.
There's, there's, there's Reth, which is, you know, coming out of paradigm.
So it is possible to build a new client, but there's not really a profit motive.
There's not really a business incentive.
There's not really, you know, anything sort of self-serving around developing it, and therefore there isn't really a, you know, desire or movement or any reason to,
hey, have a competitive marketplace of these things. These are kind of like R&D projects,
almost sort of like nonprofits in a way. And so it's like, you know, how are you going to
incentivize people to build new clients and run new clients if there isn't really, you know,
any sort of marketplace for these kinds of things. So I will gently take the other side of that,
although I'm somewhat sympathetic to the arguments you guys have levied.
I think if you look at the execution layer, it kind of looks like, okay, well, clearly this is impossible.
There's just naturally going to be centralization in the most robust version of the Ethereum client, which is Geth, in the end.
If you look at the consensus layer, it's kind of the existence proof that that's not necessarily true.
The consensus layer is pretty dispersed.
There's not a lot of concentration in a single client.
And, you know, Tarun, you're making the point that, like, look, you know, something that as actually mission critical,
you have one implementation and that's the implementation.
And that's the view that Bitcoin has always taken is that, look, Bitcoin Core is the spec for Bitcoin.
There is no external thing that is Bitcoin.
There's just Bitcoin core.
Literally this piece of code is the is the spec for the protocol.
Ethereum does not take that view.
And I think in a way that is, in some ways you could say it slows it down and other ways you
could say I think it makes it more robust.
Also, to be fair, I think Bitcoin is a weird example nowadays because now people are making all these custom clients for ordinal support and
L2s and stuff.
Bitcoin actually I think is evolving more quickly than you think right now.
But I agree like the three year ago vision is sort of like Linux.
It's sort of like Linux in the way you're describing, right?
Where like core is the kernel and you can kind of add.
Yeah, yeah, yeah.
But I think these add-ons now, thanks to ordinals, have been seeping more into the innards.
True, true, true.
But I mean, you know, RSK was doing that back in the day.
And there's all sorts of other stuff.
Like lightning is kind of this weird.
Also,
the thing that's worth remembering is the first Ethereum client was neither in Rust nor in Go; it was C++ Ethereum, written by Gavin Wood.
And it's very important to remember that Ethereum itself flipped its client pretty quickly.
And that's a historical vestige that's quite different than Bitcoin.
Right.
There was also EthereumJ, which was, I think, what Tron originally was a fork of, EthereumJ, which is the Java version.
Yeah, so there have been many Ethereum clients over the years, and they've come and gone.
But I guess the point that I'm trying to make here is that actually I thought the analogy you were making, like, okay, mission critical software means there's one implementation.
The one canonical example that I actually remember learning about when I first got into crypto was space shuttles.
Space shuttles actually do precisely this, which is that they have, I think it was four implementations by different programmers of the same mission critical code.
Because a space shuttle, it's like, if there's a bug, at least especially,
back in the day when you can't do over-the-air updates.
If there's a bug, it's like game over.
The space shuttle crashes, there's billions of dollars down the toilet.
And so they have four implementations of the same code
or the same program,
and they run Byzantine fault tolerance over those,
like a Byzantine fault-tolerant algorithm,
over those four implementations of the same code,
because, of course, there can be bugs,
there can be cosmic rays, flipping bits,
and all sorts of craziness.
And that's what, like, mission-critical comes from space shuttles.
That's where the term comes from.
And that is the place where, yes, you actually do want this re-implementation of code
because it's very, very important that this thing always works.
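The space-shuttle scheme described above, several independent implementations with a vote over their outputs, can be sketched in a few lines. The three implementations and the injected bug are made up for illustration:

```python
# N-version programming sketch: run independent implementations of the
# same function and take a majority vote, so a bug (or a flipped bit)
# in any single implementation gets outvoted.
from collections import Counter

def impl_a(x):
    return x * x

def impl_b(x):
    return x ** 2

def impl_c(x):
    return x * x + (1 if x == 3 else 0)  # deliberately buggy at x == 3

def majority_vote(implementations, x):
    results = [impl(x) for impl in implementations]
    value, votes = Counter(results).most_common(1)[0]
    if votes <= len(results) // 2:
        raise RuntimeError("no majority: implementations disagree")
    return value

print(majority_vote([impl_a, impl_b, impl_c], 3))  # buggy impl_c is outvoted; prints 9
```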
Now, in normal software, right, if I'm just running like an operating system.
This is why I left the trapdoor of like the academic exercise of using it is good.
I think in production, it's kind of annoying.
I think the thing about space shuttles that's very nice is they have a finite lifetime.
So you need it to work for this, you know, finite lifetime.
I think the problem with blockchains is that they have this, like, perpetual nature that makes running multiple versions a lot more hairy to deal with.
Yes, it's true.
And obviously, Ethereum moves pretty slowly.
I mean, obviously, Bitcoin moves slowly too.
So maybe it's more a function of age than of the fact that they're multiple clients.
But you can tell Ethereum development moves very slowly because of all the coordination
that's required across all of the client development.
That said, that's just kind of where Ethereum's at.
You know, like I would never recommend Solana or, you know, any new generation of blockchain,
start by saying, great, let's go multi-client and support multiple clients simultaneously,
although Solana is trying to do that a little bit with Firedancer,
but they're still saying, look, look, we're not going to slow down for FireDancer to catch up.
We're just going to keep iterating and keep improving the tech.
And I think that's right when you are a startup, when you're kind of in your sort of first flush as a new blockchain trying to get your footing.
But I think where Ethereum's at, I think it's possible.
And you're seeing it right now, like there was this tweet thread from, who is it?
Who is the guy who yelled at Coinbase and was like, I'm pulling out all my stake.
From Rocket Pool?
No, no, no.
Let me see who it was.
Whatever.
I don't remember.
There was some guy who was like, hey, I'm pulling out my money from Coinbase because I don't like the fact that Coinbase is all running on Geth.
And Brian Armstrong replied and said, hey, we're going to look into this and fix it and make Coinbase Cloud go multi-client and make sure that
we're staking through a different validator.
It's a DC investor.
DC investor.
Yeah, yeah, exactly.
DC investor.
So now, look, it's one guy.
I don't know that there's, you know, this flood of people who are going to be following
in his footsteps.
But I think it's relatively easy for a small number of players to just kind of feel the zeal
that's coming at them and decide to change their minds.
The same way that in Bitcoin mining pools, people just got mad on Twitter and then the things
changed, even though the incentives weren't really there, right?
You know, look, the bug was not in Geth.
The bug was in Nethermind, which is a minority client.
There was a bug previously in Bessu, which is another minority client.
Most of the bugs are in the minority clients.
So the reason why Geth is dominant is not because people are lazy.
It's because people are rational; they know this is the most battle-tested
and the oldest of the realistic clients that one can use.
So people are smart.
People are maximizing their own profit.
But if consumers can change the incentives and say, hey, it's really important to me
that you actually lower your usage of the majority client.
It's kind of like the Ethereum version of ESG.
I think it can work and I think it's already starting to work.
I think it's not that I don't think it's possible.
It's that I think investors have never fucking done DevOps in their life
and had to do an on-call rotation.
And all I got to say is fuck you people who have never had to do that.
Because it is very stressful.
Let me tell you.
In many different contexts, it's like the worst.
And it's usually just some, like, token asshole
yelling at Peter S,
one of the main maintainers of Geth
about like shit like this.
And I'm like, I don't know how that guy does the thankless job.
Not only does he have to deal with the emergency stuff,
he has to deal with these fucking asshole air drop farmers.
Like, you know, it is kind of a very thankless thing to work on.
On Bitcoin?
Because they're on Ethereum.
No, no, no.
Sorry, sorry, so sorry.
On his Twitter, like, if you were seeing him complaining about tokens.
Oh, oh, oh, oh.
You know, like the airdrop farmers will just be like giving him shit and you're like,
you wouldn't have any airdrops without this guy.
It's just, they're just so dumb.
Like, it's actually impressively, like, idiotic in some ways.
Yeah.
I mean, look, it is, it's obviously technically possible to have client diversity.
I think, you know, the point is it's not natural.
It's very inefficient.
And you have to sort of force it and be willing to eat the inefficiency.
You're buying a ton of tail-risk insurance, which you're basically never going to cash in. And so you're like, hey, this is just something that we're going to eat. And, you know, even the, you know, consensus client example, like, Prysm and Lighthouse are pretty much a duopoly. So it's not
that actually diverse. And those were also, that was also a very concerted effort with ETH 2, as you said, where large grants from the Ethereum Foundation, from ConsenSys, went out to, like, build
these things. If you went through like the ETH staking flow on Ethereum.org, they, they pushed you
to choose a more diverse client. Like it was a very considered effort.
And so I think if there is willingness to basically, you know, burn cash for the sake of sort of this
tail-risk insurance, then yes, you absolutely can do it. I think it's, my question is
more, how can you make this sort of self-reinforcing? How can you make the market desire client
diversity? How can you make entrepreneurs want to go out and make a brand-new client?
I don't really know what that answer looks like today. If you have a answer for that, then I think
you kind of have a self-sustaining solution. No, that's a fair point. I don't know that you're ever going
to get an economic mechanism to make this kind of thing happen, right? And, like,
blockchains, I think they are about tail risk, because the whole point of a blockchain
is that it's up all the time, it's usable all the time. And
the one thing that could really shatter Ethereum's story is if it fails in a catastrophic way.
At the same time, like, look, it's almost certainly the case that there are bugs in Geth.
You know, like, there's no way that there are zero bugs in Geth. I mean, we've
had bugs in OpenSSL and the Linux kernel and things that have been around much, much longer than
Geth and have had many fewer eyeballs on them over that period of time. So there's just no way
that there's literally zero and we'll never find another bug in Geth ever again. So there will be
bugs. And at some point, we are going to have a bug that ends up causing some kind of consensus
failure. You know, like, it will just happen. If it's not a chain split, then some kind of massive
downtime for Ethereum. And so I think it's a matter of creating the
resilience to what I think is more or less inevitable, at the expense of, yeah, we're
going to eat more complexity and we're going to move slower in the meantime. But I
don't think Ethereum at this point is about being the fastest to iterate or to move. I
think it's probably wrong for Ethereum to prioritize that at this point in its
life cycle. Yeah, maybe. But I think there's also, you know, a sort of, you know,
bandit problem type of thing here. Like, I have a fixed amount of
resources. I have a bunch of different places I can spend them. I could spend them on having
10 clients, or I could spend them on having one client but getting danksharding working, or 4844.
And from an engineering organization standpoint, I think it's kind of, yeah, it's a different
tradeoff point on the tradeoff surface, right? Like, the Solana version of this is kind of interesting,
because they somehow outsourced clients to other people, which is bizarro.
Well, I mean, yes, but they also have Jito.
They also have the forks of the Solana Labs validator.
I think, like, Solana is actually much more diverse than you think.
All the people doing SVM roll-ups also have kind of client forks.
It's like, it is starting to look like Ethereum's ecosystem.
Probably the only L1 that has, like, that many client variations.
But I just think, like, you have to remember, like, whenever people are complaining about, like,
oh, my feature didn't make it into mainnet.
Like, why do we have no bandwidth on L2s, and we need alt-DA layers?
You know, like, all of that stuff stems from the fact that we split the fixed engineering pie,
the fixed budget, across many places, and so...
And look, look, look.
It's a values judgment, right?
The community made that values judgment.
But hold on.
I'm just pointing out.
For Solana, yeah, for Solana, my understanding, I mean, correct me if I'm wrong,
but my understanding is, like, everything is derived from the Solana core client.
And, like, they added little things, the same way that people have added stuff onto Geth.
Oh, no, no, no, no, no, no, no.
It's all still the same code base.
And so many little...
Like Jito, Sol...
Jito is added.
It's like...
But Firedancer is totally different.
Firedancer, in C++.
Yeah, Firedancer is a ground-up rewrite,
but Firedancer, they, didn't they constrain their initial launch to say,
look, we actually, a bunch of the stuff, we're just going to run the original code
and then we're just going to add, like, modules over time because the ground up rewrite was
just way too big for them to actually...
But they changed, like, everything.
Even, like, the erasure codes are completely rewritten.
Like, I think you should really... In the V1. Yeah. I mean, the Reed-Solomon implementation there is
completely different. So I think, at least from what I see on their GitHub, which, you know, is my
North Star on this, I think, like, you should really consider Firedancer a complete, you know,
ground-up rewrite. Yeah, that's my, that's my assumption is that they want to
eventually get to a complete rewrite where they jettison all the old code. Yeah. But my understanding
was that they were basically like, look, we're going to chew up, we're going to bite off something
that we can actually chew initially, as opposed to waiting until we have the entire client
rewritten. But yeah, look, fair point. I understand. Obviously, Firedancer is not live.
Well, I just, I just think it's interesting that they made different, you know, engineering
choices, right? Like, Firedancer chose particular optimizations they wanted to do. They're going to rewrite it,
whatever. But they didn't hold up hard fork inclusion, right? It wasn't like they had
engineering resources that would have gone towards adding new features for the next hard fork
into maintaining another client, right? Whereas in Ethereum, I do think that's actually true.
I really think there's a, I don't think the pool of people working on clients is growing
anywhere near as fast as like transaction demand is growing. And you have this finite resource
and you have to allocate it,
and you chose an allocation as a community
to spread it across clients,
but you could have imagined an alternative universe
where all these features that everyone talks about,
which aren't implemented,
actually are already in the chain
because instead of rewriting the client X times,
you actually added the features.
So there's also this kind of...
What are all these features that you're...
I mean, danksharding and all of the data availability-related stuff.
Are you kidding me?
Like that, there's the entire, there's tens of billions of dollars of market cap that have arisen because of how slow this is.
Never forget that, right?
Like literally because people just didn't want to invest time.
So it is a management thing.
It's an organizational behavior thing of like we have finite resources and we're treating them as if they're not finite.
But they actually are.
And so what's the opportunity cost?
Well, it's this thing.
And then so now we have a DA layer war.
So actually, maybe this is a good transition to talk about Dencun. So Dencun is the name for the upgrade
that is going to be actually shipping proto-danksharding. So proto-danksharding, for those who are not
familiar: long story short, this is essentially what's going to implement EIP-4844, which is the
blob storage. So essentially, what this is going to do is, right now, if you're a roll-up and you
want to post some data to Ethereum, you have to pay a lot of money,
because basically you're taking up the same space that every other computation in Ethereum is taking up
by writing volatile storage onto Ethereum.
In a proto-danksharding world, there's going to be a separate lane that is only going to be
for data availability, basically meaning it's going to be short-term storage where you can just
dump some blob of data, arbitrary data.
Ethereum doesn't care what you're putting there; it's just anything.
And it's stored and retrievable for a short amount of time, essentially.
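The capacity of this blob lane can be sanity-checked with some back-of-the-envelope arithmetic. A rough sketch, assuming the EIP-4844 launch parameters (128 KiB blobs, a target of 3 and a max of 6 blobs per block, 12-second slots); later danksharding stages target much more:

```python
# Rough DA throughput at the EIP-4844 launch parameters.
BLOB_SIZE_BYTES = 4096 * 32   # 4096 field elements x 32 bytes = 128 KiB per blob
TARGET_BLOBS_PER_BLOCK = 3    # EIP-4844 target
MAX_BLOBS_PER_BLOCK = 6       # EIP-4844 max
SLOT_SECONDS = 12             # Ethereum slot time

def throughput_kib_per_s(blobs_per_block: int) -> float:
    """Sustained DA bandwidth if every block carries this many blobs."""
    return blobs_per_block * BLOB_SIZE_BYTES / 1024 / SLOT_SECONDS

print(throughput_kib_per_s(TARGET_BLOBS_PER_BLOCK))  # 32.0 KiB/s at the target
print(throughput_kib_per_s(MAX_BLOBS_PER_BLOCK))     # 64.0 KiB/s at the max
```

So the raw blob lane at launch is tens of kilobytes per second sustained; larger figures refer to later stages of the danksharding roadmap or to external DA layers.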
This is more or less ready to go.
I think people are projecting right now
maybe, like, end of Q1.
I think this is projected to potentially hit mainnet.
There was a testnet trial run that was taking place on Goerli.
Apparently it didn't go very well,
and so they're now kind of going back to the drawing board;
that may end up delaying things.
But this is projected to lower the data availability costs
for roll-ups on Ethereum significantly.
Now, that being said, from what I've read,
the expectation for the throughput,
meaning the total amount of data that's going to be writable
to the blob storage on Ethereum
is something on the order of 0.6 megabytes per second,
something like that.
We don't know the exact numbers right now,
but it's not huge.
Relative to what people are talking about
for data availability layers,
like Celestia or Avail or EigenDA,
those are projected to be more in, like, the many megabytes per second
of total data throughput that they can withstand.
Ethereum is going to be relatively low,
and so there's a lot of projection
that even in a proto-danksharding world,
although it will lower costs for roll-ups significantly,
the total demand for DA is so high
that there's still going to be,
we're still going to have to use external DA
in order to withstand all the demand for data availability.
And see, usually you are the person
who forces everyone to stop and define some terms,
but I'm going to flip the script on you
and make you define data availability for the listener.
I used it first,
I'm going to still make you do it.
Okay, thank you.
No, good shout.
So data availability, it's the new hot term that everybody's talking about.
Celestia, EigenDA, and Avail are all data availability layers.
Data availability is basically, the very simple explanation is that there are certain kinds of storage
that are not needed for long-term storage, but basically short-term storage or medium-term storage.
So for example, for roll-ups, especially optimistic roll-ups, oftentimes you have what's called
this challenge period, where you need to be able to see the data that has been committed to
this layer two for up to a week or two weeks or whatever. Different
roll-ups parameterize it differently. And so data availability is a mechanism to prove this data has
been stored and is available for up to some short period of time. It's not forever, like a network
like Filecoin or, what's the infinite storage one? Arweave, right? Arweave, right. Or
Arweave, which claims that it stores data forever, or Filecoin, which stores data up to some
long contract period. But instead, it's like, hey, this data is going to be available for X
period of time, and then, you know, no bets are off on whether or not this data will continue
to be available. And the replication and the availability is very important to roll-ups in
particular. So almost everybody, when we talk about data availability, we're mostly pointing at
roll-ups. Almost nobody else really has this data access pattern. But roll-ups are the big story for
Ethereum scaling, and that's why data availability has become such an important idea that many
different teams are trying to add on to the capacity of Ethereum in a proto-danksharding world
to give it more data availability capacity.
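As a toy illustration of what "available for X period of time, then no guarantees" means, here is a minimal sketch; purely hypothetical code, not any real DA layer's API, with the class name and retention logic invented for illustration:

```python
import hashlib
import time

class EphemeralBlobStore:
    """Toy DA store: data is retrievable only inside a retention window."""

    def __init__(self, retention_seconds: float):
        self.retention = retention_seconds
        self.blobs = {}  # commitment -> (data, stored_at)

    def publish(self, data: bytes) -> str:
        """Store a blob and return a commitment the chain can reference."""
        commitment = hashlib.sha256(data).hexdigest()
        self.blobs[commitment] = (data, time.time())
        return commitment

    def retrieve(self, commitment: str):
        """Return the blob if it is still inside the window, else None."""
        entry = self.blobs.get(commitment)
        if entry is None:
            return None
        data, stored_at = entry
        if time.time() - stored_at > self.retention:
            del self.blobs[commitment]  # past the window: all bets are off
            return None
        return data

# A ~two-week window, like an optimistic rollup's challenge period.
store = EphemeralBlobStore(retention_seconds=14 * 24 * 3600)
c = store.publish(b"rollup batch #1")
assert store.retrieve(c) == b"rollup batch #1"
```

Real DA layers layer erasure coding and sampling on top of something like this, so that light clients can check the data really was published without downloading all of it.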
So what do you guys think is going to happen when we get data availability increases?
A lot of people are now projecting that, because of the total demand and the total number
of roll-ups increasing so much, we may actually not even get the discount
in DA prices that people are projecting with proto-danksharding,
because basically, as soon as we get more capacity,
demand just increases to fill it up.
So, do you know what Braess's paradox is?
Oh, is that the traffic?
Yes, exactly.
It's one where when you add a road,
you actually increase congestion in certain networks
under certain types of flows.
I feel like this is a nice apt version of this.
Where sometimes you build a road,
and actually all you do is increase congestion, because everyone wants to take the fastest road or the safest road or whatever.
And so they all just kind of keep doing that.
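For the curious, the classic textbook instance of Braess's paradox (standard numbers, not from the episode) can be checked in a few lines:

```python
# Classic Braess's paradox: 4000 drivers travel from A to B.
#   Route 1: A -> X on a congestible road (T/100 min), then X -> B fixed 45 min.
#   Route 2: A -> Y fixed 45 min, then Y -> B congestible (T/100 min).
# T is the number of drivers on the congestible segment.
DRIVERS = 4000

# Without the new road, the equilibrium splits drivers evenly across the routes.
half = DRIVERS // 2
time_without = half / 100 + 45  # 20 + 45 = 65 minutes for everyone

# Add a free shortcut X -> Y. Taking A -> X -> Y -> B now beats either original
# route for each individual driver, so at equilibrium everyone takes it.
time_with = DRIVERS / 100 + 0 + DRIVERS / 100  # 40 + 0 + 40 = 80 minutes

print(time_without, time_with)  # 65.0 80.0: the new road made everyone slower
```

The selfish equilibrium after adding the shortcut is strictly worse for every driver, which is exactly the "build a road, increase congestion" effect described above.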
That does seem, I mean, on the face of it, it seems implausible today because almost everyone is using Ethereum as DA.
Yeah, everyone is using the main road, right?
So I think Braess's paradox only works if there's already multiple roads.
Yeah, you need roads with different amounts of traffic.
You need roads that are highways, that kind of have constant speed,
where the speed it takes you doesn't depend on how many other people are on the road, versus, like, single-lane roads where the speed you go at depends on that.
But you could argue that with a lot of the alternative DA layers now, in a world where you have multiple of them, you might actually start having this kind of Braess-like effect.
Well, what's going to happen is the newest data availability layer that, you know,
people get excited about will be the one with this paradox, because everyone's going to race to whichever
layer. Yeah, yeah, I agree, I kind of agree with that. Well, they're all fighting to be the layer. Everyone
wants to be the Braess paradox. You want to be the Braess data availability, right? That's the paradox. It's a
double paradox, in that it's going to happen because the road itself wants it to happen. Well, data
availability is kind of weird, because it is, in a sense, like a B2B thing, in that consumers
don't know where their data is being made available, because they don't necessarily care.
Yeah, exactly.
So, like, would that not be an indication that, like, actually the protocols themselves are likely
to be pretty rational and sort of allocate efficiently across all the roads, so to speak?
I mean, I think we're seeing certain roll-ups, or, like, people who have moved
some of their applications to their own roll-up.
Like, Aevo announced today they're using Celestia.
Aevo is, like, a large decentralized perpetuals
exchange. Disclaimer: I think everyone on this show is an investor. And, you know, I think
there was Lyra Finance a couple weeks ago. So you're starting to see, like, people who have real
users and are paying the DA costs suddenly be like, wait, I don't want to pay the DA costs anymore.
And I think the market is starting to flip into this fee mindset. Now, I was talking to someone today,
even if the DA layers lower the fees to, you know, Solana-like levels for all these applications,
like I think that's like the goal in some ways of like, you know, if we zoom out and you're listening to the show and you're like,
who the fuck cares about this data storage thing? Like, why is ephemeral storage versus long term who cares?
All you need to think about is this allows you to lower, you know, fees in some certain way.
Because it turns out these data storage fees are quite high if you make them perpetual.
But if you time-bound them, you can make them cheap.
And I still don't think that you can say that roll-up UX is, like,
roll-ups are, like, one-to-one with Solana just because the fees are the same.
Because I really think the UX difference is still non-trivial between the two.
I mean, how do you feel when you, you know, do you feel like if you went to a roll-up to use an
application, and the fee went down 100x, you'd be like, okay, great, I'm going to stay
here, versus moving to a cheaper...
Yeah, I don't know how much.
I mean, this is actually also an interesting argument.
I think, like, if the fee is low, it doesn't really change the UX.
At least not for me, if the fee is, like, 10 cents versus, like, a tenth of a cent.
I think for meme coin traders, that's the only place where it matters because it's like people
who are like, I have 10 bucks.
I want to buy 10 different meme coins right now.
You know, it's like, okay, then you're very, very fee sensitive.
Yes, yes.
But, I mean, if you look at, like, Binance Smart Chain,
or you look at, like, Polygon, like, fees are not...
Like, I don't think there's a lot of evidence
that you need to be...
Like, that basically demand is responsive
to, like, fractions of a fraction of a cent.
You know, like, I think, like,
maybe there's a theory that, oh, the guy with $10,
like, he can do all these things on Solana
he can't do on Binance Smart Chain.
But I just don't think there's a lot of actual evidence
for that story being a driver of
behavior.
Well, I'm just saying that this is something I,
this is something I feel like I hear all the time
that people are saying. It's like, oh, if our fees are like Solana's, then we will compete for the meme
coins. Again, I just don't feel like that's true, to me. But I'm willing to be wrong. It doesn't
feel like that's true to me either. I don't know, Tom, what do you think? Oh, I was going to say, I mean, I think,
you know, sort of like the whole log-versus-linear wealth debate, I think there's sort of an
inverse log, like, cost debate here too, where I think you're right. If you go from, like,
a tenth of a cent to a hundredth of a cent or a thousandth of a cent, it's actually so
negligible at that point. And I think, again, this argument applies to latency as well.
People say, okay, you know, on Solana you have, you know, a few-hundred-millisecond block time
versus, you know, one-second confirmation time on a roll-up. Sure, maybe if you're a high-frequency
trader, maybe you're trying to do, you know, NASDAQ on chain, that is a meaningful
difference. For people who are trying to send USDT back and forth to their friends or, you know,
mint an NFT, that is really not a meaningful difference. And so I just find it hard to believe that
that is going to be where people are sort of making their decision.
I think it's going to be much more around where do developers want to actually build
their next application.
Is this a place where they feel like they can find users and they can find capital and
they can find a great experience to build on top of?
And again, I think that's more where Solana succeeds versus a rollup is that it's, you know,
simple, it's monolithic.
You can come in and you can deploy.
But obviously the downsides of the SVM: people having to bridge over, you know, having to,
like, find new users and capital.
I think in many respects, like the cost of the tech are a very minor, you know,
determiner of usage.
And it's much more about sort of these other factors.
As I go back to your fee question, there's actually a Polymarket for the gas price per blob
one month after EIP-4844.
I don't know, like, what one blob corresponds to in terms of size.
Yeah.
Yeah, I have not done the back-of-the-envelope math to even,
like, grok the basics. Yeah, I didn't know what this means. Um, so yeah, I'm like, I'm like,
I think we actually are, like, one unit off from, like, an average, like,
transaction, or, like, you know, calldata amount, or, like, you know, storage size. But, um,
interesting that there's already a market here. Um, I guess if listeners know what a blob corresponds
to, um, in terms of practical usage, I'd be curious to hear. Okay. So none of us actually
know what this means. No, I'm glad. I'm glad we threw it up here.
Is like a roll up, is a roll up settling one blob per block?
Okay, a blob is 128, 128 kilobytes of temporary data.
Yeah, it's just 128K.
Okay.
This doesn't seem that off.
I mean, in this prediction market, it doesn't seem that off from vaguely what the costs are today in the status quo.
I mean, looking at the upper bounds of like 0.01 ether and stuff to 0.1 ether.
No, that is cheaper.
It is cheaper.
Yeah, yeah, yeah.
The market is pricing in a 10x reduction effectively.
Yeah, sorry.
Yeah, if it's 128K.
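For anyone doing the back-of-the-envelope math the hosts skipped: under EIP-4844, every blob consumes a fixed 2^17 units of blob gas in a separate EIP-1559-style fee market, so the cost per blob is just that constant times the blob base fee. A quick sketch (the 100 gwei fee below is an arbitrary example, not a prediction):

```python
GAS_PER_BLOB = 2**17   # 131072 blob gas per 128 KiB blob (EIP-4844 constant)
GWEI_IN_ETH = 10**-9   # 1 gwei expressed in ether

def blob_cost_eth(blob_base_fee_gwei: float) -> float:
    """Ether paid for one blob at a given blob base fee (in gwei)."""
    return GAS_PER_BLOB * blob_base_fee_gwei * GWEI_IN_ETH

# Example: at a blob base fee of 100 gwei, one 128 KiB blob costs
# roughly 0.0131 ETH, which is the kind of range a per-blob
# prediction market would be haggling over.
print(blob_cost_eth(100))
```

Since the blob base fee floats with demand, the interesting question the market is really pricing is where that fee settles once roll-ups migrate over.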
Let me zoom out.
Let me zoom out and look at this from first principles for a second.
So I think data availability does lead to different types of applications fundamentally.
I think everyone's focused on like, oh, at least L2 is being efficient.
But I also think it could lead potentially to very different things.
you know, when you look at like, what is the cost of writing something to a blockchain, some data?
The reason it's so expensive is because like you're not writing it to one computer,
you're writing it to like 200,000 computers and like forever.
That's like the notion that most people have about Bitcoin or Ethereum.
I mean, it's like, data is crazy expensive because, like, you're writing it all over the place
and you're writing it forever, versus writing it all over the place for a short amount of time.
That's way cheaper, right?
But it's still fundamentally like crazy expensive because you're writing it all over the place.
You know, to write one 128K of data to like, you know, an S3 bucket is, you know, nothing, right?
Like it's free forever.
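The point that cost scales multiplicatively in replication and duration can be made concrete with a toy model; all constants here are made up for illustration, and real pricing is a fee market, not a formula:

```python
def storage_cost(nbytes: int, replicas: int, days: float, unit_price: float) -> float:
    """Toy model: cost grows linearly in size, replication factor, and duration."""
    return nbytes * replicas * days * unit_price

BLOB = 128 * 1024  # one 128 KiB blob
P = 1e-12          # made-up price per byte-replica-day

one_server_20y = storage_cost(BLOB, replicas=1, days=365 * 20, unit_price=P)
everywhere_20y = storage_cost(BLOB, replicas=200_000, days=365 * 20, unit_price=P)
everywhere_2wk = storage_cost(BLOB, replicas=200_000, days=14, unit_price=P)

# Replicating to 200,000 nodes multiplies cost ~200,000x over one server;
# bounding the window to two weeks claws back a factor of ~500 (7300 / 14 days).
print(everywhere_20y / one_server_20y)
print(everywhere_20y / everywhere_2wk)
```

In this toy framing, time-bounding attacks one of the two multiplicative factors, which is why ephemeral DA can be so much cheaper than permanent on-chain storage while still being replicated everywhere.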
So I think what happens is you start to innovate on the data availability products.
Oh, it's not forever.
It's for a week.
It's for two weeks.
It's for, you know, two minutes.
You get different like applications entirely.
Like I actually think, you know, crypto gaming actually might be a weird like long-term beneficiary of like blobs and more transient storage in general.
Just because like, you know, if I'm making a video game, I don't need the data to persist for 20 years.
I need it to persist for like, I don't know, maybe a couple months in some sense.
And like maybe it just completely unlocks a new type of application there.
And yeah, maybe like there's a lot of on-chain like,
NASDAQ like trading that just doesn't function at all.
But when it's more transient, it does function.
And, you know, I'm curious to know what sort of like the use cases that erupt from this are
that aren't just like, oh, this was designed for L2s to post data that you need for exactly two weeks.
Yeah.
I mean, so far, I haven't seen a single, I don't think I've seen a single game.
I don't think I've seen a single application.
Yeah.
Claim to me that they're going to use DA for this alternate use case.
That said, the further away you get from
the more crypto-native stuff of, like, DeFi or layer twos or whatever, the less people even care about
any of these concepts. They're like, oh, yeah, I mean, look, my game's already centralized, so I might as well
just store all the shit myself. And there's a $15 billion, uh, market cap coin that says
there's some amount of care... A $15 billion market cap coin? Wait, what about Celestia? Yeah.
...care in these things, I'm saying. I'm just saying. Oh, oh, FDV, excuse me, FDV. Yeah, let's not say market cap.
FDV.
Yeah, sorry, sorry.
FDV, too.
Okay.
It was the market cap.
Yeah, look,
yes, yes, yes.
Okay.
Here's the other thing I would say,
and kind of going back to the previous point about performance, right?
Because I want to continue this thread of like how much are people sensitive to fees
and also how much are people sensitive to latency?
Because I think this is also a big part of the story about rollups and about Solana
and about why people are going to be moving away from Ethereum Maynet and these traditional
experiences.
I mean, this is also something, by the way, that
is unique to crypto, this fee-versus-latency trade-off in UX design, right?
Like, if I made Robinhood or Revolut, I don't really ever think about
that, right?
I just try to...
Well, you do in that you want to get the latency as low as possible.
Yeah, exactly.
Yeah, exactly.
Yeah.
Exactly.
Yeah.
Exactly.
But the data is plentiful because you just store it on one server.
Oh, totally.
And you can do it on limited scale.
I mean, ironically, I'm pretty sure that storing stuff on Arweave, which is supposed to
be forever, is cheaper
than calldata on Ethereum, which is supposed to be temporary, right? So, like, I think the absolute
externality is not how these things are being priced. It's just a market. And there are, like,
some constraints that are being set somewhat arbitrarily. Like, why is Ethereum calldata
more expensive than forever storage on Arweave? You can't really do anything with it. There are no, like,
other applications that are composable with that data, really, at least to my current knowledge.
Like, there's no applications composable with calldata, right? Calldata is transient.
Yeah, that's true. But, like, on Ethereum and
Solana and all these places,
like, the data is, like, super valuable, because it represents, like, token balances and the transfer of those tokens.
And it's, like, you know, extremely valuable data.
Whereas storing, like, a photo or a movie or something like that,
like, you know, it's not as valuable.
I also think there's kind of this weird thing that I still have not been able to philosophically get around,
which is like philosophically,
blockchain started as append only, right?
There's no notion of this like delete transient behavior, right?
Of course, you know, you eventually add that back on.
But the history of computing has always been this kind of, like, just-in-time storage,
just-in-time compute, right?
Like, that's what the cache hierarchy kind of means. You know, whatever,
you're not really a von Neumann architecture computer in any sense of the word in a modern computer.
But it's never been clear to me what blockchains
get from these types of caching optimizations.
Like I think L2s are an example, but it's sort of a blockchain eating a blockchain.
It's not like a caching thing where, without the caching, it would just, like, never work, right?
Like, L2s are working right now without blob space.
They're just more expensive in some sense.
So there's just kind of this weird thing to me, where I haven't seen something that isn't just, like,
what's the use case of this particular thing? Another blockchain.
You know, like,
we haven't quite...
that's true that is true
that said we love blockchains
especially on the show we love blockchains
that's the best that's the best
Christmas present is more blockchain
the chopping blockchain
yeah exactly
I'm just saying I would like to see some like
you know like oh like could you
how could you
you're ruining Christmas
I know I know I'm sorry I'm sorry
I'm just saying like
it would be cool if the like
to Robert's point of like, is there something else?
Because like, okay, yeah.
I agree.
What is there?
Tarun, you are in the wrong industry if you're asking, is there something else?
I think maybe you need to take a vacation, come back, renew your love of blockchain.
No, no, no.
I love them.
I think if there are more blockchains, that's fucking awesome.
I think, I just think, like, caching, right, enabled a lot of things in operating systems,
like window managers. Like, why do you have the UX that you have,
with, like, having multiple windows? Like, caching helps a lot with that, at, like, a very basic level.
That's, like, some application that was enabled, not just strictly, like, oh, I could, like, do more of this
compute, you know. And, like, I just feel like somehow that is not obvious to me in a lot of these
systems, like, where that comes from, other than fees. Like, the fee part I get.
But the UX, you know, I honestly would say something like Privy and, like, embedded wallets
is much better for user UX than this 10x fee reduction, because it makes it feel closer to
a Web2 app, or Solana.
And the fee reduction matters until some point, right?
People aren't elastic like what we were just saying, right?
Yeah, but we're not anywhere near that point for something like Ethereum, right?
Ethereum fees are crazy.
I agree.
But now suppose we take the DA layers, and we made the fees Solana fees.
I still think the marginal utility to the user actually goes up more from the Privy-style embedded wallet experience versus the fee, beyond... like, there's some threshold at which they completely flip.
Right. But here's the point that I was going to make is that L2s also enable a lot of this. Not necessarily, you don't need a privy style thing, but you just need the fact that, you know, L2s have these sequencers who basically run the chain, right? It's potentially even a centralized entity, although people want it to be decentralized sequencers over time.
One of the advantages of these sequencers is that you can give people these optimistic confirmations.
Instead of saying, okay, we're going to fully achieve consensus and do all this fancy shit.
Instead, we're going to say, look, we're going to tell you thumbs up.
You're in the blockchain, even though you're not actually in the blockchain yet.
But I promise you you'll be in the blockchain.
And if you're not, you know, you can at me and take some money.
I'll pay you some money.
I'll give you a refund later if it turns out you're not in the blockchain.
And this actually enables really great UX
and even lower latencies than Solana.
You can basically say, like, look, the moment you get,
the moment you ping my IP and I give you a checkmark,
you're good.
So that can get you down to like 100 millisecond latency
or like more Web 2 style latencies.
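This "thumbs up before you're actually in the blockchain" flow can be sketched as a toy sequencer with bonded preconfirmations. Entirely hypothetical code, invented for illustration; a real sequencer would sign these promises and post the batch to L1:

```python
from dataclasses import dataclass, field

@dataclass
class ToySequencer:
    """Toy optimistic confirmation: instant receipt now, settlement later."""
    pending: list = field(default_factory=list)
    included: set = field(default_factory=set)
    bond: float = 100.0  # funds slashable if a promise is broken

    def submit(self, tx: str) -> str:
        self.pending.append(tx)
        return f"preconfirmed:{tx}"  # instant thumbs up, no block wait

    def settle_batch(self) -> None:
        # Later: the batch actually lands on the chain.
        self.included.update(self.pending)
        self.pending.clear()

    def claim_refund(self, tx: str) -> float:
        # If a preconfirmed tx never landed, the user is paid from the bond.
        if tx in self.included:
            return 0.0
        self.bond -= 1.0
        return 1.0

seq = ToySequencer()
receipt = seq.submit("swap 1 ETH -> USDC")  # ~100 ms round trip to the sequencer
seq.settle_batch()                          # the promise is kept on-chain
assert seq.claim_refund("swap 1 ETH -> USDC") == 0.0  # no refund owed
```

The user experiences only the `submit` round trip; the slashable bond is what makes the instant receipt worth trusting before actual inclusion.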
You know, I think this kind of the intermediated architecture
where you have this optimistic layer
in between yourself and the chain confirmation,
I think this is going to be the direction
that all blockchains go over time,
or not all blockchains, but I should say all applications,
go over time, right? If you're a Dex, if you're a game, I'm not actually going to wait until
the blockchain says, yeah, yeah, I don't, even if I'm a game, I don't want, even if I'm on Solana,
I don't want people to wait 400 milliseconds. Like, that sucks. No game would ever say, great,
we have to wait 400 milliseconds because the, you know, block time says so.
Well, it does ask the server, you know, very quickly. Right. Did like you get the kill shot or
not or whatever. Exactly. But it doesn't wait 400 milliseconds to ask whether you got the kill shot,
right? It needs super snappy
latency, and it uses a lot of optimistic tricks to assume the answer is yes.
Right, right, right.
And if the answer is no, then I go and revert.
This optimistic thing is effectively the same as, you know, in some ways, what, you know,
caches in a processor do, right?
They, like, optimistically store a cache line, even if it doesn't know it needs to read it,
because it assumes there's some sequentiality, and, like, you'll get some benefit from it.
Well, it's more like the sort of superscalar processors.
For sure, sure, sure.
But I just don't... To me, there's still
kind of this, like, missing thing of, like: I open Phantom and I use Solana applications, and it's
very responsive. I mean, they have problems with transactions not landing; like, they have tons of
problems with, like, spam. So, like, let's ignore that. But the actual application usage is
just so much easier onboarding for a new user than an L2. On an L2, like, you kind of need an intermediary
lending you money on the L2.
Otherwise, you have to go through the canonical bridge.
It takes forever.
You don't have the same onboarding UX either, I feel like.
And that's something I have been thinking about.
I disagree.
There are exchanges that will give you a direct deposit on an L2.
Going to Base is easy.
This is like,
going to Base looks nice.
This is like a very, like...
On Binance, it's like almost every L2,
they support direct withdrawals.
Yeah.
I was going to say, imagine this is just like a,
this is like a twist on a Chinese room argument.
You know, you're saying, you're
sending transactions into a box, and, you know, on the outside it says, hey, transaction was confirmed.
Does it matter if it's just, you know, a single server sitting somewhere that's actually
processing all the transactions, or it's, you know, the entire Solana blockchain going through
consensus?
Practically, no, especially if the end result is, you know, going to get you to the same point,
you know, several days down the line.
Yeah, I just think, I just think somehow, like, features that improve UX seem to never get
any priority in these roadmaps.
And again, maybe we're back to where we started of like, hey, we have fixed engineering
budget.
Maybe we shouldn't spend it on duplicating the thing more than three times.
Going back to Vitalik's post about cypherpunk values, I think it is really like, yes,
you look, you're totally right.
And obviously, you know, four of us are investors and we care a lot about creating great products
as opposed to just, you know, great protocols.
But Ethereum is Ethereum because of the values that it espouses.
For sure.
I'm, I'm certainly not...
We're not focused on UX.
Like, the UX of Ethereum is shit.
No, it's like a C-plus.
Okay, well, that's, I mean, I don't know.
For a startup, that's shit, right?
Like, from the perspective of product building. And Bitcoin is even worse.
Yeah.
And Bitcoin, it's like, yeah.
Yes, you're right.
It's bad, and it's going to stay bad forever.
God bless Bitcoin.
Well, that I actually am, I would be willing to bet against.
I think on the next Chopping Block.
I think the Bitcoin roll-up world is kind of interesting lately.
Okay.
All right.
All right.
Interesting.
Maybe we'll have to revisit this.
But I guess to me it's just more like I want to kind of feel like the L2 progress is more
than just fees.
Because like I get the fees are important.
But I think the fees are not linearly elastic.
It's not like I decrease the fees 10x.
I get 10x more utility for a user.
And I somehow think like that keeps just being completely missed.
And in ways that, like, annoy me when I try to use some products.
Yeah, there's definitely diminishing marginal utility for reducing fees, but we're not there yet,
right?
We can still reduce fees and we will get a much, you know, much better UX.
Yeah, yeah.
I think that that gap is between like Solana and the roll-ups is where, or like Solana and
Polygon, right?
Like that's where there is this just inelasticity where nobody really cares if it's like
a tenth of a cent or a thousandth of a cent.
But people really, really do care between, like, it's $2 to do a swap versus it's
two cents to do a swap.
I think there's a lot of elasticity in that in that gap.
So anyway, we're up on time.
We'll be back next week.
I'm sure there's going to be no more ETF conversations at all in any way.
So look forward to...
For the record.
I feel like...
I feel like somehow someone on the internet will interpret what I said as being a Solana shill.
So I want to set the record straight.
Are you not a Solana shill?
I think there's duality where both ecosystems have their right...
And like, using products in both places gives you,
it just gives you a lot more insight into, like,
there are things they could learn from each other.
But instead, everyone's just, like, constantly beating each other up.
And, like, I think the code bases, the products,
like, if you use those and read the code,
there's a lot of lessons that could be learned.
And, like, somehow, I just think, like, in ETH,
this, like, obsession with optimizing DA,
I feel like at some point, the returns are going to zero, right?
And so we need to do something else.
And I think that's why people are excited about restaking because it's not, it's like it has like, yes, there's DA, but then there's more.
Right.
And then, like, it sounds like something a Solana shill would say.
All right.
With that, thank you for giving us the last word to ruin.
We're going to sign off.
Thank you, everybody.
