Bankless - 205 - The State Of Ethereum L2s
Episode Date: January 15, 2024. We're thrilled to be joined today by Paradigm CTO Georgios Konstantopoulos and founder of Conduit, Andrew Huang, who are on the show today to deliver a masterclass on the state of Ethereum Layer 2s, how they're going, and where they're headed. There's nobody better to take us through this technical and very educational episode on the state of L2s. ----- 🏹 USE PODCAST24 FOR 10% OFF https://bankless.cc/Citizen2024 ------ BANKLESS SPONSOR TOOLS: 🐙KRAKEN | MOST-TRUSTED CRYPTO EXCHANGE https://k.xyz/bankless-pod-q2 🛞MANTLE | MODULAR LAYER 2 NETWORK https://bankless.cc/Mantle ⚖️ARBITRUM | SCALING ETHEREUM https://bankless.cc/Arbitrum 🔗CELO | CEL2 COMING SOON https://bankless.cc/Celo 🗣️TOKU | CRYPTO EMPLOYMENT SOLUTION https://bankless.cc/Toku ------ TIMESTAMPS 00:00:00 Intro 00:05:59 L2 Vibes, How Are We Doing? 00:08:43 L2 Obstacles 00:11:47 Building Conduit 00:14:19 Strengths of L2 00:17:30 Few Rollups or Many Rollups? 00:21:29 Market Demands 00:24:47 The Evolution Of Superchains 00:30:47 Shared Sequencing 00:34:02 Universal Composability 00:35:57 Evolution and Benefits 00:38:35 Unionized vs Independent L2s 00:45:42 Benefits Of Staying in The Ecosystem 00:50:05 The Modular Conversation 00:52:02 Cheaper DA 00:56:13 RAAS Business Model 01:01:26 Scaling Out The Business Model 01:04:23 L2 Security 01:11:27 Modification 01:16:15 Dealing with Growth 01:17:56 What Comes Next 01:20:01 RETH ------ RESOURCES Andrew Huang https://twitter.com/KAndrewHuang Georgios Konstantopoulos https://twitter.com/gakonst ------ Not financial or tax advice. See our investment disclosures here: https://www.bankless.com/disclosures
Transcript
In general, my prediction would be that
2024 is the year where performance, you know,
we end the year basically, and performance stops being a differentiator.
Everybody will have figured out high performance.
Finally, now that we've learned how to modify nodes,
everybody will, like, figure it out in some way.
You know, not everybody will be in production,
few will be in production, the best teams only.
But over the years, what's happening is that the best technology
that was considered a moat is finally starting to be,
for lack of a better word, democratized and accessed by everyone.
Welcome to Bankless, where we explore the frontier of Ethereum's layer two roll-ups.
Today on the show, we have Georgios Konstantopoulos, the CTO and researcher over at Paradigm,
and Andrew Huang, CEO of Conduit.
Both of these extremely smart gentlemen have unique vantage points over the future of Ethereum's
roll-ups that need to be shared with the world.
Here is what you're going to hear on this episode today.
What is the state of Ethereum's roll-ups in 2024?
What's going right and what is still left to do? How will roll-ups recompose with each other?
What mechanisms are there to help with roll-up composability? And do roll-ups even need to compose with
each other at all? Or is that narrative just totally overblown? What about roll-up security?
Multi-client fraud proofs and multi-ZK provers? Why Georgios thinks we get them this year and why
Georgios thinks we're entering a golden age of layer two experimentation. I learned so much in this
episode. As soon as I'm done recording this intro, I'm going to go back and listen to it.
Before we get into this episode, though, we disclose. Ryan and I hold investments in some of the
layer twos mentioned today. We also hold ETH. You can see all bankless disclosures at bankless.com
slash disclosures. Now let's go ahead and get right into the episode with Georgios and
Andrew. But first, a moment to talk about some of these fantastic sponsors that make this show
possible, especially Kraken, our preferred crypto exchange for entering or exiting layer twos in
Ethereum. That's Kraken. If you do not have an account with Kraken,
consider clicking the link in the show notes below to get started with Kraken today.
Kraken knows crypto.
Kraken's been in the crypto game for over a decade.
And as one of the largest and most trusted exchanges in the industry,
Kraken is on the journey with all of us to see what crypto can be.
Human history is a story of progress.
It's part of us hardwired.
We're designed to seek change everywhere, to improve, to strive.
And if anything can be improved, why not finance?
Crypto is a financial system designed with the modern world in mind.
Instant, permissionless, and 24/7. It's not perfect, and nothing ever will be perfect. But crypto is a world-changing
technology at a time when the world needs it the most. That's the Kraken mission: to accelerate the
global adoption of cryptocurrency so that you and the rest of the world can achieve financial
freedom and inclusion. Head on over to kraken.com slash bankless to see what crypto can be.
Not investment advice. Crypto trading involves risk of loss. Cryptocurrency services are provided to
US and US territory customers by Payward Ventures Inc. (PVI), doing business as Kraken. Are you launching
a token? Is it already live? How are you managing the legal and tax work of providing
token awards for your team? Toku simplifies everything about managing token grant compensation,
and you can get started with them for free. You'll have access to top-notch legal and tax
support to handle the distribution and management of tokens for your team. Toku caters to every step
in the process, from user-friendly legal templates for granting tokens to tracking vesting periods
and calculating withholding taxes. Toku understands every grant structure: token purchase agreements,
restricted token awards, restricted token units, token options, and all the other ones.
Toku is already simplifying this today for leading companies like Protocol Labs, the dYdX Foundation,
Mina Foundation, and many more. You can learn more about how Toku can help you streamline your
token management and get started for free. Visit Toku at Toku.com slash bankless or click the link
in the description below. Arbitrum is the leading Ethereum scaling solution that is home to
hundreds of decentralized applications. Arbitrum's technology allows you to interact with
Ethereum at scale with low fees and faster transactions. Arbitrum has the leading DeFi ecosystem,
strong infrastructure options, flourishing NFTs, and is quickly becoming the Web3 gaming hub.
Explore the ecosystem at portal.arbitrum.io.
Are you looking to permissionlessly launch your own Arbitrum Orbit chain?
Arbitrum Orbit allows anyone to utilize Arbitrum's secure scaling technology to build your own Orbit chain,
giving you access to interoperable, customizable permissions with dedicated throughput.
Whether you are a developer, an enterprise, or a user, Arbitrum Orbit lets you take your project to new heights.
All of these technologies leverage the security and decentralization of Ethereum.
Experience Web3 development the way it was always meant to be: secure, fast, cheap, and friction-free.
Visit arbitrum.io and get your journey started in one of the largest Ethereum communities.
Bankless Nation, I am excited to introduce you to Georgios Konstantopoulos,
the CTO of Paradigm. Georgios is an enjoyer of Rust and has helped build Reth and
OP Reth, Rust-based execution engines for Ethereum and the OP Stack. He's been on Bankless
before, talking about MEV, when we first discovered its implications all the way back in 2020.
Georgios, welcome back to Bankless.
Hi, David, and thank you for having us.
Andrew Huang is the CEO and founder of Conduit, which is a roll-up as a service provider.
Many of the Layer 2s which you have likely used were spun up and supported by Conduit,
including Zora, Aevo, Public Goods Network, and many others.
RaaS providers, as they're called, as the infrastructure supporting roll-ups both present and future,
have a unique vantage point for seeing the evolution of Layer 2s, which is why we're bringing
Andrew on the show here today.
Andrew, welcome to Bankless.
Thanks.
Thanks for having me.
excited to chat today. I think the entire Ethereum ecosystem and the crypto ecosystem would really
just like an audit of the state of layer twos, the state of roll-ups. There are many, many
roll-ups going in many different directions with different design strategies, different goals,
and there's still a lot of unknowns for what the future holds, I think, for the scaling of
Ethereum. While roll-ups did give us a lot of clarity for how Ethereum will scale, they have also
given us a lot of questions, mainly in the world of fragmentation and
security. So Georgios, maybe we could just start with that. Can you just kind of give us
the vibe of the audit of layer twos? How are we doing? What's going right? What do we still
need to work on? Just overall, give us your sentiment check. Yes, of course. So starting from the
basics: A, it exists and it's real, which I think is on its own a remarkable achievement
after many years of hard research and hard work. And I have pulled up here L2Beat, which is a great
website that many people reference these days. And you can notice that the current TVL in Layer
2s amounts to almost $20 billion, a number that has almost 10x'd this year. Risk is in a
lot better of a spot. We're suddenly starting to have roll-ups deploy more fault proofs in 2024.
We're seeing the Security Councils going live. We're gradually moving towards a so-called stage
two decentralization in roll-ups. If you keep looking in L2Beat, you
will notice that there are maybe 20 or 30 roll-ups live of various types, optimistic, ZK,
some are not even roll-ups. Granted, some might have off-chain data availability,
but overall things are growing, technology is advancing, things are being deployed, which I personally
find really exciting. That is on the core tech side. Now, there's a whole ecosystem being
developed around roll-up services.
There's companies like Espresso that are building shared sequencers for people that
want to outsource their sequencing needs.
There's companies like Conduit, which we'll talk about in a second, that are building the
infra for people to deploy more and more roll-ups.
There's other companies building alternative data availability layers.
And, you know, there's the evolving roll-up stack and the layer two stack that we envisioned
many years ago that is finally starting to play out in production.
Now, are we done? No, we're not. We have a lot more to do. The ecosystem still depends on permissioned fault proofs. It still depends
on permissioned sequencers. In general, the stacks are nascent. We still have a lot to do,
but that is for the coming years. But overall, super excited, and I think today is a great day
to be having this episode because we're almost at a pivotal point in the scaling story for
Ethereum and its ecosystem.
What are the big problems that you identify, Georgios, as things that we still need to work on
in the layer two space? Obviously, security is one that you mentioned and brought up, and it will be an
ongoing thing that we will need to work on for a while. There's been a recent focus on how, while layer
twos are scaling, they are also fragmenting, which is one of the big problems. What would you say
are the big obstacles, the big research obstacles or maybe engineering obstacles that the
Ethereum layer two space needs to focus on more? Of course. I think we're actually way past the research
phase. This year has shown that we're entering this deep productionization phase where all the research
is mostly done. And right now we're just seeing production deployments of things that we've
known for a while. So right now, the things that I think are very important are, as you said,
security. So right now you can check the state of the security of the ecosystem just by counting
the multisigs that govern all the roll-ups. As long as we have all of these multisigs, it's going to be
hard for us to consider the ecosystem like really mature and secure. Now, what are we doing for that?
We're moving towards multiple faultproof implementations for each roll-up. We're moving towards
delays on any kind of power that any security council must have. We're limiting the power
of every security council to, let's say, only when there's a layer one hard fork. We have a lot
of work to do on decentralization. Right now, most, if not all, roll-ups are
sequenced by one sequencer, usually one run by the labs entity or the foundation entity
of the project that built them, which is okay given that there is the
layer one fallback option. However, it also means that the system can go down as we saw many
times this year with various projects. So overall, we also have work to do there. So one, security,
two, decentralization. Now, three, there is one not-as-spoken-about topic,
and that is the tooling, which is all that we think about at Paradigm.
And for tooling, the problem is going to start manifesting when people switch to different
opcodes, when people switch to different precompiles.
There's a lot, a lot of areas where things can start to change.
And right now, the tooling is not ready to support this evolution in the layer two ecosystem.
For example, if somebody builds a new opcode for their layer two because they want to experiment,
now they need to go into the Solidity compiler, edit the compiler, and figure out a way to expose it to the user.
And people just don't have the expertise to do that.
And some mistakes will happen if we don't prepare for that.
So we really, really, really need to make the tooling ready and robust and modular and extensible,
so that it's ready for this Cambrian explosion of layer two innovation over 2024, as the barrier to entry goes down and as the stacks mature.
And that's part of the tooling that we're also trying to build with our teams, in an effort to make it extensible and ready for layer twos.
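To make the tooling concern concrete, here is a minimal, purely hypothetical sketch in Rust (the episode's house language, though this is not Reth's or the Solidity compiler's actual code) of why a single new opcode ripples outward: the interpreter change is one line, but every compiler, debugger, and tracer that pattern-matches on the instruction set needs the same change.

```rust
// Hypothetical toy interpreter, NOT any real EVM client's internals:
// it only illustrates where a custom opcode slots in.
#[derive(Debug)]
enum Op {
    Push(u64),
    Add,
    // A custom opcode an L2 team might add, e.g. cheap native randomness.
    // Tooling (compiler, debugger, tracer) must all learn this variant.
    CustomRand,
}

fn run(program: &[Op]) -> Option<u64> {
    let mut stack: Vec<u64> = Vec::new();
    for op in program {
        match op {
            Op::Push(v) => stack.push(*v),
            Op::Add => {
                let (a, b) = (stack.pop()?, stack.pop()?);
                stack.push(a.wrapping_add(b));
            }
            // The new opcode is a one-line change here, but every downstream
            // tool that matches on opcodes needs the same change.
            Op::CustomRand => stack.push(4), // placeholder "randomness"
        }
    }
    stack.pop()
}

fn main() {
    let program = [Op::Push(1), Op::CustomRand, Op::Add];
    println!("result = {:?}", run(&program)); // Some(5)
}
```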
Andrew, I want to turn to you and get kind of a similar perspective from you and what you see over at Conduit.
But before we dive into more or less the same set of questions with your perspective, maybe you could also illuminate what your perspective is.
What does the perspective at Conduit give you?
You are a role up as a service provider.
You talk to a lot of layer two teams doing different things, similar things.
And so maybe first illuminate for us the vantage point that you have as your role at Conduit.
And then we'll kind of go into a same set of questions that I just asked Georgios.
Yeah, definitely.
So as you said, we work closely with a lot of the layer two teams.
We work with folks that want to launch chains.
We work with integrations that need to integrate on those chains.
And so I think it's a very kind of like pivotal point in the ecosystem that affords us the opportunity to really like see a lot of different things.
And importantly, see what's kind of on the bleeding edge here and how we can help enable that.
I think the TLDR here for us is: a year ago, I think nobody was really thinking about many different roll-ups.
And I think it was viewed as very risky.
I think today it's in some ways kind of like becoming the default.
And we're very excited to kind of help with that transition.
And I think for a variety of reasons, both economic as well as frankly just at a technical level, enabling kind of new applications to be built.
We're really seeing kind of the rise of the modular blockchain.
And I think in some ways it's kind of the Cosmos thesis, but playing out
on Ethereum, where people do want sovereign chains, but you kind of have those interoperability
standards that allow folks to kind of transact across chains and across roll-ups in a way that
kind of makes sense. And so very excited to kind of be at the center of that and kind of playing a
role. And, you know, all the complexity Georgios was kind of mentioning around fault proofs
and removing multisigs and upgrades, it's just difficult for foundations or whoever's running
them to do themselves. Like, it takes a lot of time. And not to mention, if a random kind of dev team
wants their own kind of roll-up, you know, it would in some ways be impossible.
And you see folks kind of mess it up all the time and their bridge gets hacked or
something happens.
And I think one of the benefits of a RaaS provider like Conduit is, you get all of that
same great tech and all of those migrations and upgrades kind of seamlessly.
And kind of like AWS, instead of focusing on building the best data center and like
all of this kind of undifferentiated heavy lifting, you get to focus on kind of the important
thing, which is like building a great application for your users.
And so we're very excited to help facilitate that transition.
And so what are the strengths of the layer two ecosystem right now?
What's going well for teams as a whole that you work with?
And then also what are some of the pain points?
What are the hurdles?
What are the difficulties that teams are experiencing?
Yeah.
So one, I think the first question when it comes to running a roll-up is:
how do you actually stand this up in production in a kind of secure, reliable,
performant way?
And I think a lot of roll-up frameworks make it easy to spin up a test net,
even like a local version, but there's like a huge gap between that and like something that is
ready to custody and kind of hold user funds. And that's really where Conduit comes into play,
again, making that seamless. So you get that at the click of a button: all of the work we put
into reliability, security, performance, et cetera, you just get for free. And again, it makes
sense for us to invest in this because we run hundreds of these across mainnet and testnet. And so
there are small percentage points that might not matter for you but really matter for us,
and that means you're getting the best offering.
I think typically after that, the next thing we see is frankly just, for lack of a better word, PMF.
I think early on in the narrative, it was, you know, we just launch a chain and it's going to work,
and it's a new thing and therefore we'll have users. I think very quickly it's become clear that, you know,
roll-ups will need to differentiate in what they offer. And I find that exciting,
because I think instead of clones of your favorite DeFi app, or, like, you know,
the 10th or 11th clone of Uniswap, we'll actually get something new and differentiated.
And I think, like, one example that, you know, kind of was a dark horse for me,
but it's been really exciting to see play out, is something like Zora Network,
where, you know, they're very focused on kind of the collecting side of things,
and that network has grown in a really interesting way, where suddenly they have all of this data
around, like, mints and art that, you know, only exists on the Zora Network.
And I think they'll be able to create really compelling applications and new behaviors on top of that.
So you're saying that you think there will be a trend away from the highly general layer twos,
and that layer twos as an ecosystem, as a category, are all
going to shift the Overton window towards more specialized, niche layer twos that are optimized for
more narrow use cases. Is that what you think is going to happen? I'd say we see a bit of both.
I think it ultimately depends on the brand. I think something like Base, for example, right? Huge
brand, a lot of access to, you know, retail users. That just makes sense as kind of a generic chain
that has everything. But I think for kind of your
average dev team that, you know, like a startup, right? They need to build something new and
differentiated and aren't going to have the same distribution or brand benefits that some of these
larger organizations have. And I think the only way to differentiate yourself is really on the
go-to-market and what is uniquely happening on your chain in terms of the state space.
And that's kind of my recommendation to teams: you know, actually build something novel
versus just trying to be kind of your 100th degen DeFi chain. So I want to ask the very big question
that I think a lot of people are asking in the layer two space, which is few roll-ups or many roll-ups.
And there are arguments for both sides here. The few-roll-ups arguments are: the fewer roll-ups
you have, the more net composability there is. So, you know, there are fewer different chains
to be fragmented around. So, like, you know, more liquidity aggregation,
fewer networks in the drop-down on MetaMask. And so more composability is just good UX.
And also, roll-ups individually have costs. And so if we aggregate everything together,
you can consolidate the costs in order to save money. And so these are some of the arguments for why there will be a few
roll-ups, but then there's also arguments on the other side, which are that technology costs always get cheaper.
And that's something that Conduit is doing. It's making it cheaper for layer twos to exist. And that will only
improve over time. There's also roll-ups' desire for sovereignty. We know that this is a powerful force.
Games, for example, will probably want their own chains. So what do you guys think? Well, which is it? Is it few
roll-ups? Is it many roll-ups? Georgios, I'll start with you, and Andrew, you'll follow on.
Yeah, of course. So there are two axes on the demand side. One is cost and the other is customizability,
or being free; you called it sovereignty earlier, being able to do whatever you want.
On cost, how I would think about it is that roll-ups are an elastic scaling solution.
You add more compute as more load arrives.
So if we end up having so much compute demand, then probably there will be many roll-ups, because it is unlikely to expect, like we saw from all the past L1 lessons, that one chain will be able to accommodate the world's compute.
Or that's where I come from, at least.
Some might disagree, and that's perfectly fine.
Now, on the side of customizability, of course, that comes at odds sometimes with the perfect
horizontal scaling thesis, which means that, you know, sometimes when the demand might be enough
for five chains, maybe there's 10 chains because people want some extra customization that you
cannot get in the other place. For example, a big brand like Coinbase or, I don't know, say
Starbucks or someone. For some reason, maybe they would want to be in a separate area. Why? Maybe
there are a lot of customizations that they want to make, or maybe they want to own the brand,
or they want to do specialized airdrops and whatnot. Hard to tell.
So I think it is hard to bet on few, as in, you know, one or five or ten.
And I would probably expect a power law distribution, to nobody's surprise, on where the demand gets kind of concentrated, let's say in the top whatever.
But I expect a very, very long tail of chains with also varying duration.
Because imagine a game could be played over a day on a, you know, people have called these flash chains, one-day roll-ups, pop-up roll-ups, whatever you want to call them.
You know, maybe you play a game for a day, an on-chain game.
That game maybe has stupidly high state growth or whatever,
so it would never make sense to actually put it on a real network.
And then you just checkpoint the result into a layer two or a layer one or something else.
So the whole co-processor thesis that many people have been putting out,
it might also apply to roll-ups,
and that would enable thousands and thousands and thousands of roll-ups,
but also with a very small duration.
So, you know, power law, a lot of activity, but the duration of each chain might change.
Demand for customizability also might affect that.
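As a rough illustration of that checkpointing idea, here is a sketch under stated assumptions: the game state is made up, and std's hasher stands in for the keccak256/Merkle-root commitment a real roll-up would actually post to its parent chain.

```rust
// A day's worth of game activity lives and dies on a pop-up chain;
// only a small digest of the final state outlives it.
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

#[derive(Hash)]
struct GameState {
    round: u64,
    scores: Vec<(String, u64)>, // (player, score)
}

fn checkpoint(state: &GameState) -> u64 {
    // Stand-in for keccak256 over a Merkle root in a real system.
    let mut h = DefaultHasher::new();
    state.hash(&mut h);
    h.finish() // this digest is all that gets posted to the parent chain
}

fn main() {
    // Millions of in-game updates can happen off-chain during the day...
    let final_state = GameState {
        round: 86_400,
        scores: vec![("alice".into(), 990), ("bob".into(), 870)],
    };
    // ...but only one tiny commitment is checkpointed when the chain retires.
    println!("checkpoint commitment: {:#x}", checkpoint(&final_state));
}
```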
The reason why I like having both Georgios and Andrew here is we have Georgios, the researcher,
and Andrew, the market founder.
And so one of the perspectives I enjoy here is that Conduit is tapped into what the demands of the market are.
What can you tell us, Andrew, about the few roll-ups versus many roll-ups conversation,
and in terms of what the market wants and needs, from your clients over at Conduit?
For sure. Well, I guess one point, zooming out and kind of tying back to what Georgios said earlier:
I think like if you believe there's only going to be a limited amount of demand for crypto compute,
then I guess like the kind of few roll-up kind of world makes sense.
If you're really bullish on crypto and, you know, new applications taking off and more and more demand,
I think by necessity there are going to need to be many, many of these different kinds of crypto-compute
environments. So just as an argument for industry growth
as a whole, I think it's almost somewhat bearish to believe that there's only going to be a
couple of roll-ups to serve all that demand. And so, you know, here at Conduit, we're very
bullish on crypto and kind of believe in this roll-up-centric world with kind of thousands or hundreds
of thousands of these roll-ups. And so just kind of from that argument alone, I think we're very excited
about it. In terms of like what we're seeing from a market level, I think ultimately, one,
customization is a good point, but I think even more than that, I think there are large economic
reasons to launch your own roll-up. And, you know, I think the biggest factor that we see today
is that when you launch and deploy on another chain, you're essentially paying rent to that
chain. And by deploying your own chain, you get to internalize those fees. And not to mention,
you have more control of your own ecosystem. It's narratively kind of a great thing to do.
And you get to customize. You get to maybe build your own ecosystem on top of that, right? So you
have your own L2 with many different L3s on top, other types of applications. And so I think,
just from a pure economic argument and kind of a pure
sovereignty argument, there's just this incredible kind of demand to launch your own block space, in the
same way that if you look at Web 2 today, right, it's not one big global computer or like a couple
big global databases. Like every company has like their own application. And like if you look at
Facebook, right, like they have a ton of apps and whatever they've built on top. It's this like gigantic
system. Like that could be one big roll up or it might be like multiple rollups. But then you have
this long tail of other companies that also have their own, yeah, roll-ups. And I think,
if I'm thinking through what the future kind of holds for crypto, that model seems a little more clear to me, where I think it's unclear that every application needs maximum composability at all times and therefore needs to pay all this rent and all these other things to a single chain.
It seems clear to me that it's kind of like asynchronous message passing. Or if you look at, you know, APIs today, right?
It's kind of this asynchronous webhook kind of thing, or you just have an integrations API that gets called less frequently than normal,
but then you co-locate the logic that's really kind of important.
And I think the scaling argument, this demand for customization, this demand for
economics, is really driving what we're seeing in terms of this Cambrian explosion of chains.
Yeah, if we're going to see thousands and thousands and millions of chains,
we need infrastructure that we can like copy and paste, right?
We need highly replicatable infrastructure in order to make that happen.
And this is where a lot of the battles are being fought in the layer two space, from all
the superchain standards: the Optimism Superchain, Arbitrum Orbits, Polygon Supernets,
zkSync Hyperchains. The way I think about these things is that they're all economic zones,
because the block space is very alike inside of a network, right? And so, like, the OP
Mainnet is very alike to the Base mainnet, right? And so these have a relationship with each
other that's closer than, for example, how Base is to Arbitrum. And so this is how I think about
these things: economic zones. And they can engage in trade
with other economic zones, right? Like Arbitrum can trade with Optimism via a bridge, like Across.
But trade is going to be easier inside of the Optimism Superchain. Trade will be easier inside of
an Arbitrum Orbit, and it'll be a little bit more costly to go between these things.
This is my perspective, for how I understand it. Georgios, how do you think about the whole evolution
of superchains? Yes, maybe to give you a bit of a cynical take to start: anything that we say in
this conversation will probably be more speculative and more an expectation of what is to
happen, given that none of these systems are live yet in production.
There exists one system called Astria that was deployed a few weeks ago, but it's still in
testnet, and it even had some issues when they deployed their shared sequencer.
So it might be worth zooming out and thinking: what problem are we trying to solve in the
first place?
And the problem, you mentioned it earlier, David, is how do we make these different cities
talk to each other in a cheap way without introducing
too many additional layers of trust.
When is this useful?
One would think first and foremost of DeFi, you know, or of transfers.
I am on chain A, you're on chain B, and we want to talk to each other without having to think
even about where am I sending money to, right?
And as you said earlier, we don't want to be in a world where I go to Metamask and I pick
from a drop-down of 55 or 100 or whatever RPCs.
That's terrible user experience.
And honestly, we would have failed miserably if we end up in that world.
So the superchains or the shared sequencers or whatever you want to call them, they come in as a set of solutions.
They try to address that by allowing you to interact with one endpoint as a user, one place.
And the sequencer smartly will route to the right area, whatever transaction needs to happen.
And all of these superchains roughly have that same vision, that they want to abstract away
the communication inside of their own ecosystems.
Now, there are solutions that achieve that for heterogeneous systems, like Espresso,
and they introduce the required modifications to each of these stacks to make them compatible
with each other.
For example, to make the Arbitrum Orbit stack shared-sequenceable with the Optimism
Superchain stack, Espresso needs to make the same modifications to both systems to make them
compatible.
And to what extent that will be feasible or not is TBD and is an exciting area overall.
One point that is worth unpacking, though, is that, to the best of our knowledge, none of the systems offer, let's say, the holy grail of synchronous calls across all of these systems.
There's no world where, you know, you can say A calls B, which calls back to A and does a lot of like things together, unless you're in the design where you're basically one chain.
And that is where the Optimism Superchain design is going.
For the most part, what you get from all of these shared sequencing designs
is atomic top-of-block inclusion, which is useful for MEV,
which we had talked about two years ago, David.
So the idea there is that the shared sequencer is able to guarantee that five transactions
will always be at the top of the block.
And these five transactions, which will be on top of block A and on top of block B,
will be extracting some kind of arbitrage opportunity that existed.
And that makes money and that's a valuable service to be offering.
And that could be a valuable like infrastructure protocol to be running.
However, to go far beyond that, you know, in terms of conditional execution,
let's say I send the transaction on A and that means that the transaction also gets delivered on B
and stuff like that.
I don't think we have any design yet that is soundly implementing that.
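As a toy sketch of the atomic top-of-block guarantee described above, assuming made-up names and a five-slot capacity rule taken from the example rather than from any live shared-sequencer protocol; note it guarantees inclusion on both chains, not successful execution, which is exactly the gap being described:

```rust
// Either both legs of a cross-chain bundle go in at the top of their
// respective block templates, or neither does.
#[derive(Debug)]
struct Tx(String);

#[derive(Debug, Default)]
struct BlockTemplate {
    txs: Vec<Tx>,
}

struct SharedSequencer {
    chain_a: BlockTemplate,
    chain_b: BlockTemplate,
}

impl SharedSequencer {
    fn include_atomic_bundle(&mut self, leg_a: Tx, leg_b: Tx) -> bool {
        const MAX_TOP_SLOTS: usize = 5; // "five transactions at the top"
        if self.chain_a.txs.len() >= MAX_TOP_SLOTS
            || self.chain_b.txs.len() >= MAX_TOP_SLOTS
        {
            return false; // atomicity preserved: nothing was inserted
        }
        self.chain_a.txs.insert(0, leg_a);
        self.chain_b.txs.insert(0, leg_b);
        true
    }
}

fn main() {
    let mut seq = SharedSequencer {
        chain_a: BlockTemplate::default(),
        chain_b: BlockTemplate::default(),
    };
    // e.g. the two legs of a cross-chain arbitrage
    let ok = seq.include_atomic_bundle(Tx("buy on A".into()), Tx("sell on B".into()));
    println!("included atomically: {ok}; A top: {:?}", seq.chain_a.txs[0]);
}
```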
So that part is still in the research phase?
I would think so. The way that people are trying to do it, without doing further modifications to their systems, I think is not there yet.
The most promising design that I have seen is one in a blog post by James Prestwich, which describes a certain way where each block commits to every other block in the superchain.
Again, research phase.
So I take back what I said earlier that we're done with the research phase.
There's that component that is not figured out.
But it's also worth understanding that it might not be worth figuring out fully.
You know, maybe the holy grail feature set is something nobody needs.
And maybe you can get away with something you can ship in a year or in six months
that solves the real problem, which is not actually the composability, but the decentralization.
The problem is that every one of these systems is run by one party.
Whereas on composability, there's not that much of a problem: as you said earlier,
we have Across, we have, you know, 10 bridge protocols to do all the transfers.
Let the market figure that out.
But the part about the decentralization is more critical, especially as more and more infrastructure gets launched
and we put our trust in a few intermediaries.
Georgios, could you go into shared sequencing a little bit more?
Just what composability benefits does shared sequencing give chains?
And does that conversation change if we are talking about all these chains inside of a single setup,
like all the OP Stack chains that are going to be a part of the Superchain? These can shared sequence with each other to some degree and produce some composability
benefits. And then there's also the potential, like you said, of Optimism and Arbitrum
shared sequencing and also getting those kinds of composability benefits. Overall, what does
shared sequencing get us? Right. So it solves composability for you, and to some degree
decentralization. Composability, though, it solves to a small degree as described today,
and it's TBD whether it can solve it to a larger degree. And surely,
shared sequencing in the same ecosystem, in the same flavor is going to be cheaper, easier,
more compatible, whereas trying to make two different ecosystems talk with each other will be harder
and might not even be, you know, desired to some extent. Yeah, because that's always what I've
thought is going to be difficult because sequencing is the golden goose for layer twos. Like,
that is where they get a lot of their fees. So why would Arbitrum and Optimism want to give up
their sequencing fees to Espresso in the name of a decent
but marginal amount of composability benefits?
Right.
I think the jury is still out, and I think depending on who you ask, you will get a different response.
From my point of view, the Optimism ecosystem, for example, wants to build a moat around a different set of,
let's say, infrastructure components or values, for example, the governance and the entire process around the Law of Chains.
It's almost acknowledging that the biggest part of the technology is open source and is to be given away.
It is not really a moat.
Or if it is a moat, it's a very weak one that's going to go away over time,
whereas the real moat is elsewhere.
So that would be one take.
And also remember that to opt into the Superchain ecosystem,
and Andrew will tell you that very well,
you pay a rent back to the Superchain, the Optimism Collective.
And that is where it comes in.
Effectively, Optimism is offering a service: the shared upgrades,
the shared governance, and the shared sequencing, which is going to be in part mandated by Optimism smart contracts.
And it's happy to take a cut for it.
And all the rest, let the ecosystem figure out because it's hard to pick a solution yourself.
Maybe you don't even want to build it yourself.
And Andrew can cover more on what this looks like from the business perspective of how roll-ups actually do this, you know, because we've done this with multiple customers so far.
Yeah.
Yeah, I definitely want to get there because I think that that conversation of synchrony and chain composability
and chain governance, I think is one of the most interesting ones.
And I know that Andrew at Conduit, you're right at the heart of that conversation.
But Georgios, one last question before I ask that question to Andrew.
Is universal composability a dream?
Like every single layer two across Ethereum, different constructions, different setups,
even if it's like 10 years in the future.
Is that a pipe dream or how far can we get there?
I think it is hard to bet against the mad Ethereum scientists,
just to say it as a prerequisite.
So, you know, David, it's hard to bet on a 10-year time horizon.
My view is that we don't have any fundamental, let's say, problems to be solved,
but there's also a degree of like a prioritization.
And I think that we also need to think about the demand side.
When the demand side requires that we solve universal composability,
trust me, we will find a way to solve it.
But I think sometimes we should acknowledge that in the Ethereum ecosystem,
we don't always take the most, let's say, boots on the ground demand side,
and we always think what's the most perfect protocol
I can design for the next 10 years.
So I think trying to answer that question,
we could entertain it,
but I think it's more worth focusing on the important topics
on the ground.
Exactly, exactly, because we're in this very pivotal phase in crypto,
and we really need to stay focused,
stay close to the customer and on the user experience.
We solve fees, then we solve wallets.
We solve wallets, we solve products,
and then, you know, off to the races.
And yeah, the horizon always continues.
Andrew, I want to talk about the spawning of these super chains and orbits and hyperchains and supernets.
Because from my perspective, Conduit is the place where these things spawn from, since Conduit is the place where it's the cheapest to make more chains.
So I always kind of see the Superchain spawning out of Conduit.
And there are competitors which the Superchain can also spawn from.
But a lot of OP Stack chains come out of Conduit.
So what is your perspective on the evolution of the system?
What about alike layer twos? What benefits do they have from being both optimized and built by Conduit,
call them neighbor chains, right? Just overall, what's your perspective on the growth of these ecosystems?
Yeah, I think the growth is really exciting. I think narratively, Optimism probably
fired the first shot here around the Superchain and kind of homogeneous block space,
tighter interoperability. And I think other ecosystems quickly followed in
their own ways, and credit to them. Maybe zooming out, I guess the question would be how
much does that interoperability actually matter to users? And like based off of the current
applications that we're seeing, it's not clear to me that there's a huge benefit outside of the
current bridging protocols that already exist. And so just to rattle off a couple customers,
somebody like Aevo, right, it's kind of this decentralized exchange and, you know,
not a lot of interoperability opportunities outside of bridging from other roll-ups. You can take
a look at Public Goods Network, right, kind of running some Gitcoin grants rounds. But again,
not a ton of opportunity for interop
beyond bridging to the chain.
Zora Network, again, you can make the same case.
And I think that will just, frankly, be kind of how these new roll-ups actually launch.
They need to do something differentiated and new.
And by definition, that isn't going to actually need that level of interoperation that, you know, we desperately think everybody needs.
And I think, in some sense, there is this, I'm not going to call it a stigma, but we're so early
in terms of how, one, crypto compute has developed, and two, how applications have developed, that it's so hard to make the case that
crypto compute is developed to how applications have developed that it's so hard to make the case that
this is the end state of maximum global composability and that's the most important thing and
maybe drawing an illusion to like web two you know back before we had like networked computers
you know unix pipes were probably like unix pipes are like kind of the equivalent of composability
and like people use them all the time and then uh now you just use it for like a bash script on
your like the local computer right and then like everything happens in the cloud everything
happens over the network. And so it's just kind of this new dominant model that became possible
because the compute landscape really changed and the capabilities of that allowed for new things.
And so I think there's somewhat of this residual overhang of maximum composability, where,
even if you look at the numbers today, those aren't necessarily the largest use cases,
particularly in emerging roll-ups. Interesting. Georgios previously gave us axes for
roll-up designs and constructions. There was customizability and then cost. I want to present
another spectrum, Andrew, along this whole superchain conversation. On one side of the
spectrum, every single chain is a part of a superchain, whether it's the Optimism Superchain or an
Arbitrum Orbit or a Polygon Supernet; on the other side, every single chain is a completely independent chain.
And it's not part of these collectives. The reason why I always kind of thought this superchain
conversation is cool is because these are digital collectives of block space. And there's
the Optimism Collective, which manages the Superchain. And where we end up on this spectrum,
where every single chain is its own independent ecosystem versus every single chain has determined
that it's beneficial to be a part of a collective, only the future can really tell us.
The bull case for Optimism, the Optimism Collective, is that the value of being a part of the
collective, the value of being part of those shared upgrades and that homogeneous block space,
is worth the fees that the collective charges, the union fees, call it.
That's a worthwhile tradeoff.
Or maybe it's not, and people are more inclined to stay an
independent layer two inside of their own ecosystem and sacrifice some of those composability benefits.
Do you have a perspective as to where we end up on this sliding scale of unionized layer
twos or independent layer twos?
Yeah.
I mean, maybe a reframing of the superchain question, taking out the interoperability
aspect, is: how much is it worth to have homogeneous block space, right?
Block space that you know you can interact with in the same way,
that has all the same security guarantees.
And it seems like, you know, in a world of many, many roll-ups, that matters a lot.
There's, you know, a reason why in Web 2 you talk about API compatibility, right? And like different clouds, right? They copy each other's APIs so that you can just migrate in, or, you know, you make a new database product and it's Postgres compatible. And so there is a good reason as to why to do it in a compatible way and to kind of have these shared upgrades, which maybe isn't necessarily so tied to the interoperability aspect, even though that might be a nice kind of future add-on. I don't know that that has to be only part of the superchain. For
example, like you can follow the same upgrades and do that without being a part of the Superchain,
or even on the Arbitrum Orbit side, right? You could stay close to the spec, stay close to what
the governance-approved versions are, and kind of upgrade in concert with that. I think ultimately
it's an area that infra providers like Conduit will play in, and a service that we offer, right,
is that you can get the gold stamp that your chain is going to be compatible and equivalent
to all of these other major networks that are very popular and everybody uses. And so,
to answer your question, I think compatibility is a key kind of thing, particularly in this early phase, as we're getting this explosion of chains, as we're kind of getting these integration headaches where it's like, oh, a new chain spins up every day, how am I going to integrate? And just knowing that your stuff is going to work properly is a huge benefit. And that also ultimately ties into the types of customizations that we're interested in at Conduit, where, you know, one thing, we can't talk about it publicly, but we have some exciting kind of custom chains in the works. And ultimately,
one of the services that we provide is like, listen, like we're happy to do customizations,
but we also want to make sure that, one, you're forwards compatible with future upgrades,
and two, that you're not doing it in a way that makes you so unique that people can't use you.
And so we're very excited about both kind of the max compatibility,
as well as this broad spectrum of customization that ultimately will enable like new types of applications.
Mantle, formerly known as BitDAO, is the first DAO-led Web3 ecosystem,
all built on top of Mantle's first core product,
Mantle Network, a brand new high-performance Ethereum Layer 2 built using the OP Stack,
but using EigenLayer's data availability solution instead of the expensive Ethereum Layer 1.
Not only does this reduce Mantle Network's gas fees by 80%, but it also reduces gas fee volatility,
providing a more stable foundation for Mantle's applications.
The Mantle treasury is one of the biggest DAO-owned treasuries, which is seeding an ecosystem
of projects from all around the Web3 space for Mantle.
Mantle already has sub-communities from around Web3 onboarded, like Game7 for Web3 gaming,
and Bybit for TVL and liquidity and on-ramps.
So if you want to build on the Mantle Network,
Mantle is offering a grants program that provides milestone-based funding
to promising projects that help expand, secure, and decentralize Mantle.
If you want to get started working with the first DAO-led layer-2 ecosystem,
check out Mantle at mantle.
And follow them on Twitter at 0xMantle.
Celo is the mobile-first, EVM-compatible, carbon-negative blockchain built for the real world.
And now something big is happening.
Introducing the Celo Layer 2.
It's a game-changing proposal that's going to bring Celo's rapidly growing ecosystem home to Ethereum.
Vitalik has shared his excitement for the Celo Layer 2 on the Celo Forum, and so has Ben Jones from Optimism.
But why?
The Celo Layer 2 will bring huge advantages, like a decentralized sequencer, off-chain
data availability, and one-block finality.
What does all that mean?
Rock-solid security, a trustless bridge to Ethereum, and more real-world use cases for Ethereum without compromise.
And real-world adoption is happening.
Active addresses on Celo have grown over 500% in the last six months.
With the Celo Layer 2, gas fees will stay low, and you can even pay for gas using ERC20 tokens.
But Celo is a community-governed protocol.
This means that Celo needs you to weigh in and make your voice heard.
Join the conversation in the Celo Forum.
Follow @CeloOrg on Twitter and visit celo.org to shape the future of Ethereum.
Okay, so compatibility, I really want to double-click on that.
So you're saying that there are some benefits of having homogeneous block space and true composability
in the blockchain transaction sense, the focus that
much of crypto Twitter has been on lately. But I think what you're saying is that there are also
just ecosystem benefits. There are infrastructure benefits. There are shared standard benefits,
maybe developer benefits, outside of my wheelhouse here because this is all very technical. But there are,
like, you know, the benefits of the EVM, for example, or the Optimism ecosystem has
OP Reth, the Rust OP Stack client that I think Georgios and many others at Paradigm are working on. And so if you
want to be a part of that shared ecosystem, you have benefits of being inside of the OP
Stack or part of the Superchain. Is that part of your answer?
I think that, similar to how, you know, browsers and JavaScript have taken over web development and everything is based around this kind of core open source software stack,
there's a very good reason to stay EVM compatible and work with, you know, the Optimism Superchain, really anything that's compatible and has a big kind of open source community around it, to benefit from all those network effects.
And you can see firsthand with what Georgios is doing with Reth and OP Reth and all these other things that, you know, you just work with the EVM,
you work with Optimism or whatever roll-up framework,
and you just get these massive performance improvements for free.
And ultimately, I think as a dev and as somebody who is going to launch their chain,
you want to be able to ride that wave up versus fighting against the tide.
Georgios, could you just illuminate, much better than I could,
the technical benefits of staying inside of the ecosystem,
just all of the developer ecosystem tailwinds that one would get by joining
one of these broader ecosystems versus swimming alone, for example.
What are the benefits? Can you help us shed some light on that? Well, there's the benefits from the
infra operator side, the chain builder, and then there's the end user using a chain that's a part of
that. I would think that from the infra operator side, you basically get a managed service of
something that you would otherwise need to do yourself: security upgrades, governance,
patching, having on-call support, all of these things. I think these matter a lot more than
people think. From the user side, by being part of such a system, you would get the free composability
to your users. And you also get, of course, that, you know, one RPC. You don't need to talk to
every single thing separately. So in general, in one word, maybe what you get is that you get
homogeneity. I see you're writing in the doc, David, you're writing trust as a differentiator.
I think you get the trust anyway if each system is done properly. Honestly, you know,
right now we're in a bit of an embarrassing state where, no, you don't get that much trustlessness in the
end state, whether you are in a shared system or not. If that system is running off of a specific
version of a broader standard, I think it doesn't matter if it's part of a superchain or if it's not.
Ultimately, you know, just to illustrate the point with an example: if the Blast L2 is deployed
correctly with appropriate fault proofs and you're in it, you have the same, you know,
protocol security as any other L2 or L2 ecosystem, if it's deployed with the appropriate
parameters. Do you get the same composability? No, because maybe you're deployed in a separate
ecosystem if you're on that L2. If you're in the ecosystem, then you get the composability.
So on the trust component, I feel like you're good anyway. And even at a more pedestrian level,
I think compatibility is important for tooling. I know Georgios brought it up, but just to make
it explicit: things like block explorers, things like Tenderly, things like Dune.
All of this tooling, you know, needs to run against RPCs.
Their, you know, software works against certain versions.
And so, if you think that every single one of these tools has, like, a cracked
infra team and is going to be able to keep up with all the different customizations, I
think you have a very overrated sense of, like, how many engineers there are in the industry.
I think ultimately, again, you just want to be able to ride this wave where you have the same
stuff.
You can plug and play just by switching out an RPC endpoint, versus with something massively custom
you're fighting against the tide.
Georgios, I just want to double check on that trust element,
make sure that we're talking about the same things.
The way that I kind of understand this is that when there's going to be 10,000 different chains,
the overhead for users to understand the safety and security of the chain that they're on
or they're interested in using is going to be too much.
And I don't really think about this when I go to any websites.
That's fair.
But sometimes there's one of these websites that I go to and my browser is like,
hey, this website you might want to think twice about.
And I would have no idea how to identify that if the browser didn't tell me
this. And so this is like one of the benefits of having a shared standard of whatever the chain is
built on, like the OP Stack. And so you could imagine that little shield of security in your
browser when you go to an HTTPS website; you might have something similar. So it's like, you are
on a verified OP Stack chain with a verified client. That's kind of what I mean by trust.
That's a great point. That's a great point. I had not appreciated that earlier when I was
talking, yes, spot on. Every shield that we have on our browsers, when every browser will
bundle a wallet and whatnot, every shield that we have in the browser will for sure be
extra shielded, let's say, when it captures an OP Stack chain or an Orbit chain or whatever else.
I don't know that the user will know whether it is part of a specific flavor, but the user
will need a certain lock icon to indicate it.
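As a sketch of what that chain-level shield might look like, assume a wallet carrying a registry of known deployments; the chain IDs below are the real OP Mainnet and Base IDs, but the genesis hashes and the registry itself are fabricated placeholders, not any actual standard:

```rust
// A wallet checks the connected chain against a registry of known
// deployments before showing a "verified" badge, analogous to the
// HTTPS lock icon in a browser.
use std::collections::HashMap;

fn main() {
    // registry: chain_id -> (label, expected genesis hash)
    let mut registry: HashMap<u64, (&str, &str)> = HashMap::new();
    registry.insert(10, ("OP Mainnet (OP Stack)", "0xaaaa...")); // placeholder hash
    registry.insert(8453, ("Base (OP Stack)", "0xbbbb...")); // placeholder hash

    // What the wallet observes from the RPC it is connected to.
    let (connected_chain_id, observed_genesis) = (8453u64, "0xbbbb...");

    match registry.get(&connected_chain_id) {
        Some((label, expected)) if *expected == observed_genesis => {
            println!("shield: verified {label}"); // show the lock icon
        }
        _ => println!("shield: unknown chain, think twice"),
    }
}
```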
Beautiful, beautiful.
I want to pivot to the modular conversation.
Andrew, right before we started this podcast, I actually noticed that Conduit put out a tweet
about Lyra, which is one of the Conduit OP Stack chains, pivoting its data availability from
what I believe was the Ethereum Layer 1 to Celestia. Give us your perspective as to the evolution
of this conversation where we have Ethereum Layer 2s that are potentially consuming Celestia
for data availability. Where do you think this goes in 2024? Yeah, I think what this really
enables is bringing roll-up costs down significantly, to a point that really enables new apps,
right? And so, you know, roll-ups are kind of quite expensive, particularly
if you look at December, right, mainnet gas prices were spiking. There was a ton of activity.
And even if you have your own roll-up, you're still not fully insulated from all the activity
that actually happens there, right? Because as mainnet gas prices increased, like 5x, 10x, 100x,
whatever it is, suddenly like your app, which, you know, maybe had sub-cent fees, sub-10-cent fees,
you're paying like a dollar per transaction. And that meaningfully changes the business model and
the economic model for a lot of these roll-ups where, you know, frankly, a lot of them are either one,
the point of the chain is that fees are low to enable new types of behavior, like on Zora network,
like collecting and minting is very cheap.
Or frankly, like, a lot of these applications are paying gas fees for their users.
And, like, you know, they're just running up this huge bill.
And that's ultimately where alt-DA layers are really going to come into play:
separating that data cost from the activity that's happening on Ethereum mainnet.
While, you know, there is an additional kind of like security assumption around, like,
that data being available and being able to relay it to Ethereum and, like, eventual integration
with fraud proofs. But today, frankly, just the economic reasons of having 10x, 100x,
1000x cheaper fees for the bottom line of the roll-up are too insane to
ignore, and are really enabling any team to launch a roll-up today.
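To put rough numbers on that, here is back-of-the-envelope arithmetic; the batch size, gas price, ETH price, and alt-DA rate are all illustrative assumptions (calldata is priced at its 16 gas per non-zero byte):

```rust
// Compare posting one batch as L1 calldata vs. on a cheap alt-DA layer.
fn calldata_cost_usd(bytes: u64, gas_price_gwei: f64, eth_usd: f64) -> f64 {
    let gas = bytes as f64 * 16.0; // 16 gas per non-zero calldata byte
    gas * gas_price_gwei * 1e-9 * eth_usd
}

fn main() {
    let batch_bytes: u64 = 100_000; // assumed size of one compressed batch
    let l1 = calldata_cost_usd(batch_bytes, 50.0, 2_500.0);
    let alt_da = 0.01; // assumed flat cost on a cheap DA layer, in USD
    println!("L1 calldata: ${l1:.2} per batch"); // $200.00
    println!("alt-DA:      ${alt_da:.2} per batch"); // $0.01
    println!("ratio: {:.0}x cheaper", l1 / alt_da); // 20000x
}
```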
Interesting. One of the perspectives: app chain bears will say that app chains probably won't
have enough users or transaction volume in order to justify themselves, because roll-ups have
costs. Using Celestia, or just cheaper data availability providers for cheaper DA than Ethereum,
helps with this thesis where, well, you know, if we have cheaper data availability, which is the
main cost for roll-ups, then there are going to be more app-specific roll-ups that can
economically justify themselves. What's your perspective on this? Like, how far will we be
able to go down the long tail of app-specific roll-ups being able to economically justify themselves?
So there's like two variables here. There's just more users making more transactions. And so
we can justify more that way. And then there's also cost going down on the infrastructure and
data availability sides. Just give us a peek forward on your perspective on this. Yeah, I mean,
I think ultimately with alt-DA, as you said, DA is like 95 or more percent of the cost
of a roll-up, and alt-DA basically makes that free. I think it enables a lot more use cases and
enables startups to just experiment. And not all of these are going to stay around forever;
like startups, right, 90 percent or like some wild statistic, like 90 percent of them fail. I think
you may see something similar along the kind of like long tail of rollups, but the fact that, you know,
if you go to AWS, you spend up an EC2 instance, it costs you like 20 bucks a month. Like, ultimately,
if you can get the price down to a point where it's kind of risk-free to start and just kind of see
where it goes, I think that's ultimately the world that conduit wants to live in and what we want to
enable. And ultimately, I think we'll see a lot more innovation that way. And, you know, I'd say we're still
kind of not quite there yet. I think Alt-D.A. gets us a lot closer to that reality.
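A rough back-of-the-envelope of this claim, taking the "95 or more percent" figure from the conversation as given and assuming a hypothetical 100x alt-DA discount; the dollar figure is invented for illustration.

```python
# Back-of-the-envelope sketch of the "DA is ~95% of rollup cost" point.
# Figures are assumptions, not measured Conduit data.

monthly_cost_usd = 50_000          # assumed total monthly rollup opex
da_share = 0.95                    # share of cost that is DA on Ethereum L1
alt_da_discount = 0.99             # assume alt-DA is ~100x cheaper

da_cost = monthly_cost_usd * da_share
other_cost = monthly_cost_usd - da_cost
with_alt_da = other_cost + da_cost * (1 - alt_da_discount)

print(f"on Ethereum DA: ${monthly_cost_usd:,.0f}/mo")
print(f"with alt-DA:    ${with_alt_da:,.0f}/mo "
      f"({monthly_cost_usd / with_alt_da:.0f}x cheaper)")
```

Under these assumptions the total bill drops roughly 17x, because once DA is nearly free, only the remaining 5% of infrastructure cost is left to pay.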
Can you, Andrew, give us some perspective on the lowering cost of running a roll-up over time? When you started Conduit, there was a lot of juice left to squeeze. You've squeezed some juice, and now roll-ups are easier and cheaper to deploy. There's more juice left to squeeze, I'm sure. Give us a sense of where things started, where they've been, where they are recently, and where they're going, in terms of the trend of lowering roll-up costs.
Yeah, for sure. So I think maybe a year ago, when the company was starting, you know, there was no documentation, the code base was changing in very significant ways, and not all the software was production ready. And so starting in that environment was very challenging. But I think it's allowed us to get expertise in the stacks that we're using and really build production-grade, production-ready deployments. And ultimately, I think the most important thing is understanding the pain points, understanding the vulnerabilities of these stacks, like where things might go wrong, and then allowing us internally
to build, like, solutions for that. I think one good example of that is something that we
built internally called Conduit Elector, which allows for high-availability sequencing of,
you know, OP Stack chains as well as Arbitrum Orbit chains. And, you know, prior to this,
in order to upgrade a chain, you'd have to have like downtime, right? You'd have to bring down the
sequencer, upgrade the code, let it sync up again, bring it back up. And, you know,
downtime means that people can't use the chain, you're losing money, you're losing revenue.
Conduit Elector actually allows you to roll out those upgrades with zero downtime, which means
that users don't even notice that an upgrade happened.
That's also important for things like if a hardware failure happens or something like that,
we automatically fail over. Versus, you know, I think for a lot of OP Stack chains out there
today, it's a manual failover, which again means that when it goes down, somebody needs to get
paged, they need to wake up, they need to figure out what the issue is, and then they need to
actually fail it over. And so cutting our teeth on those early problems allowed us to
build a best-in-class solution that is, one, much better for the reliability
of the chain, but also frankly gives us a lot of insight into the future of things like shared sequencing.
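As a rough illustration of the active/standby idea described here. This is not Conduit Elector's actual design, which hasn't been detailed publicly; it is a toy sketch of automatic failover under those stated assumptions.

```python
# Toy sketch of high-availability sequencing: an active/standby pair
# where an unhealthy leader is demoted and a healthy standby promoted
# automatically, with no manual page. Not Conduit Elector's real design.

class Sequencer:
    def __init__(self, name: str):
        self.name = name
        self.healthy = True
        self.active = False

def elect(sequencers: list[Sequencer]) -> Sequencer:
    """Demote an unhealthy leader, then promote the first healthy standby."""
    for s in sequencers:
        if s.active and not s.healthy:
            s.active = False            # e.g. crashed, or draining for an upgrade
    if not any(s.active for s in sequencers):
        for s in sequencers:
            if s.healthy:
                s.active = True         # standby takes over automatically
                return s
    return next(s for s in sequencers if s.active)

primary, standby = Sequencer("seq-a"), Sequencer("seq-b")
primary.active = True
primary.healthy = False                 # simulate a hardware failure
print("leader:", elect([primary, standby]).name)   # -> seq-b
```

The same mechanism covers zero-downtime upgrades: mark the leader as draining, let the standby (running the new version) win the election, then upgrade the old leader.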
Yeah, one of the interesting things I always think about with Conduit and these other
rollup-as-a-service providers is, like I alluded to earlier, it's just like an epicenter of many
chains. All the chains are proximate to each other inside of a RaaS. Could you
just share with us, what's the RaaS business model, the archetypal business model for a RaaS?
Like, how does a RaaS make money? And then what can a RaaS do for the webbing between the chains?
Yeah, for sure. So I guess in terms of business model, there are typically two components to it.
There are the infrastructure and hosting fees, which are, you know, you're running a bunch of stuff for the chain.
You're running the sequencers. You're running any additional components to sync with layer one.
You're running the, you know, RPC. Hopefully it's auto-scaling,
versus kind of a fixed set of nodes.
It's actually kind of a non-trivial thing to solve.
And then you have your metrics.
You have your alerting.
So you have all the stuff to make sure that it's production ready
and got to be ready for mainnet.
And is this all just like SaaS stuff?
This is just SaaS model.
So that's kind of just like a SaaS model, right?
And like I think over time costs will be brought down.
And like I think crypto software hasn't been engineered in a way that makes it
easy to do multi-tenancy, for example.
So if you look at a lot of traditional Web2 companies,
whether it's PlanetScale or, like, you pick any SaaS,
they've kind of built multi-tenancy into the model,
which means that you get to share a bunch of customers
across, like, one kind of hardware stack.
Today, everything is pretty distinct.
And what that means is, like, everybody gets their own dedicated capacity.
And so, like, that's great for, like, stability and uptime.
It does mean that for kind of the longer tail,
it's just, like, a bit more expensive to run.
And over time, like, internally, we're working on solutions here
that will, again, bring the price down and allow you to scale with your volume.
And so today, the RPC is somewhat that. But you can imagine for sequencing, right, with the main
bottleneck being, you know, processing these transactions and access to state, you can elastically
scale that up to absorb a burst of transactions and then scale it back down.
And so just getting the kind of price to performance ratio automatically correct over time is
definitely something that we're keen on. The other aspect of the business model is sequencer fees.
So as you know, sequencers sell L2 gas and buy L1 gas, right? And so that diff is the
sequencer's kind of net revenue. And ultimately, most of that goes to the customer.
It's theirs, right? It's kind of their revenue model. And then typically RaaS providers
take a percentage of this, and then rollup frameworks, for example, Optimism and Arbitrum, or kind
of name your rollup here, may also take a percentage of that as well.
A percentage of that as well. And is that the fee, the 15% Optimism fee that Base pays to the
Optimism Collective? Is that what you're talking about? That's right. So for the Superchain,
it's 15%. And then I believe for Arbitrum, depending on
the license, I think it's around 10%.
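To put rough numbers on the flow just described: the 15% Superchain share is stated in the conversation, while the gas figures and the 5% RaaS cut below are invented inputs, and the real split mechanics may differ chain by chain.

```python
# Rough sketch of the sequencer fee flow described here. The 15% framework
# share comes from the episode; everything else is a made-up input.

l2_gas_revenue_eth = 10.0     # assumed: ETH collected selling L2 gas
l1_gas_cost_eth = 6.0         # assumed: ETH spent buying L1 gas (posting data)

net = l2_gas_revenue_eth - l1_gas_cost_eth   # the sequencer's net revenue

framework_cut = net * 0.15    # e.g. an OP Stack chain's Superchain share
raas_cut = net * 0.05         # hypothetical RaaS percentage
customer = net - framework_cut - raas_cut    # most of it goes to the chain

print(f"net {net:.2f} ETH -> framework {framework_cut:.2f}, "
      f"RaaS {raas_cut:.2f}, customer {customer:.2f}")
```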
Okay. Okay. Understood. These models are obviously
kind of evolving over time. Sure. Certainly.
Yeah. Well, crypto's evolving all the time, isn't it?
So the RaaS business model is just volume. It's just volume, right? It's just total
transactions. That's really where it comes from?
Yeah, a mix of transactions. I mean, like, it really depends on the network.
For example, for Zora Network, we're probably around 50-50 in terms of hosting fees and
sequencer revenue. But also, because they have so many
integrations, we actually just have a ton of load on the RPC endpoint that we don't monetize
today. And so in the future, you can imagine, again, us bringing costs down significantly
and then being able to charge more granularly kind of with the RPC. Okay. So imagine that there's
a chain that's just massive, tons of volume. It's a single chain that's doing a ton of volume.
Your share of the revenue of that probably decreases as a percentage, but it increases in nominal
terms, right? And so maybe on some of these smaller, less used, less popular chains, you guys are taking
more of a 50-50 split, but then as these chains get larger, your percentage
of the revenue comes down on a per-chain basis, while the total US dollar revenue goes up.
Is that right? I think that's one way that it could work. I think ultimately we're still very
early in these models and whether or not sequencer revenue is the primary revenue driver
for the customer. One example, and I keep bringing it up, is Zora Network, right?
They have a $1 mint fee.
And so if you just look at the mint fee and how many mints they have, that's the primary revenue driver.
And so sequencer fees are kind of less important for that.
Right.
And so ultimately, it's going to be case by case based on the customer.
And, you know, what we wanted is just kind of align incentives and make sure that, you know, we're getting value when we're providing value for the customer.
Case by case based on the customer, the customer being the chain.
That sounds hard to scale.
If we're going to have millions and millions of rollups, which is what we all kind of want, at least thousands, starting with thousands,
how do you scale out a business model in which the economics of each chain have to be negotiated?
Definitely. So that's where I think today our ethos is like do things that don't scale.
And in the future we'll have to figure out a way to come back to this and figure out something a bit more scalable.
I think our focus today is on just serving as many chains as possible and getting our infra out there.
And so we're less concerned today about particular splits or whatever,
and we're more interested in growing the market, because I think a year ago, people didn't know it existed.
And now it's taking off in an exciting way, and we just want to 10x or 100x that over the next year.
And David, one way that I've been thinking about Conduit, at least from the very beginning, is that it's almost like the Switzerland of infrastructure: just deploy things.
It's there for you. We will colocate services happily.
We will offer every service that is not a commodity for basically cost price because we want to go where the value is.
and to ultimately, as Andrew said, grow the pie.
Whereas right now, if you look at the infra market,
it's a bit embarrassing how much people charge for services
that are embarrassingly, you know, cheap to run
or, you know, not that sophisticated to operate.
So with conduit, we hope to kind of commoditize all of these areas
and to go to where the actual value is.
Andrew, one more question about the economics of a RaaS.
I've always understood that RaaS providers are kind of in a tug-of-war
with the roll-up frameworks.
And so the roll-up frameworks, the OP Stacks, the Arbitrum Orbits, they want their fees.
RaaS providers want their fees.
And there's kind of a tug-of-war over who gets the fees.
How do you think about the relationship between RaaS providers and roll-up frameworks?
I think, like, maybe in the long run, like, if you really analyze it, I think you're right,
there probably is some sort of, like, competitive aspect there in the future.
I think, like, given the state of the market today, it's so early, it's so small, and there's so much room to grow
that I just don't think it really comes up.
And ultimately the way that I think about it is, we want to work very closely with all the different rollup frameworks and enable the distribution of their software.
And I think it's a very different skill doing that than core protocol development of the actual rollup.
And so I think there's a ton of synergies and complementary skills here.
And we're very excited to package up what we've done with the OP Stack and the Orbit stack and bring that to new frameworks and new ecosystems.
And I think ultimately, again, our attitude today is, you know, let's grow the pie together versus,
like fight over the small scraps that exist today. Beautiful. Georgios, I want to turn the conversation
to L2 security. You brought up multi-client fraud proofs earlier, which I believe are the same thing
as multi-provers, where the provers are in the ZK sense. Correct me if I'm wrong. I want to start there.
There's a lot of like focus on roll-ups being centralized because they don't have shared sequencing,
but I actually don't think that that's right. One of the benefits I've always thought is like roll-ups
actually get to have centralized computation because of fraud proofs, because of ZK proofs.
Maybe you can unpack that a little bit more and explain it better than I can, and then we'll get into the conversation of why multi-provers and multi-client fraud proofs are important.
But can you just talk about the role of shared sequencing when it comes to decentralization and how important is that?
Of course. So a roll-up is an extension of the mainnet block. And a sequencer is just a privileged party that we trust for some things: to order transactions and to give us some batching benefits
before we extend that mainnet block.
Now, right now, we only have one sequencer,
and that one sequencer, maybe they can misbehave.
They can misbehave by doing anything they want.
They could introduce an invalid state transition,
they could omit a transaction, they could do a lot of things.
To combat that, the layer two system design
basically introduces a proof or a game or whatever you want to call it
that the sequencer needs to follow
in order to make sure that even though it's one person,
that person cannot screw us, which means that they cannot take away funds from us and they cannot censor us.
And for that, there are two mechanisms that roll-ups employ.
One is the fault proof or the validity proof, which ensures that the sequencer cannot steal funds from us.
Because if they do, in the optimistic case somebody will come in and challenge them, and that will abort the invalid state transition.
Or, in the validity-proof world, the sequencer is responsible for also posting some cryptographic
information that says, hey, what I did is actually correct.
Now, for the censorship use case, not for the, you know, soundness of my funds, but for the
censorship use case, every rollup protocol also comes with a force inclusion function, which
means that if the sequencer dies, stops responding, you know, goes on vacation, whatever,
the user can go to the layer one smart contract, and ideally in the future their wallet does this for them.
Again, this is something that's also in an embarrassing state today.
The wallet should basically choose: hey, the sequencer is down.
Instead of sending the transaction to the sequencer, actually send it to the L1, and that will keep things going.
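A toy sketch of the wallet-side fallback being described. All of the names here are placeholders, not any real wallet or bridge API.

```python
# Hedged sketch: try the sequencer first, and if it's unresponsive,
# fall back to the rollup's force-inclusion function on the L1 contract.
# Class and method names are invented for illustration.

class SequencerDown(Exception):
    pass

class Sequencer:
    def __init__(self, online: bool):
        self.online = online
    def submit(self, tx: str) -> str:
        if not self.online:
            raise SequencerDown
        return f"included by sequencer: {tx}"

class L1Inbox:
    def force_include(self, tx: str) -> str:
        # Slower and pays L1 gas, but the sequencer cannot censor it.
        return f"force-included via L1: {tx}"

def send(tx: str, seq: Sequencer, inbox: L1Inbox) -> str:
    try:
        return seq.submit(tx)           # the normal, cheap path
    except SequencerDown:
        return inbox.force_include(tx)  # the escape hatch

print(send("transfer 1 ETH", Sequencer(online=False), L1Inbox()))
```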
Now, we're not in this world.
So because we're not in this world, we have multisigs on all sequencer upgrades and we know who the sequencer is.
So until we have the ability for anyone to be able to submit a fault proof,
and until we have the ability for anyone to spin up a sequencer to continue the chain if the previous sequencer dies, we're going to stay with multisigs.
And it's going to remain centralized.
Now, how do we get to this world where anyone can propose fault proofs or anyone can be a sequencer?
That is the world of multi-provers, or whatever you want to call it: a multisig of fault proofs or a multisig of validity proofs.
And the idea is that because the fault proofs and the validity proofs are novel technologies,
instead of having one of them decide the outcome of a dispute or of a certain operation,
why not have, let's say, two or three or more?
Why don't we have a quorum, let's say a two of three quorum?
Many configurations will exist.
But for example, in the Optimism context, we're going to have one fault proof that runs on MIPS, built on Geth.
We will also have another fault proof that again runs on MIPS but is built on Reth.
And we'll have a third proof that is built on RISC Zero, and this is actually a validity proof.
And we will only allow a withdrawal to go through if two of three of them agree on the outcome.
Now why would we do that?
Because it gives us more redundancy.
And with more redundancy, we gain more confidence in the security of the system in aggregate.
And by doing that, we can finally reach the so-called stage two decentralization
in roll-ups, which will let us remove the training wheels. I think this will happen in 2024.
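The quorum logic itself is simple. Here is a toy sketch of the 2-of-3 arrangement just described; the three proof-system names match the conversation, but the code is illustrative only, not how any on-chain dispute contract is actually written.

```python
# Toy sketch of a 2-of-3 multi-prover quorum: a withdrawal only goes
# through if at least two of three independent proof systems agree.

def quorum_approves(verdicts: dict[str, bool], threshold: int = 2) -> bool:
    """True if at least `threshold` proof systems accept the state root."""
    return sum(verdicts.values()) >= threshold

verdicts = {
    "mips-fault-proof-on-geth": True,
    "mips-fault-proof-on-reth": True,
    "risc-zero-validity-proof": False,  # one buggy prover can't block or steal alone
}
print("withdrawal allowed:", quorum_approves(verdicts))   # -> True
```

The redundancy argument falls out directly: a single bug in one prover can neither steal funds (it is outvoted) nor halt withdrawals (the other two still form a quorum).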
Oh, wow, this year. This has been like one of the big things that's been holding layer two
roll-ups back in terms of just their full decentralization and trustlessness, right? And the way that
I kind of think about this is like, it's just the multi-client design for Ethereum that has
protected Ethereum so many times, now doing the same things for layer twos, but inside of the
fraud-proof context. And so if there's one bug in one fraud-proof or one bug in an
optimism client, all of a sudden that's catastrophic. But if we have multiple client fraud proofs,
then all of a sudden we have that redundancy, 99% uptime and full decentralization.
I've always thought that this is like the most complicated thing about building decentralization
into layer twos. You said that you think we're getting these, like, you know,
available for the main roll-up standards in 2024. What makes you so confident that we're getting them
in this year? That would be my prediction, in particular because we have one MIPS fault proof already,
on OP Geth, that Optimism has developed and has on testnet.
And there's work in progress on the Reth side to incorporate these as part of the OP
Reth project.
So, you know, one might ask, why are we doing the OP Reth project?
You know, not just because we want the high performance, but because we thought that it
would heavily accelerate the stage two decentralization for roll-ups.
I think this is also the first time that we talk about this in public.
But, yeah, in general, we're very excited about using the Reth, let's say, SDK,
the Reth software stack, as a general accelerant for the entire EVM ecosystem, whether it is
performance, whether it is indexers, RPCs, or the decentralization of layer two.
And maybe just to tie this back to an earlier conversation, if you are an OP Stack chain
and these multi-client fraud proofs get delivered, then everyone gets to upgrade all at once, because
we're all on the same standard if you're an OP Stack chain. It's one of the benefits of
being with the herd. You're being aggregated. Yeah. Exactly.
Cool. Okay, so that's the big thing about layer two security. What other conversations are relevant, would you say, Georgios, in the layer two security conversation? Is that kind of just like the big one?
I think the modification story will be interesting. Andrew touched on it a bit earlier. You know, when you give people the ability to experiment, they will do crazy things, both in the good and in the bad sense.
So I think that people will experiment with things that are, in general, interesting ideas. For example, a native-yield rebasing coin, or fee sharing, or whatever.
But who knows if people will be able to implement these soundly.
You know, will there be a library of plugins, let's say, that people are excited to interact with?
And these are the whitelisted ones, or, you know, the Conduit-app-store-offered
ones? I honestly do not know.
People will probably go through a big pirate phase where people will change the gas token they pay in.
People will change the runtime.
You know, there will be L2s that are not EVM L2s.
We're already seeing that with Eclipse, we're going to see it with
the Move layer 2s, we're going to see it with every runtime that exists.
A runtime. Is that just another word for a virtual machine?
Yes, yes, yes, sorry. So the runtime and the virtual machine are used equivalently here.
So it could be EVM, SVM, MoveVM, you know, Brainfuck for all I know, like whatever you want.
It doesn't matter. So there's many, many choices to experiment on the runtime.
On the transaction format, people will come up with novel account abstraction techniques,
people will come up with privacy systems, all sorts of things.
I think we're entering an era, David, where nodes have been thought to be like a hard thing
to ship or to work on or whatever, you know, because they require people to learn about databases,
about peer-to-peer, about each runtime, they require a lot, a lot of knowledge.
I think we're entering an era where
node modifications become so, so, so easy that it will open up an era of experimentation in blockchain that we didn't see before, which is very exciting for everyone working on infrastructure.
I think the state of layer two development and progress over the last few years has really been about minimizing the diff between Ethereum and layer twos.
The fight for Ethereum compatibility evolved into the fight for Ethereum equivalence.
And as a result, we have a lot of very popular layer twos that are just carbon copies of Ethereum, but faster.
And I think maybe, Georgios, what you're saying is that not only have we kind of neglected non-Ethereum versions of roll-ups that can settle on Ethereum, but also that building these systems has become technically easier than ever.
Is that kind of the groundwork that you're laying?
In general, my prediction would be that 2024 is the year where, you know, we end the year, basically,
and performance stops being a differentiator.
Everybody will have figured out high performance.
We will be in a world where
parallel EVMs that we've seen discussed a lot,
new databases, all of that stuff,
finally now that we've learned how to modify nodes,
everybody will figure it out in some way.
Not everybody will be in production,
few will be in production, the best teams only.
But over the years, what's happening
is that the best technology that was considered a moat
is finally starting to be, for lack of a better word,
democratized and accessed by everyone. And when these performance things stop being the
differentiators that people go for, we're entering the next era of innovation, which is, you know,
the UX, the account abstractions, the signing algorithms, the gas sponsorships, everything that has
been kind of in toy mode in the last year. It's grown a lot, you know, but account abstraction and the like are still
small compared to, let's say, where MetaMask is in terms of adoption.
With all of these, every chain will start to experiment so much with the features they're able to offer
that somebody will hit gold on some unique feature that enables some unique app.
So that's why the layer two vision is also very exciting, David,
because it is what allows you to start deploying chains on Conduit or anywhere else.
These chains will be modified ad nauseam, and we will see a lot of nonsense in the process,
a lot, a lot of nonsense,
but somebody will hit gold
and that somebody will make something very valuable.
Andrew, it sounds like Georgios is calling for
like a golden age of layer two experimentation,
which I think if I were in your shoes,
I'd be maybe a little bit intimidated
because I think Conduit is all about,
like, how can we replicate the same stuff
over and over and over again.
But when there's a bunch of new stuff,
all of a sudden there's new things to replicate.
So if there's going to be like a bunch of new pieces of software
that you have to support,
how do you think about this?
If this is what we're going into in 2024, and all of a sudden it's not just the OP Stack and Arbitrum Orbit,
but it's like the third thing, and the fourth thing, and the fifth thing, and then the sixth thing just
around the corner. How do you think about this?
Yeah, for sure. I think that's a great question. I think it ultimately depends on the form factor
of the customization. And it is something that we think about here at Conduit, right? It's like,
what rollup stacks do we support? Today we support Orbit and Optimism. What is the next one?
And then in terms of customization, how does that actually make its way into the stack?
For example, if it's just an execution client change on the OP Stack or the Arbitrum Orbit side,
that is actually pretty workable, and we support that today.
And you can actually have minimal modifications to our infrastructure.
Everything just kind of works.
And then we just slot in your custom execution client.
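For illustration, "slotting in a custom execution client" might look something like this at the configuration level. This is an invented schema, not Conduit's actual deployment format, and the image name is a placeholder.

```python
# Hypothetical sketch of a per-chain config where only the execution
# client is swapped out. Field names are invented for illustration.

chain_config = {
    "framework": "op-stack",            # or "arbitrum-orbit"
    "data_availability": "celestia",    # or "ethereum"
    "execution_client": {
        "image": "ghcr.io/example/op-reth:latest",  # placeholder image
        "engine_api_port": 8551,
    },
    # everything else (batcher, proposer, RPC, monitoring) stays stock
}

print(chain_config["execution_client"]["image"])
```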
I think the more custom you go in terms of the entirety of the stack, that is essentially
writing a new rollup framework.
Right.
So if you look at StarkWare, if you look at any of these other things, I guess the
most similar thing would be a Cosmos app chain, right, where you take this
base SDK but then customize a bunch of stuff. I think ultimately time will tell
how important that model is and whether or not you need to reinvent the wheel or you just want
to reuse kind of the existing pieces of the stack and just customize what's important to you.
And so, you know, jury's still out. I think we want to support everything that we can.
And ultimately it's kind of our challenge to figure out the kind of best way to make that scalable.
Andrew, Georgios, I've learned a ton in this episode. I'm going to have to go back and
re-listen to this to make sure I got everything. We're
entering what seems to be a bull market.
Bull markets get very, very busy.
They can also be very distracting.
But Andrew, when we are done with this podcast episode
and you go back to work at Conduit,
what are you going to focus on?
What is the nearest term thing that you are focusing on right now?
Yeah, I mean, I'd say that given gas fees
kind of spiking in December and, you know,
Celestia becoming production ready and Conduit supporting it,
our biggest task is really migrating a bunch of chains to Celestia.
I think our next goal after that is making, you know,
blockchains and rollups as accessible
as using AWS. And so today, you know, we offer a self-serve kind of testnet API.
I think our question is, you know, what happens when we make that accessible and
permissionless for anybody to deploy to mainnet? And what kind of Cambrian explosion does that enable?
Georgios, same question to you. So at the top level, I'm excited to continue working with great people
like Andrew and others who are working to push on the limits of what people think is hard and
making it really, really easy. Personally, David, I just wrote a
goal doc called What is 10x Reth in 2024. So I hope to join our team in pushing the limits
of performance and commoditizing 10K TPS in 2024. I'm excited for the Reth SDK to be used to build
new roll-ups, to build new experiments, and to, in the end, push forward the layer two industry.
And I'm also finally excited about the Reth core, let's say, roadmap in 2024:
the upcoming Cancun EIP-4844 release, and finally, by end of March, to be 1.0 production ready,
ready to support Ethereum Layer 1, audited by Sigma Prime, you know, going ham in the year and
pushing the frontier really hard. Going ham in the year, pushing the frontier. I love that.
Georgios, why does the world need Reth? What is Reth and why does the world need it?
Reth is an Ethereum node, comparable to Geth, Erigon, Nethermind. It is written in
Rust, and it's a modular and blazing-fast node that is also contributor-friendly. What this means
is that while it achieves best-in-class performance on most benchmarks that matter, it is also
aimed at developers. It has a developer community and is built to be used first and foremost as a
library. And we see Reth almost as a testbed for building EVM-native infrastructure.
And we want to give the world access to these high-quality
tools that we've been using over the years, with the node as the first demonstration of how
good these tools can be, and hopefully with other infrastructure built on top of it, like
OP Reth, like the OP fault proof, like a bunch of infrastructure that Conduit will be running
in the future, and other portfolio companies.
We really want to push the frontier of what is hard on infrastructure that should be really easy,
and we're going to commoditize with it everything that we think is currently hard or
perceived as hard. Well, thank you, Georgios, for doing hero's work. Speeding up Ethereum and helping
with its decentralization, both on the layer one and on the layer two, is what I would call
very noble work. So thank you both for coming on Bankless and sharing your perspectives with us.
What you guys are building is making me very optimistic for the world of layer twos in 2024.
Big year ahead, David. Thank you so much. Thanks for having us. Bankless Nation, you know the deal.
Crypto is risky. Layer twos are risky. Hopefully they're becoming less risky. That's the plan.
At least you can lose what you put in. But we're headed west. This is the
frontier. It's not for everyone, but we are glad you are with us on the bankless journey.
Thanks a lot.
