Epicenter - Learn about Crypto, Blockchain, Ethereum, Bitcoin and Distributed Technologies - Polymer: A New Era for Interoperability...on Ethereum! - Bo Du
Episode Date: September 2, 2024

The future is multi-chain, scalable and modular. However, while Cosmos' IBC set the standard for interoperability, Ethereum's L2 shift revealed a huge problem of liquidity fragmentation across the many rollups fighting for market share. Polymer aims to bridge the two ecosystems and bring the best of both worlds: Ethereum's native liquidity and Cosmos' interoperability, through a modular framework using the OP Stack and IBC.

Topics covered in this episode:
- Bo's background and the evolution of Polymer
- Scaling limits of L1s vs. L2s
- The 'endgame' for rollup frameworks
- IBC
- Interoperability & network topology: 70's/80's vs. blockchains
- Polymer Hub
- Monomer framework
- Pre-confirmation & finality trade-offs
- Cross-rollup interoperability
- Building appchains with Monomer
- Modularity

Episode links:
- Bo Du on Twitter
- Polymer Labs on Twitter

Sponsors:
- Gnosis: Gnosis builds decentralized infrastructure for the Ethereum ecosystem, since 2015. This year marks the launch of Gnosis Pay, the world's first Decentralized Payment Network. Get started today at gnosis.io
- Chorus One: Chorus One is one of the largest node operators worldwide, supporting more than 100,000 delegators across 45 networks. The recently launched OPUS allows staking up to 8,000 ETH in a single transaction. Enjoy the highest yields and institutional-grade security at chorus.one

This episode is hosted by Sebastien Couture.
Transcript
When we first started Polymer, the interoperability landscape was very different.
I think the thesis then was there's going to be potentially millions of L1s and millions of these app chains.
What we started to see was the landscape began to shift.
And you started to see this rise in the number of layer 2s.
And not just layer 2s.
Originally these layer 2s were just individual layer 2s, but they'd changed into layer 2 frameworks.
We retained what we called virtual IBC.
This enabled us to permissionlessly expand the IBC network.
But we shifted the underlying architecture from using CometBFT as a layer one to using the OP Stack for settlement and chain derivation from Ethereum.
What this allows is that for Ethereum rollups that implement the EIP-4788 standard, where there is some layer one information on that rollup that we can utilize, we can allow those rollups to communicate essentially as close to block time as possible.
Welcome to Epicenter, the show which talks about the technologies, projects, and people
driving decentralization and the blockchain revolution.
I'm Sebastien Couture, and today I'm speaking with Bo Du,
who's co-founder and CTO at Polymer.
They're building an interoperability hub for Ethereum.
Before I chat with Bo, here's some information about our sponsors this week.
Chorus One is one of the biggest node operators globally
and helps you stake your tokens on 45-plus networks like Ethereum, Cosmos,
Celestia and dYdX.
More than 100,000 delegators stake with Chorus One,
including institutions like BitGo and Ledger.
Staking with Chorus One not only gets you the highest yields,
but also the most robust security practices and infrastructure
that are usually exclusive to institutions.
You can stake directly to Chorus One's public node from your wallet,
set up a white label node, or use the recently launched product, OPUS,
to stake up to 8,000 ETH in a single transaction.
You can even offer high-yield staking to your own customers
using their API. Your assets always remain in your custody, so you can have complete peace of mind.
Start staking today at chorus.one.
This episode is proudly brought to you by Gnosis, a collective dedicated to advancing a decentralized future.
Gnosis leads innovation with Circles, Gnosis Pay and Metri, reshaping open banking and money.
With Hashi and Gnosis VPN, they're building a more resilient, privacy-focused internet.
If you're looking for an L1 to launch your project, Gnosis Chain offers the same development environment as Ethereum with lower transaction fees.
It's supported by over 200,000 validators, making Gnosis Chain a reliable and credibly neutral foundation for your applications.
GnosisDAO drives Gnosis governance, where every voice matters.
Join the Gnosis community in the GnosisDAO forum today,
deploy on the EVM-compatible Gnosis Chain, or secure the network with just one GNO and affordable
hardware. Start your decentralization journey today at gnosis.io.
Hey, Bo, thanks for coming on this week.
Yeah, thanks for having me.
Yeah, so we were talking before the show.
Actually, you've been on my podcast, The Interop, a couple of times, but this is your first time
on Epicenter.
So, you know, we'll have to set aside all of those interesting conversations and, you know,
maybe start from a high level here.
But yeah, how's your summer going?
It's pretty good, gearing up for Mainnet soon.
We're working hard going to a lot of these different conferences and trying to talk about
what we're doing.
Yeah, it's exciting.
And, you know, it's been a long time coming.
You guys have been working on Polymer for some time.
In fact, I first heard about you guys two years ago.
Just disclaimer here.
I'm an angel investor in Polymer.
But with that out of the way, I think we can have a conversation about the technology, about the Polymer Hub and the SDK that you guys are building.
And also, you know, your fervent, I guess, defense and love of IBC as, like, the standard for interoperability in crypto, which I tend to agree with.
And I think you're sort of preaching to the choir here on Epicenter at least,
you know, as a lot of folks here are familiar with Cosmos and familiar with IBC.
So yeah, maybe a little bit of background on Polymer and like how you guys got here.
Yeah, absolutely.
I can do a quick, I guess, background on myself for the folks that are new,
that haven't heard me talk before.
My name is Bo, and I'm a technical co-founder at Polymer.
My personal background has mostly been in Web2. I worked at a bunch of different startups,
worked at Uber, worked at a small startup called Chronosphere, which
became quite large, switched over to working in DeFi for a period of time, and ultimately ended
up switching to interop.
I would say that when we first started Polymer, the interoperability landscape was very different.
I think the thesis then was there's going to be potentially millions of L1s and millions of these
app chains, maybe a few large L1s and, I guess, a long tail of these L1 app chains.
And that world has changed a bit.
So our initial product was focused on that.
We were building an L1.
We wanted to connect all these different L1s, starting with different ecosystems like Ethereum.
We were investigating ZK.
We're investigating all of these different technologies because we wanted to make it cost efficient
to connect securely from a cosmos chain to Ethereum.
But as we were working on this problem, and as we had developed some of these proof-of-concept technologies to enable
this, what we started to see was the landscape began to shift. So instead of this L1
thesis playing out, you kind of saw this rise and fall of interest in
Cosmos. I think interest kind of rose with Terra, and then after Terra and FTX and the
bear market kicked in, it started to die down a little bit. And you started to see this rise in
the number of layer twos. And not just layer twos: originally these layer twos were just individual
layer twos, but they changed into layer two frameworks. And this concept of rollup as a server
became really popular. And I've seen a number of folks tweet that and mention that in different
talks. And it kind of makes sense. When you have a layer one blockchain, you're paying for
consensus. You're paying for these things, which comes at a cost in order to have decentralized trust.
But with these like layer two systems, you can essentially run it in a fairly centralized fashion
to get some of those blockchain properties from the layer one.
And that seems like a more pragmatic scaling model in terms of cost and efficiency.
So our thesis changed from connecting millions of layer ones to wanting to connect millions of layer twos.
And because the thesis changed, we also had to change our focus.
We realized that building this layer one solution wouldn't be effective
in connecting the different layer twos, the technical nuance of which we can get into a little bit later in this call.
But I would say that it revolves around the latency, cost, safety, and all these other properties.
For us to make the best solution for layer twos, we realized that we had to pivot to being a layer two ourselves.
So we architected the polymer solution.
We did not change the application layer protocol that we had built.
We retained what we called virtual IBC.
This enabled us to permissionlessly expand the IBC network.
But we shifted the underlying architecture from using CometBFT as a layer one
to using the OP Stack for settlement and chain derivation from Ethereum.
And that new solution is basically what we've been working on for over a year now
and what we'll be launching mainnet with very soon.
Yeah, very cool.
Yeah, I think like this idea that we would have thousands and perhaps like millions of app chains hasn't really changed.
It's just that what has changed is the level of sovereignty over settlement and consensus.
You know, I think logically app chains have now kind of moved into this L2 territory,
which allows them to run with sufficient guarantees, for most use cases I think,
on censorship resistance and throughput, while maintaining reasonable cost.
Because as we've seen throughout this cycle, maintaining a chain in the traditional
kind of Cosmos app chain sense, running your own validator set,
there's just like a lot of inefficiencies there, starting with the fact
that most Cosmos app chains have the same validators, et cetera, at least there's a huge overlap.
Yeah.
But this new model kind of makes sense.
You know, you guys published this article on the scaling limits of L1s and L2s,
which I thought was really interesting.
And it seems like with L1s, there's kind of a hard scaling limit that the space has, you know,
established and is aware of.
But then with L2s, it's a little bit more blurry, or at least the scaling limits are not fully tested.
Can you talk a little bit about what those scale limits are and what we know about them?
Yeah. Before we talk about it, I wanted to mention one thing because you're on the topic of this app chain thesis moving from a bunch of layer ones to layer twos.
It's like Ethereum is speed running the Cosmos playbook, both from the tech perspective and also from the problems perspective.
I think the arguments being had in Ethereum are arguments that happen in Cosmos, or have been happening in Cosmos for many years now, which is kind of funny.
So, more on the scaling limits: maybe the way to frame this is, imagine a chart going from low scale, so very low TPS, very high latency, to really high scale, which is really high TPS, really low latency.
you can imagine that there's a number of bottlenecks that you're going to hit as you go from one technology to the next.
And in the article I created this diagram where I try to help readers visualize that you end up hitting the first bottleneck, which is consensus.
And every consensus algorithm requires a certain number of rounds of communication.
It requires data, transaction data, to be gossiped over some peer-to-peer network.
These are just kind of algorithmic and data-throughput things that need to happen.
And above the consensus layer or the consensus algorithm bottleneck layer, you have this
network bandwidth bottleneck, meaning that if you want to account for home stakers, home
internet connections are only so fast.
For example, it's very hard to get a gigabit internet connection
in most parts of the world.
I think many folks in the West are fortunate to be able to have fiber optic cable into
their neighborhood and can get some of these connection speeds.
But even then, the real connectivity speeds are far less than one gigabit.
And if you look at some chains like Solana, where they're requiring 10 to 25
gigabit network links, it doesn't work with home staking.
You end up having to put your software into data centers, because data centers can offer anywhere
between 10 gigabit and 100 gigabit or more.
And not only do you put them in the same data center,
you're actually, in some cases,
pushed to put it in the same cloud provider,
because a lot of cloud providers maintain high throughput
between their data centers,
but if you want to go across different cloud provider data centers,
you may not have the network bandwidth that you need.
So there's this network bandwidth limit
that kind of like presents itself as a ceiling
for how fast or how scalable layer one can be.
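To make that bandwidth ceiling concrete, here is a rough back-of-the-envelope sketch. The 250-byte average transaction size and the 4x gossip amplification factor are illustrative assumptions, not figures from the episode:

```go
// Estimating the raw gossip bandwidth an L1 validator needs at a given TPS.
// Assumed: 250-byte transactions, each seen ~4x due to gossip redundancy.
package main

import "fmt"

func main() {
	const (
		txSizeBytes   = 250.0 // assumed average transaction size
		amplification = 4.0   // assumed gossip redundancy factor
	)
	for _, tps := range []float64{1_000, 10_000, 100_000, 1_000_000} {
		gbps := tps * txSizeBytes * 8 * amplification / 1e9
		fmt.Printf("%9.0f TPS -> %6.2f Gbps\n", tps, gbps)
	}
	// 1,000,000 TPS works out to ~8 Gbps of transaction gossip alone,
	// before consensus votes or state sync: beyond typical home links.
}
```

Even under these generous assumptions, high-TPS targets land well above what home-staker connections can sustain, which is the data-center pressure described above.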
And then if you step beyond that, you're in this layer two territory where the layer two doesn't necessarily need a consensus algorithm.
It isn't necessarily network bandwidth bottlenecked, because you could run a single proposer, single sequencer, and just run extremely high throughput through it.
And there's a number of teams exploring this design space, MegaETH being one of them.
And this now becomes very interesting, because this is kind of like the wild west of blockchain
scaling. Where, like, for the past few years, people have been working on L1 scaling and have really
kind of pushed the limits of what we can do there, like Solana, and Monad is doing that as well. And now you're
seeing the layer twos, you know, push even further. So the ceiling's there; maybe in a few years'
time, one million TPS on a layer two could be the new normal. Yeah, I mean, I guess, like,
from the perspective of experimentation, this is great, right? Because we're getting lots of different
teams working on different ways to scale like L1s.
I think that space has been explored, as you said. And now with L2s, you know, in the end,
when I hear of a team building another framework to build
rollups, I think, oh yeah, yet another rollup framework. And it feels like right
now, at least in this cycle, we're still in the infrastructure phase, or whatever you want to call
that, but there's not that many applications and end-user applications being built. It's a lot
of infrastructure. What's the endgame here for all of these rollup frameworks? Yeah, I think
everyone's going to have a different opinion. And I think people over index on zero knowledge here.
I say that not because I'm not a fan of zero knowledge. In fact, I've thoroughly enjoyed
learning about zero knowledge tech and also working on zero knowledge tech as well.
But from the perspective of cost and scale, I think zero knowledge tech isn't a catch-all.
I don't think that everything in like the future state is all going to be ZK verified.
And I think that different teams will end up optimizing for different workloads.
And that's kind of why the app chain thesis exists.
To give you an analogy here, or maybe a story from my past, when I was working at Uber,
internally at Uber, you could use generic database systems.
So maybe the analogy here is like, use a generic database, use the EVM.
So EVM is like generic compute.
You can pretty much write whatever program that you want in it.
And then you run it in this runtime, which is itself emulated in an actual runtime on your hardware.
But from an efficiency perspective, these generic databases are not cost effective.
Or you need to add a ton of hardware to get the same amount of work done.
So when I was at Uber, what ended up happening is as Uber scaled as a business,
they started shifting away from, okay, we can't actually use these generic databases.
This is costing us way too much money.
We need to save on infrastructure costs.
And as Uber tried to scale on these infrastructure costs,
they started developing specialized databases.
They were like, okay, we have these different workloads.
Let's take each workload, make a database that's specifically optimized for that one workload.
Because you get these huge benefits, right?
Like you get benefits around how you encode the data because you can say that I only have a certain type of data that goes into this.
I'm going to create a compression algorithm custom just for this one workload.
I'm going to create a search algorithm custom just for this one workload, indexing algorithm custom.
And now you get into a world where you can save
90, 95% on your infrastructure costs by running these specialized databases for these specialized
workloads, which you couldn't get otherwise with this generic EVM or a generic regular
database. Right. So in this case, you're talking about, like, using Postgres versus
building something custom in-house, or using some framework that allows you to do your own
custom database. Yeah, yeah, exactly. Like at Uber, I believe they were using Cassandra
for their time series data at one point,
but it ended up using way too many resources,
so they switched over to building their own
custom time series database,
which I ended up working on, called M3DB.
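As a concrete illustration of what a workload-specific encoding buys you (an illustrative sketch, not M3DB code): time series timestamps arrive at nearly regular intervals, so a delta-of-delta encoding reduces them to mostly zeros, which a bit-packer can then store in about a bit each. A generic database cannot assume this about its inputs:

```go
// Delta-of-delta encoding: exploit the near-regular spacing of time
// series samples, something a general-purpose store cannot assume.
package main

import "fmt"

func deltaOfDelta(ts []int64) []int64 {
	out := make([]int64, 0, len(ts))
	var prev, prevDelta int64
	for i, t := range ts {
		if i == 0 {
			out = append(out, t) // first timestamp stored raw
		} else {
			delta := t - prev
			out = append(out, delta-prevDelta) // ~0 for regular samples
			prevDelta = delta
		}
		prev = t
	}
	return out
}

func main() {
	// Samples every 10s, with a little jitter around the fourth one.
	ts := []int64{1000, 1010, 1020, 1031, 1041, 1051}
	fmt.Println(deltaOfDelta(ts)) // [1000 10 0 1 -1 0]
}
```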
No, that's interesting. Okay.
I mean, I guess, yeah,
if you haven't worked in that environment, right,
or built a custom database...
like most developers will build their web app
using whatever, you know,
Postgres and Node.js and off-the-shelf components. When you get to a certain
scale, you know, you have to build your own highly scalable, highly cost-effective systems.
Where do you see the overlap between that and, you know, crypto as things are
currently going? I think the applications need to find the balance here.
So I'm still a proponent of: if you have an application, you have no users, and you want to get an MVP out, write it in Solidity, deploy on the EVM, deploy on an existing blockchain, and look for PMF first.
I think going out and building an app chain ahead of finding PMF is probably not the right approach, unless there are clear reasons for why that doesn't work.
So I really like the Story Protocol journey so far,
maybe not some of the social commentary going on on Twitter and some of the social tension there.
But from a technical perspective, it's very interesting.
So they had this journey from writing a bunch of smart contracts.
And then they were working on building on a generic OP Stack EVM L2.
They tried implementing a graph traversal algorithm in that EVM.
It turned out to be highly inefficient.
And they're like, we need a different environment for this,
something that's a little more efficient for what we want to do.
And they ultimately ended up arriving at the Cosmos SDK, and are building what they
call a purpose-built chain for their specific use case. Okay, yeah. No, that makes sense.
So let's maybe talk a little bit about IBC here because I think a lot of people, you know,
see Polymer as, like, building IBC for Ethereum,
or like implementing IBC on Ethereum.
If you've just sort of heard of Polymer,
I think that that's probably a very generic way
to think about what you guys are doing.
Of course, you know, it's much broader than that.
But like I do want to talk about IBC a little bit.
And I think there are still some misconceptions
about what IBC is and confusing the transport layer
and the messaging layer.
This is something we talked about
on the last podcast together earlier in the year.
So maybe just give a high level overview
of like what is IBC?
And, you know, how is it fundamentally different from some of the other interoperability products that exist that people are familiar with, like Hyperlane or Axelar or, you know, LayerZero, etc.?
So actually, since we've had that conversation, I'll say that there are a number of protocols that are kind of moving in the same direction in terms of splitting out the stack.
So you notice that, like, since then, LayerZero has kind of modularized their stack.
They're like, oh, we need to separate verification from execution, which is a valid design.
And that's the design that IBC took.
So if you think of IBC or interop as three layers, you have application, transport, and verification (or state). Most protocols are trying to move into this area where they are splitting those layers up.
They're kind of allowing people to supply different verification mechanisms and so on.
So now I would say the major differentiator is not in that like three layer design.
It's that IBC just offers a lot of features in that transport layer.
And a lot of these features are forward-looking.
So from a practical perspective, a lot of applications may not want or need these features at the moment,
but there will be a time when applications are
complex enough that they'll be able to leverage a lot of these features. And these features range
from everything messaging-related, meaning that I can send a packet, I can receive an acknowledgement
or a timeout, to things like authentication, to things like versioning and upgrades. Because even
when you think about one problem alone, like upgrades and versioning of an application across
many chains, it's actually kind of a tricky problem. It's not as simple as, I'm just going to easily
upgrade my application across all these chains, because each chain is a different place; you can't
atomically do that. So you have to handle all these edge cases. And one edge case, just to give you
an example, is the crossing hellos problem: what if you have an upgrade initiated from three
different chains all at once? How do you resolve that? What if you have messages that aren't flushed for
the old version, you know, in the middle of the upgrade? How do you handle flushing of these old
transactions? And, like, the IBC designers have had a lot of conversations around this. If you
could go look at the specs, there's a lot of debate on every single topic. And it's good to be
able to build off of that debate. Another interesting thing for us is that IBC also makes
coordination happen on-chain versus off-chain. This is something that we didn't get to talk about
in the last podcast, but it's something that is becoming very clear is a differentiator of
IBC from its competitors is that a lot of competitors, they'll use identifiers such as
chain ID for communicating between chains. And if you think about what a chain ID means,
you realize that that's not uniquely enforceable or programmatically enforceable on chain,
meaning that you have to have social consensus around chain IDs. Because any chain can join a network
and say, like, you know what, my chain ID is also Ethereum, or like, I'm also Ethereum. And from
IBC's perspective, you have this concept of connection IDs and channel IDs. Channel IDs uniquely
identify an application. Connection IDs uniquely identify a chain. And if you think about connection
IDs and channel IDs from the perspective of a map: let's say you have this universe of
chains, and every chain has a unique map that they own of the rest of the network.
That's what IBC gives each of these chains. It means each chain has sovereign ownership
over what they see, or their view, of the rest of the galaxy.
And this is also programmatically enforceable, so that all coordination happens on-chain,
versus having to have social consensus, having to have a centralized relayer or set that just knows
what to do.
This kind of prevents bad actors from misrepresenting themselves and so on.
So it's a very unique design and unique to IBC.
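A minimal sketch of that "local map" idea (simplified types of my own, not ibc-go or Polymer code): connection and channel IDs are assigned locally by each chain, so a counterparty is only ever whatever the local light client for that connection verifies it to be; it cannot simply declare "I am Ethereum":

```go
// Each chain keeps its own locally assigned, on-chain-enforced view of
// the network: client -> connection -> channel, rather than trusting a
// self-declared chain ID.
package main

import "fmt"

type Connection struct {
	ID       string // locally assigned, e.g. "connection-0"
	ClientID string // the light client that verifies the counterparty
}

type Channel struct {
	ID           string // locally assigned, e.g. "channel-5"
	ConnectionID string
	Port         string // identifies the application, like a TCP port
}

func main() {
	conn := Connection{ID: "connection-0", ClientID: "07-tendermint-0"}
	ch := Channel{ID: "channel-5", ConnectionID: conn.ID, Port: "transfer"}
	fmt.Printf("local view: %s -> %s -> port %q\n", ch.ID, conn.ID, ch.Port)
}
```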
How much of IBC's design do you think
is inspired by designs like TCP/IP and networking protocols?
I would say that it is very much inspired by TCP/IP.
I believe Chris Goes and a lot of the original IBC team used TCP/IP as a model
on which to base their protocol.
Like, the handshaking protocol is to prevent man-in-the-middle attacks.
There's authentication baked in,
kind of similar to a TCP connection, which is like an authenticated port,
essentially. And even the port concept is, I think, pulled from this concept of TCP ports.
So there's a lot of similarities there. The ports are a little bit different.
Like, a port in IBC land is kind of like a smart contract, and a port in TCP land is an actual
socket on your computer. Yeah, yeah.
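For reference, the channel handshake alluded to here is a four-step exchange (step names per the IBC spec, ICS 4); the sketch below just enumerates the flow, loosely analogous to TCP's SYN / SYN-ACK / ACK:

```go
// The IBC channel handshake: each step is verified by the receiving
// chain's light client of the other chain, which is what prevents
// man-in-the-middle setups.
package main

import "fmt"

type step struct {
	chain string // which chain executes the step
	name  string
}

func main() {
	handshake := []step{
		{"A", "ChanOpenInit"},    // A proposes a channel over a connection
		{"B", "ChanOpenTry"},     // B verifies A's INIT via its client of A
		{"A", "ChanOpenAck"},     // A verifies B's TRY via its client of B
		{"B", "ChanOpenConfirm"}, // B sees A's ACK; channel is OPEN on both ends
	}
	for _, s := range handshake {
		fmt.Printf("chain %s: %s\n", s.chain, s.name)
	}
}
```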
That's interesting. You gave a talk at Modular Summit, you know, describing the evolution of network protocols.
And I didn't realize that there were so many, you know, that kind of came up in the 70s and 80s and then died out, even as late as the 2000s.
And one interesting thing about that talk was your description of how network topology
manifests, where you have WANs, and then you have, you know, LANs, and then you might have
virtual LANs. How does that overlap with the landscape of interoperability protocols today? But also,
you know, as we have more and more L2s, but also sort of app-chain
L1s, how does that overlap with the landscape and the topology of applications and L1s in crypto?
Yeah. To give viewers some background here, in the early 70s and 80s, you had a lot of different intranets being built.
And what was happening was a lot of different companies had a lot of devices, Xerox is an example,
and they wanted to have those devices networked or connected to each other.
And let's just call these local area networks and these protocols, local area network protocols.
We're starting to see the same thing happen in the rollup landscape.
Each rollup ecosystem is essentially building their own, like, native interop solution,
and these solutions can be made analogous to local area network protocols from the early internet.
In the talk, I call these virtual local area networks.
If you think of maybe the AggLayer as one virtual local area network,
I define this as: the rollups in that network are either directly connected
or have one degree of separation between them.
So either they're connected to a central hub, or they're connected
directly to one another. And Optimism is doing this, Arbitrum is doing this. And now you start to see
all these different virtual local area networks spawning up, each kind of with their own
separate protocols. And this is very similar to what happened in the 70s and 80s. But you also started to
see the growth of ARPANET. So ARPANET adopted TCP/IP. And I kind of equate the early ARPANET
to the early Interchain.
And I have these like diagrams where I show early ARPANET
where you had a few colleges connected to one another
and an early screenshot of the Map of Zones,
where you have just a few of these Cosmos L1s connected to each other.
And now, as we're entering a growth phase
for the IBC network with the Interchain,
you start to see a lot more of these different ecosystem
or virtual local area network clusters
being connected to this wide area network,
or WAN.
And over time, this WAN (I use the term virtual WAN in the blockchain setting) starts to
consume these virtual local area networks.
And at some point, the custom networking protocols that are used within the clusters
are deprecated in favor of TCP/IP; or, I believe, in perhaps 10 to 20 years,
they will be deprecated fully in favor of something like IBC or a variant of IBC.
Why do you equate IBC to TCP/IP?
What gives you the certainty that IBC is the protocol that, you know,
in this kind of internet story mirrors TCP/IP, and all the other ones are, you know,
equated to AppleTalk and all these other networking protocols
that no longer exist?
I think there is an angle of credible neutrality, where TCP/IP, like IBC, was adopted across
different ecosystems and ultimately became not associated with any one company, whereas a lot of
these rollup-ecosystem-specific protocols are used within a single ecosystem.
That's not to say that there won't be contenders, because TCP/IP also had competitors as well.
So besides the credible neutrality point, there's also a technical angle.
So if you look at, or if you evaluate, every single interop protocol today, whether that's a native
interop protocol of a rollup ecosystem or a third-party interop protocol made by some of these other
interop providers, you'll see that IBC is the only interop protocol that's even capable of
operating in a wide-area-networked fashion, meaning that it's the only protocol that allows two parties,
or two chains, to communicate directly while being physically indirectly connected,
meaning that there could be more than one degree of separation.
So maybe there are two chains in this network that are separated by 10 hops.
They can still efficiently communicate directly to one another via multi-hop IBC channels,
which is different from multi-hop IBC packet forwarding.
Those are kind of like two different solutions.
But yeah, IBC is the first to have this property
and this technology and to behave this way.
So it's kind of in the lead now.
I guess time will tell what happens here.
Right.
So that kind of mirrors routing in TCP/IP, where, you know, my computer is nowhere near your computer
and there may be like four or five degrees of separation,
but we're still able to communicate because, you know,
there's a network of data centers that like will relay those packets.
through the internet and undersea cables, etc. Interesting.
Yeah. And it's the only protocol that allows you to... I always say that the difference is, you can try to do multi-hop packet
forwarding with some of these other existing interop protocols, but the major difference is that with
multi-hop IBC channels, what you're actually propagating through the network is compressed state.
I mean that the compressed state of all of the chains is being gossiped across
the network, and the packets themselves, the actual data, which is uncompressed, is only materialized at the
source and the destination. So this uncompressed data does not flow in between, which makes for
far more efficient communication than with alternative methods.
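A conceptual sketch of that point (my own illustration, not Polymer code): intermediate hops only ever carry small commitments, e.g. hashes or roots, of neighboring chains' state, while the uncompressed packet exists only at the endpoints:

```go
// Multi-hop channels propagate "compressed state" (commitments) through
// intermediaries; the raw packet payload appears only at source and
// destination.
package main

import (
	"crypto/sha256"
	"fmt"
)

type Commitment [32]byte // a hash/root, not the data itself

func commit(data []byte) Commitment { return sha256.Sum256(data) }

func main() {
	packet := []byte("transfer 100 ATOM from osmosis to a dex on base")
	c := commit(packet)

	// Hops forward and verify 32-byte commitments, never the payload.
	for _, hop := range []string{"hop 1", "hop 2", "hop 3"} {
		fmt.Printf("%s relays commitment %x...\n", hop, c[:4])
	}
	fmt.Printf("destination receives the %d-byte packet and proves it against the commitment\n", len(packet))
}
```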
Okay, I didn't know that. Cool. Well, let's talk about the Polymer Hub. So one of the questions people on Twitter said that I should ask you is:
when mainnet? And I guess you don't have an answer for that yet. But we can talk about
what is going to be mainnet, which is the Polymer Hub. So what is the Polymer Hub, and what
role does it serve as this interoperability hub for Ethereum? Because I think like one very important
thing for people to realize here is that there isn't a single interoperability standard
for Ethereum rollups. Different rollups have had to implement their own
interoperability solutions, and applications have had to implement their own interoperability solutions,
which is kind of crazy when you're coming from Cosmos, right?
So, yeah, let's talk about the hub and what role it's intended to serve.
Yeah, I'll say that, at a high level, it does implement IBC.
We will bring that feature set that we just discussed to these rollups.
And using the wide area network / local area network analogy here,
Polymer Hub straddles these rollup ecosystems.
IBC as a standard, similar to TCP/IP, straddles all these rollup clusters
and provides connectivity across clusters and with the rest of the Interchain as well.
So from, like, an API service perspective,
you get a standardized API for accessing all these different chains.
Additionally, back to earlier in this conversation, we talked about the difference between layer two and layer one scaling limits.
We're starting to see these rollups, such as MegaETH, push towards one-millisecond block times, 100,000 TPS, and perhaps even a million TPS.
And we realized that we needed to build Polymer as an L2 to even be able to scale to the point that we can support these rollups.
Because imagine you have a sovereign layer one, hub-and-spoke interop protocol,
and you're trying to connect two MegaETHs to one another.
If just consensus alone puts your block time at a few hundred milliseconds at the very least,
usually a few seconds, then it's very hard to match that latency.
It's also very hard to match that throughput without making the solution very, very simple.
Of course, if you put three nodes in one data center, in the same server rack, and you throw on some consensus, you can probably go very, very fast.
But then you're very centralized, and you don't really have the decentralized properties of a layer two, which inherits those from the layer one.
So there's this scale reason why we built Polymer Hub this way.
And the other reason is from a cost-efficiency standpoint and a latency standpoint.
So it's cost efficient in that we can add different safety features in a much more cost-efficient way than if you were to add those safety features in a point-to-point protocol.
With a lot of point-to-point protocols, perhaps you can handle the scale because you're connecting these chains directly.
But it's very cost-inefficient for checking things like data availability, ordering, and even validity.
For example, we're partnered with a company called Lagrange.
They're working on a solution called Lagrange State Committees, which offers, I guess, a quote-unquote light client for optimistic rollups, and perhaps even other rollups as well.
The issue there is, if you want to connect using the state committee solution in a point-to-point fashion, you would have to generate a state committee proof per chain in the network.
So this is going to be n proofs for n connected chains, at some amount of latency.
And these proofs are not going to be free or cheap to generate.
With Polymer, we can stream all the attestations to our central hub and only generate one proof for n chains.
This becomes much more cost effective for us from a validity standpoint.
And we're able to validate this information across all these different chains.
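Rough numbers behind that claim (illustrative arithmetic, mine): point-to-point verification needs a proof per source/destination pair, while a hub amortizes the attestations it receives into one proof per batch:

```go
// Comparing proof counts: point-to-point grows ~n^2, a hub grows ~n.
package main

import "fmt"

func main() {
	for _, n := range []int{5, 20, 100} {
		pointToPoint := n * (n - 1) // one proof per directed pair of chains
		viaHub := n                 // attestations streamed in, aggregated per batch
		fmt.Printf("n=%3d chains: point-to-point=%5d proofs, via hub=%3d\n",
			n, pointToPoint, viaHub)
	}
}
```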
So is this like ZK proof aggregation that you're using?
So, I think when people talk about ZK proof aggregation in the context of the AggLayer,
they're talking about ZK proofs of validity, meaning a ZK proof of the execution of a chain,
of the full execution of a chain.
What I'm talking about here is a ZK attestation.
Lagrange State Committee nodes are not generating a ZK proof of the execution of the chain;
rather, they're generating a ZK proof of the attestation.
So it's much more similar to a light client,
like a Tendermint light client,
than it is to, I guess, a ZK-EVM validity proof.
Okay, got it.
Can you talk a little bit about Polymer Hub's architecture
and what it's built on?
Yeah, so we're Cosmos SDK at the application layer.
And we use this framework called Monomer for us to be able to deploy this Cosmos SDK app as a rollup over the OP Stack.
And the Monomer framework establishes a compatibility layer between the Engine API, which is expected by the OP Stack, and ABCI, which is expected by the Cosmos SDK application.
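To show the shape of such a compatibility layer, here is a minimal sketch; the interfaces are simplified stand-ins of my own, not Monomer's actual types:

```go
// Translating the OP Stack's "apply this block" Engine API flow into ABCI
// calls against a Cosmos SDK style application.
package main

import "fmt"

type EnginePayload struct {
	Timestamp uint64
	Txs       [][]byte // L2 txs derived from L1 batch data
}

type ABCIApp interface {
	BeginBlock(height int64)
	DeliverTx(tx []byte) error
	Commit() (appHash []byte)
}

// Adapter: what the OP Stack sees as an execution engine is, underneath,
// an ABCI application.
type Adapter struct {
	app    ABCIApp
	height int64
}

func (a *Adapter) NewPayload(p EnginePayload) ([]byte, error) {
	a.height++
	a.app.BeginBlock(a.height)
	for _, tx := range p.Txs {
		if err := a.app.DeliverTx(tx); err != nil {
			return nil, err // reject payloads with invalid derived txs
		}
	}
	return a.app.Commit(), nil // the app hash doubles as the block's state root
}

// nopApp is a stub so the sketch runs standalone.
type nopApp struct{}

func (nopApp) BeginBlock(int64)       {}
func (nopApp) DeliverTx([]byte) error { return nil }
func (nopApp) Commit() []byte         { return []byte{0x01} }

func main() {
	a := &Adapter{app: nopApp{}}
	root, err := a.NewPayload(EnginePayload{Timestamp: 1725235200, Txs: [][]byte{{0xde, 0xad}}})
	fmt.Println(root, err)
}
```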
And the reason we wanted to build it this way, and there's some technical nuance here,
so I'll dig in a little bit, is that there is logic in the OP Stack that derives the L2 chain from the layer 1.
And this is useful for us from the interoperability perspective, meaning that if we can associate every layer 2 block with a specific layer 1 block,
we can essentially communicate across different layer 2s beneath Ethereum finality.
Today, most protocols either wait for Ethereum finality, or they'll go faster than Ethereum
finality and push this reorg risk onto the applications themselves.
So there's a tradeoff to be made here.
And we were thinking, how can we break this finality barrier?
And what we were able to do is we developed a reorg protection protocol for sub-finality
communication across L2s, which essentially builds a dependency graph of transactions
based on some history of the layer one.
And at the end of this finality period,
so once Ethereum begins to finalize,
we can have a condition that says,
if these transactions that were built on the history of Ethereum,
let's call that L1 Prime.
If L1 Prime gets finalized,
we will commit this entire dependency graph of transactions.
If L1 Prime does not get finalized,
we will revert this entire dependency graph.
So it offers additional safety guarantees
around communication at lower latencies,
therefore bringing reorg risk from the application level
down into the protocol level
and handling it in protocol while providing extremely low latencies.
We're trying to target less than 10 seconds.
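A hedged sketch of that conditional commit-or-revert idea (mine, not Polymer's implementation): cross-L2 transactions built on an as-yet-unfinalized L1 history form a dependency graph that resolves atomically once Ethereum finalizes, or reorgs away, that history:

```go
// Transactions that build on the unfinalized L1 history ("L1 prime")
// commit together if that history finalizes, and revert together if not,
// so no partial cross-chain state can survive a reorg.
package main

import "fmt"

type Tx struct {
	ID   string
	Deps []string // cross-rollup txs this tx builds on
}

type Graph struct{ txs []Tx }

func (g Graph) Resolve(l1PrimeFinalized bool) {
	verb := "REVERT"
	if l1PrimeFinalized {
		verb = "COMMIT"
	}
	for _, tx := range g.txs {
		fmt.Printf("%s %s (deps: %v)\n", verb, tx.ID, tx.Deps)
	}
}

func main() {
	g := Graph{txs: []Tx{
		{ID: "swap-on-base"},
		{ID: "bridge-to-arbitrum", Deps: []string{"swap-on-base"}},
		{ID: "buy-on-arbitrum", Deps: []string{"bridge-to-arbitrum"}},
	}}
	g.Resolve(true)  // L1 prime finalized: commit the whole graph
	g.Resolve(false) // L1 prime reorged away: revert the whole graph
}
```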
Okay, maybe this is a good time to talk about
the different types of finality and fast finality mechanisms that exist.
I was recently researching this in the context of Movement
Labs, and I saw that Vitalik was getting into some debates with Rushi about
what is a rollup, and finality, etc. So, you know, maybe just go across the different
types of finality mechanisms, and, you know, explain for a listener, okay, what are these
pre-confirmations and what are the different tradeoffs? And if we add fast finality to that,
what are the tradeoffs there as well? Yeah, so I think there's
some confusion around like what finality means.
So I'll start off by saying no protocol can guarantee finality on behalf of Ethereum.
Ethereum has its own finality mechanism.
If you attempt to guarantee finality, what you actually get is a completely different rollup construction.
So if, you know, hypothetically, you were to say, there's this other consensus layer that I'm going to define
the canonical ordering of my chain, or my finality, on,
now that chain becomes a rollup of that other consensus layer.
And if you move down the direction of this analogy,
you'll start to find that, actually, this looks very much like the first version of Polygon.
You have this separate consensus layer.
You then define your finality based on that consensus layer,
and then you post a state commitment to the Ethereum layer 1,
and that's basically a side chain, more or less.
So for roll-ups that do define their finality or like final finality on Ethereum,
you can have something called a pre-confirmation.
And what this is, it's a guarantee that if a block proposer gets a specific proposer slot,
that they will include your transaction in that slot.
But this is an optimistic guarantee, meaning that they can't
fully make this guarantee. Basically, what they're saying is that if my slot does not change and I
propose within that slot, then you will have your transaction included in the chain. However, if there is
a reorg and that proposer's slot perhaps gets moved further away, now you find that
that guarantee doesn't hold anymore, because the proposer has essentially
lost their slot. For the full guarantee, you have to wait for full Ethereum finality,
which is roughly 12 to 16 minutes.
Okay, and so this reorg protection mechanism that you're implementing,
I think the term you used, was it that it supplants Ethereum finality?
Is that right?
Oh, no, no.
It does not, it cannot replace Ethereum finality.
Instead, it's sort of like a guarantee around eventual finality.
So it's an optimistic guarantee,
the same way that builder/proposer pre-confirmations are also an optimistic guarantee.
So it's an optimistic guarantee on Ethereum finality?
Yes, yes. But it has guard rails in place where if the guarantee is not met, then all the
transactions that were committed or like, I guess, pending by the protocol are all invalidated,
which is a very important guarantee. Okay. And then what happens to those
transactions if they're invalidated? So, as the user of the rollup, how does that translate
into your user experience of, like, having just made a transaction that gets rolled back?
Yeah. So from a user experience perspective, 99.9% of the time their transaction is going to go
through. This is not so dissimilar to using a layer two today. So if you use Uniswap on Optimism
today, you'll make a swap. It'll say it's confirmed. You get your assets. And it'll look like
it all happened in like two seconds. And that's what it'll look like to the user most of the time.
But then in like a tiny fraction of the time, your swap will just like disappear. You'd be like,
oh, actually, I never made the swap. I have my original asset. And that's a nice guarantee because
if you have all these different transactions interacting with each other across chains, you never want
to get into a state where there's a double spend. You never want to get into a state where,
like, one actor has made money on one chain and also never transferred their funds on the other
chain. So in order to be able to make this guarantee across any number of rollups, we had to
have this atomic, conditional commit-or-revert protocol. It's better
to revert all the transactions than to end up in an inconsistent state
across the different chains. And so how does this mechanism facilitate interoperability between
different Ethereum roll-ups?
What this allows is that for Ethereum rollups that implement the EIP-4788 standard,
where there is some layer one information on that rollup that we can utilize,
we can allow those rollups to communicate essentially as close to block time as possible,
meaning that if they're producing blocks at two seconds,
they'll be able to communicate at two seconds.
If they're producing blocks at 100 milliseconds,
they could technically communicate at 100 milliseconds,
accounting for the additional actual network latency of this communication.
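For context on the EIP-4788 mechanism mentioned here: the standard exposes the parent beacon block root on the execution layer via a system contract, queried with a 32-byte big-endian timestamp. The sketch below just builds that calldata with the standard library; in practice you would send it via eth_call (the contract address is the one specified in EIP-4788):

```go
// Building the input for a query to the EIP-4788 beacon roots contract,
// which lets an L2 associate its blocks with specific L1 blocks.
package main

import (
	"encoding/binary"
	"fmt"
)

const beaconRootsAddress = "0x000F3df6D732807Ef1319fB7B8bB8522d0Beac02" // per EIP-4788

func calldataForTimestamp(ts uint64) []byte {
	buf := make([]byte, 32) // input: one 32-byte big-endian timestamp
	binary.BigEndian.PutUint64(buf[24:], ts)
	return buf
}

func main() {
	data := calldataForTimestamp(1725235200) // an example block timestamp
	fmt.Printf("eth_call to %s with data 0x%x\n", beaconRootsAddress, data)
	// The 32-byte return value is the parent beacon block root for that slot.
}
```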
Since we're still on the topic of the Polymer Hub here: for rollups that want to
utilize Polymer Hub for interoperability, so like Arbitrum, and you want to send assets onto Base,
what is required of those rollups in order to onboard or integrate Polymer?
Yeah, so with our virtual IBC protocol, it's completely permissionless to integrate.
So either the Polymer team or a third party would be able to deploy our smart contracts and do the setup work.
There's some IBC-specific setup that's required, but there's no opt-in by the layer two.
The layer two doesn't have to opt in to using some, you know, special process they have to run.
They don't have to change their deployment infrastructure.
Like, nothing needs to change on their end.
Okay, so it's permissionless for the rollup.
But a user can't permissionlessly use Polymer interoperability
without Polymer's integration having been deployed for that rollup?
Without having Polymer's integration deployed, no. But technically we plan on making it very easy
to request connectivity to a new chain,
for users that maybe aren't sophisticated enough
to run these scripts to do these deployments.
We're planning on automating all of this, so that perhaps you could have a
permissionless pipeline, or some loosely permissioned pipeline, of requests for new deployments.
And this is true in the IBC network today: if anyone spins up a chain,
it does require some technical expertise to be able to set up a connection, set up a channel,
and so on, or set up an IBC relayer.
But we plan on providing some more user-friendly front end for that sort of
thing. So, we've talked about Ethereum. Essentially, Polymer will allow any Ethereum-based rollup to interoperate over IBC. What about the existing IBC ecosystem, and sort of Cosmos broadly? What does that look like? Is it the case that Polymer sits in the middle and interoperates with IBC as that kind of middle hop? Are Ethereum-based
rollups going to be able to send assets directly to and from the IBC network?
So can I move assets from, like, Osmosis to some DEX on Base permissionlessly?
Yes, so you'd be able to use the protocol permissionlessly. Although, because of how our virtual
IBC protocol works, Polymer handles IBC execution and implementation on behalf of the rollup. So for a rollup
that isn't communicating through Polymer:
they don't actually speak IBC,
so they wouldn't be able to connect directly
with the IBC network.
However, we are working on our Monomer framework.
So if you do have a Cosmos SDK rollup
that natively implements IBC,
technically that rollup could communicate directly
with the IBC network.
So yeah, let's talk about Monomer a little bit.
So Monomer is an SDK, I guess;
the Polymer Hub uses Monomer,
but you can also build your own rollup using Monomer.
It's, I guess, a generic rollup SDK that uses the OP Stack.
What kind of applications do you think will use Monomer?
And what is the differentiating factor, you know,
as opposed to using some other rollup framework or, you know,
using the OP Stack directly or something like that?
Yeah, so the way I would describe this,
I'm going to use an analogy here.
I'm going to use a cookie analogy, even though I'm not the greatest chef in the world.
I love cookies.
It's awesome.
Little-known fact: during the pandemic, I spent like six months trying to perfect a chocolate chip cookie, because I wanted to build a business
Oh, wow.
where you could buy your cookies fresh out of the oven.
But then I went back to crypto.
I mean, I love
cookies, that sounds very tasty, and like a very interesting service.
So I guess, from a Monomer perspective, to compare cooking
with Monomer and the Cosmos SDK versus cooking with just the OP
Stack today: it's what you have in your kitchen cabinet. If you were to cook with the OP
Stack today, you have the EVM; you have this limited set of ingredients
in your kitchen cabinet.
And maybe it's just like salt, pepper,
oil, like very, very basic ingredients.
The OP Stack doesn't come with a lot of custom modules,
and there isn't, like, this huge library of custom modules
that you can just pull and use within your app chain.
So it's very hard to customize. You can customize it, but you have to make your own ingredients.
So, like, if you want, I don't know, fish stock,
you have to go make your own fish stock and then add it to your dish.
And making fish stock
is a lot harder than just having powdered fish stock in your kitchen cabinet.
And if you're cooking with the Cosmos SDK and Monomer, what you have is this
great kitchen cabinet full of ingredients, everything ranging from different spices, to fennel, to
different herbs, to even different types of salt, like Himalayan salt, all the different
colors there. Because there's a number of teams building on the Cosmos
SDK, there's already this great collection of Cosmos SDK modules that you can just use off the shelf.
So you can build a really cool custom app chain without needing to make these ingredients yourself.
And maybe a great example of this is Skip's Slinky protocol.
You can have this oracle built directly into your app chain without having to negotiate a contract with Chainlink, and you get these price feeds at much lower latencies, with a broader selection of price feeds,
and also custom price feeds, in a simple way.
Another example: we recently did a little announcement with Fairblock.
Fairblock has an encryption module for the Cosmos SDK that allows you to have
encryption and decryption, or pre-execution privacy as they call it, in your application,
so you can have MEV protection.
You can have encrypted limit orders,
an encrypted mempool for these DEXs or DeFi applications.
That's hugely beneficial for a large class of different apps.
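A schematic of that "kitchen cabinet" point; the module names and interfaces below are simplified stand-ins, not the actual Cosmos SDK, Slinky, or Fairblock APIs:

```go
// A Cosmos SDK style app is composed from off-the-shelf modules, the way
// a recipe pulls ready-made ingredients from the cabinet.
package main

import "fmt"

type Module interface{ Name() string }

type bank struct{}       // token transfers: ships with the SDK
type staking struct{}    // validators/delegations: ships with the SDK
type oracle struct{}     // e.g. a Slinky-style in-protocol price feed
type encMempool struct{} // e.g. a Fairblock-style pre-execution privacy module

func (bank) Name() string       { return "bank" }
func (staking) Name() string    { return "staking" }
func (oracle) Name() string     { return "oracle" }
func (encMempool) Name() string { return "encrypted-mempool" }

func main() {
	app := []Module{bank{}, staking{}, oracle{}, encMempool{}}
	for _, m := range app {
		fmt.Println("wired module:", m.Name())
	}
}
```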
Yeah, okay.
I mean, that makes sense.
So essentially, what you're saying is, you know, Monomer gives you the benefit of being
able to use the Cosmos SDK and all its modules, but have that rollup settle to Ethereum,
which gives you access to Ethereum's liquidity, gives you
access to that whole ecosystem of applications.
Is Monomer opinionated about, like, DA and settlement and sequencing?
Or can developers choose? You know, if they want to use EigenDA or Celestia or some
other DA instead of using Ethereum data availability, can they swap that out easily?
You know, if they want to implement their own sequencer,
is that something that's possible?
Or, like, use some decentralized shared sequencer?
So Monomer inherits all of these architectural components from the OP Stack,
which is a benefit, because any support for alternative DA,
support for fraud proofs, different fraud-proving mechanisms,
forced exit mechanisms, forced withdrawal mechanisms,
or forced transaction inclusion mechanisms,
these are all sourced from the OP Stack itself.
So one of the design goals for Monomer is that we don't want to rewrite all this chain derivation
logic and settlement logic, which is quite complicated.
We want to rely on this open source community-built infrastructure that will improve over time.
So with the OP Stack, there are modifications to use alt-DA.
I believe there may be some modifications to use different,
I guess people are calling them super builders now.
So there are a lot of these OP Stack modifications that developers
will be able to leverage.
Like, is the goal here to build an ecosystem of applications that, you know, use Monomer
and the Cosmos SDK?
And, like, how do you see Monomer faring against some of the other
Cosmos SDK frameworks out there? You know, Ethos is one,
that utilizes EigenDA, or that utilizes EigenLayer restaking.
We now have, I guess it's now called Grug,
Larry's project. And there's Jake Hartnell's project
that also uses CosmWasm, I might say, more so than the Cosmos SDK.
But does the future of Cosmos development lie in the Cosmos SDK, or
do you think that CosmWasm can surpass it as something more flexible?
Because I've talked to a lot of developers that love the Cosmos SDK because it's great,
it has all these modules and everything,
but also kind of hate it because it's so hard to build new modules, and, you know, Go
makes it like there's a lot of configuration, you know, for modules,
whereas building natively with CosmWasm is easier.
And, you know, Osmosis, of course, built most of their infrastructure
using CosmWasm.
Yeah, like, do you see CosmWasm perhaps, you know,
surpassing the SDK itself as, you know,
a scalable and kind of faster way to build applications?
Like with any sort of like VM and building for that VM versus just building in like
Native Go or some native runtime of a particular language,
there are inherent tradeoffs.
I think the applications will decide like what,
fits best for them.
My hunch is that when they want to hit a particular scale or when they want to build something
very specific, perhaps they will select the Cosmos SDK.
Like, for someone coming from Web2 like myself, I picked the Cosmos SDK because, maybe the
analogy here is, it's the closest thing to React for building a blockchain that I could
find.
The docs are incredible.
The tooling is incredible.
There's a lot of momentum.
behind this project.
Like, Binary Builders is great.
Marko is great.
So I am very bullish on the Cosmos SDK,
but ultimately, I think it will come down to:
if they just want to build an application and get something out the door,
they're probably better off starting with CosmWasm
or some existing VM,
and perhaps not even building their own chain,
just deploying some smart contracts
to some existing blockchain network.
Yeah, that makes sense.
Yeah, so compared to some of these other, you know,
Cosmos SDK frameworks,
why would someone choose Monomer over, say, Ethos, for example?
Oh, before I talk about that, I want to add one point to this scaling and, I guess, control, the question of control.
So if you use something like CosmWasm, you don't have hardware-level control,
or, I guess, kernel-level control. Certain applications
that want to be fast, or want to make certain optimizations,
will want that level of control.
They kind of have to break the abstraction boundary.
But that's not every application.
I think only a few applications will want that, or will feel at a certain scale that they want that.
But yeah, it just depends on the app.
From a framework perspective, a lot of the frameworks that you listed,
like Ethos, which I believe is providing EigenLayer restaked security
for Cosmos chains, those are still technically layer ones, even though they get their security
from somewhere else. They're not technically a layer two. The cost of security is you're paying
for security budget versus paying for settlement costs, which is a layer two consideration.
And also the architecture of your chain, you're not inheriting censorship resistance. You're not
inheriting some of these blockchain properties from the layer one. And some of these blockchain
properties are some of the hardest to achieve.
Censorship resistance, when combined with liveness,
is probably the most difficult property
to achieve within a blockchain.
So it depends on what the application cares about.
If they don't care about censorship resistance,
they don't care about these fundamental blockchain properties,
they just want some security budget that they borrow,
then that makes a lot of sense.
So it depends on what they're looking for.
Okay, yeah.
Thanks for clearing that up.
Okay, so Monomer is squarely a rollup framework,
and so therefore it inherits all the properties of a rollup,
whereas some of these other frameworks may still be considered, like, an app chain framework,
or at least inheriting their security from, in the case of Ethos, restaked ETH.
In the case of, you know, building on the Cosmos Hub, you're inheriting security from the hub.
Yeah.
And also, you're making a bet on the technology.
So I guess, to give some insight into how we thought about technology at a company
like Uber, when we decided on using a particular database technology, building our own, we're
taking a long-term bet at Uber's size and scale. It's very hard to change course. Moving to a new
database system is very costly for the entire company. So the way we would evaluate is we look
on like a 10-year time horizon, maybe even 20-year time horizon.
And we have to look at things like, do we think this technology that we're building on
will continue to improve and scale for the next 10 years?
Who are all the contributors?
What is the pace of development in this project?
Which direction are they going in?
What are they optimizing for?
How customizable is this?
There's all these questions that you want to answer.
And if you bet on Monomer,
essentially, you're betting on a number of different technologies.
You're betting on there being great application frameworks that are ABCI-compatible,
whether that's the Cosmos SDK, or whether that's another framework such as Grug.
You're making a bet there.
You're also betting on the OP Stack.
You're betting that there will continue to be a lot of contributors to the OP Stack,
and that the OP Stack will continue to improve for the next 10 years or more.
So depending on how you look at the space and how you look at technology,
you'll pick a framework based on what you want to bet on long term.
Because if you pick a framework that perhaps is maintained by one party or one or two parties
and that maintenance gets dropped, then you're kind of stuck maintaining this thing on your own
and that's not a great position to be in.
So what are the next steps here,
in terms of mainnet? And where can people follow you and follow Polymer updates
to get the latest
on what's happening with the protocol?
Yeah, so definitely follow us on Twitter.
It's Polymer underscore Labs.
We have our website and Discord information in there as well for folks that want to join
our community.
You can follow me personally at 0xShake on Twitter, or I guess, X now,
since the rebrand.
But yeah, mainnet's coming very soon.
I can't give an exact date, but I promise you it is coming very
soon, and look out for a lot of updates in the coming months.
Cool. Well, I'm excited for it. I'm a really big fan of what you guys are doing. I think,
you know, bringing better interoperability between Cosmos and
Ethereum is something that I've been waiting for for a long time, and I'm very bullish
on that space generally. Yeah, excited to see you guys launch mainnet.
Yeah, thanks, Seb.
