Epicenter - Learn about Crypto, Blockchain, Ethereum, Bitcoin and Distributed Technologies - Arjun Bhuptani: Connext – Speeding Up Secure Bridges Between Chains
Episode Date: June 9, 2022

Connext is a crosschain liquidity network that speeds up fully-noncustodial transfers between EVM-compatible chains and L2 systems. Connext works in tandem with Nomad bridge technology, enabling fast transfer of value between blockchains and interchain DeFi protocols. Their goal is to create a world where users never need to know what chain or rollup they're on, and developers can build applications (xapps) that utilize resources from many chains/rollups simultaneously. We were joined by Connext founder Arjun Bhuptani to chat about bridge technology in general, Connext's pivot from state channels to bridges, their recent partnership with Nomad, and what is coming next.

Topics covered in this episode:
- Arjun's background and how he got into the space
- Arjun's involvement with the Moloch DAO
- What is Connext?
- The history of interoperability
- Connext and the bridges they utilize
- Connext's partnership with Nomad
- Bridge hacks
- Finality and rollbacks - dealing with reorgs and probabilistic finality
- What is Connext's capital efficiency model

Episode links:
- Connext
- Nomad
- Connext / Nomad partnership
- Connext on Twitter
- Arjun on Twitter

Sponsors:
- Tally Ho: Tally Ho is a new wallet for Web3 and DeFi that sees the wallet as a public good. Think of it like a community-owned alternative to MetaMask. - https://epicenter.rocks/tallycash
- Chorus One: Chorus One runs validators on cutting edge Proof of Stake networks such as Cosmos, Solana, Celo, Polkadot and Oasis. - https://epicenter.rocks/chorusone
- ParaSwap: ParaSwap aggregates all major DEXs and makes sure you beat the market price at every single swap and with the lowest slippage - paraswap.io/epicenter

This episode is hosted by Friederike Ernst. Show notes and listening options: epicenter.tv/447
Transcript
This is Epicenter, episode 447 with guest Arjun Bhuptani.
Welcome to Epicenter, the show which talks about the technologies, projects and people driving decentralization and the blockchain revolution.
I'm Friederike Ernst, and today I'm speaking with Arjun Bhuptani, who is the founder of Connext.
Connext is a bridge project, and we will talk about this in detail in just a bit, but let me tell you about our sponsors today.
Our first sponsor is Tally Ho.
Tally Ho is redefining the wallet as a public good.
You can think of it as a community-owned alternative to MetaMask.
With Tally Ho, you can enter the Metaverse with a Web3 wallet that's fully community-owned and operated,
and it's the first wallet that is also a DAO.
Tally Ho's commitment to community ownership and public goods stretches beyond the wallet.
In January, they became the first sponsor of ethers.js,
an open-source JavaScript library helping developers connect to Ethereum,
and they recently announced a pledge to commit 2.5% of their total token supply to Gitcoin Aqueduct.
Head over to tallyho.cash to try the Tally Ho Community Edition and play around with its features before its upcoming version 1 and DAO launch.
Our next sponsor is Chorus One.
Securing blockchains and earning rewards need not be energy intensive or complicated,
and by staking your assets with Chorus One, you contribute to network security and earn rewards too.
Chorus One has been a pioneer in this space since 2018 and secures billions of dollars in assets on
over 25 decentralized networks, including Solana, Cosmos and Ethereum.
If you're an institution, or you want to run your own branded node, you can use Chorus One's
white-label service and their battle-proven infrastructure to participate in proof-of-stake
networks in an easy way.
The Chorus One team also released an exclusive report on the important events and trends from the
first quarter of 2022, and you can read it now for free
on their website at chorus.one, where you can also start your staking journey.
Our final sponsor is ParaSwap. ParaSwap is a multi-chain DEX aggregator.
This means that through ParaSwap, you can easily access the liquidity of various different
decentralized exchanges.
The protocol automatically finds the cheapest liquidity for you, so you can trade knowing
that you're getting the best price.
ParaSwap is also gas-friendly, and that helps keep your transaction costs low.
ParaSwap recently added support for Avalanche, Polygon, BSC, and Fantom.
You can also use ParaSwap directly from your Ledger in Ledger Live.
And they are becoming a DAO, so if you have PSP tokens, maybe from the airdrop, that is something
you can participate in.
There's also a vote on the gas refunds program that just passed in the ParaSwap DAO, and this
will allow ParaSwap stakers to get up to 100% gas refund on their trades on top of their
auto-compounding yield. So visit ParaSwap to learn more at paraswap.io/epicenter.
Arjun, it's so good to have you on.
Thank you so much for having me.
Arjun, tell us about yourself. How did you get your start in this industry?
Yeah, so I started building on top of Ethereum in 2016.
I was always kind of tangentially interested in crypto, because I was involved with a lot of the P2P community in the IRC days.
But I ended up missing the boat on Bitcoin for some reason.
I was personally just not very interested in it.
And then it wasn't until Ethereum came along, and I kind of discovered it in 2016, that it clicked for me that you could use this technology to build public goods, like truly public, non-sovereign, non-corporate goods that are similar to the internet,
that can be globally accessible and can be this big equalizing force in the world economy.
After that, I started playing around with the technology in 2016, built some
infrastructure and worked with a couple of projects at the time. I started the Ethereum SF
developers meetup, which was pretty awesome, because that was one of the first communities
that was actively building on top of Ethereum. And then in 2017, I
ended up starting Connext, because I sort of had a lot of conviction on Ethereum as a broader
technology and on this decentralization movement. And my goal was just: how can we bring this technology
to a billion people as fast as possible? Cool. So before we actually dive into Connext,
you were also one of the summoners of the Moloch DAO, right? Yes. Yeah. So I helped design the Moloch DAO
alongside Ameen Soleimani, and then helped build it with my current co-founders, Layne Haber and Rahul Sethuram,
and then one of the co-founders of SpankChain at the time, James Young, along with Ameen.
And then, yeah, I guess the idea at the time was we were really interested in solving this
coordination problem that we kind of felt that we had around, you know, working and collaborating with
SpankChain on building scalability infrastructure. And also just a broader coordination
problem that we saw in the space, and in the world more generally. And the idea was that
we wanted to create some sort of public resource for organizing, for community action,
around public goods, basically having that be a public resource
itself. And that idea kind of became the Moloch DAO, really the first
DAO framework that actually ended up getting traction, which is pretty awesome.
I mean, we didn't really like ever expect that to be the case.
We were just kind of playing around with the ideas.
And the goal behind it was never like, here is a solution to coordination.
It was like: here is an initial step, and then a process,
and a means by which we can eventually hope to solve coordination more generally.
But the idea was always like, it won't necessarily happen.
It might not even necessarily happen as Moloch DAO, but hopefully this can be
the catalyst to start people thinking about this problem more generally.
Oh, yeah.
I mean, it was definitely a right time, right place moment.
And I mean, basically the narrative behind it was just compelling in the light of the perceived weakness of the Ethereum Foundation and basically building things for the community.
So yeah, I think this was a great launch.
So tell us about Connext.
Connext actually started off as a state channels protocol, right?
Yes.
So, yeah, when we started Connext, like I mentioned earlier,
the key goal was always this technology, Ethereum,
specifically, and I guess, like, decentralized systems,
has the capacity to really improve,
to meaningfully rewrite the way that human beings coordinate
at a global scale,
to move away from public infrastructure that is owned by governments
or, you know, large-scale infrastructure that is owned by, like, operated by corporations.
And towards making that infrastructure part of the commons globally accessible in the same way
that the Internet is globally accessible.
And it comes from this shared belief that the team had, or at least the founders
had, that things like, you know, Google search, things like money, things like
coordination tooling, like voting and things like that, should be accessible to everyone
in a fair and egalitarian way, regardless of where they live,
which is not currently true for the majority of the world.
And so the goal was always, like, let's take this technology
and let's find a way to scale it to the world.
And of course, that pretty quickly led us to scalability research,
because that was one of the biggest blockers to being able to have many,
many, potentially a billion, people use Ethereum.
In 2018, when we started looking at this and started doing scalability research,
there wasn't really a lot out there.
Most of the research had been, at that point,
been focused on state channels,
and like Raiden was kind of like the leading project on that at the time.
And then there was new research that was being done about plasma.
We, of course, ended up jumping into state channels
because to us, it seemed like the lowest hanging fruit use case
was actually going to be payments.
And this is just a hypothesis that we came up with
based on what we saw in the space at the time,
which was projects like Spank Chain and others who were doing, you know, some form of payments.
And then, of course, like, you know, plasma research continued.
Over the course of the next few years, we kind of expanded our understanding of state channels and things like that.
And the space also moved towards building, you know, rollups, as a more generalized scalability solution, which has less scalability, but can be used for any kind of arbitrary smart contracting.
One thing that we found, and this is the reason that we ended up moving away from state channels, is just:
you know, we were largely an R&D org for a very long time. We had a very,
very small team that was very lean and very hungry, and extremely careful about
the things that we built, and about making sure that we only built the most minimal possible
solutions. One of the things that we consistently found, though, was that while there
was a lot of interest in payments and around scaling payments as a market, there were actually
very few use cases that scaled associated with payments more generally.
And if you look out at the space right now, it's pretty
easy to tell that payments actually hasn't taken off. Crypto payments, surprisingly,
despite the fact that many people have had a thesis about it for almost a decade now.
For some reason, payments haven't actually achieved a lot of market penetration in the broader
audience compared to things like DeFi and governance.
And so we ended up realizing that the solution that we were building, the technology
that we were building and researching, was not necessarily a technology that was mapping very well
to the real needs of users.
And at the same time, what we realized,
and this is actually very fortuitous:
in 2020,
I don't know if you remember this,
but there was a
bake-off that Reddit created.
It was like a scalability bake-off
between the different L2 solutions.
Yeah, yeah, I totally remember that.
Yeah.
So this was actually the time
when we were having these kinds of
questions around payments
and around payment-related use cases.
And one thing that we saw in this bake-off was
an essential example of a decentralized ecosystem that wanted to build on top of a scalability
solution. And what we found, looking out at all of the other submissions that were put in place, was that
almost all of them operated like chains. And we didn't.
We basically would have had to go and build a very custom piece of infrastructure for Reddit,
specifically designed for their use case. And even then, there would have been gotchas and
things like that that we had to worry about. Which is nowhere near as
nice as just deploying the exact same contracts that you have
onto Optimism or Arbitrum.
And I think what that helped us realize was that maybe we were just on the wrong track, right?
Like if we're trying to compete with roll-ups as a state channel network to try to scale something
that is not payments, we're not going to succeed because we would have had to do a ton of
custom work.
But at the same time, the secondary problem that we saw, and this was actually just really,
really lucky, we just sort of said, okay, well, you know, there's all of these projects have
this big drawback, which is, you know, once you're on one of these solutions, it's really
difficult to get in and out of it. It's really hard to get between these different solutions.
Why don't we just reappropriate our exact state-town infrastructure and use it to allow people
to send community points, right at community points, between these different options?
And that's what we did. We actually submitted an alternative solution to the scalability
bakeoff. We obviously didn't win, because we didn't even technically participate,
but we actually did get a lot of attention,
both from the community and also from Reddit,
and also from all the scalability solutions,
because people realized,
oh, wait, you can use state channels,
you can use other kinds of technology,
to allow for seamless bridging,
seamless communication between,
at the time,
it was like Optimism, Arbitrum,
xDai, Polygon, SKALE,
and a bunch of other testnets.
So, yeah, it was kind of cool.
That actually, we sort of did it as a, like,
let's not play a game that we know that we won't win thing.
And we kind of felt like we had to submit something, but we didn't really know what.
So we did this instead.
And it turned into us stumbling upon this really, really, really big, really, really interesting market that at the time, no one else had really even thought about.
Yeah.
I mean, let's talk about the history of interoperability and the history of bridges in just a little bit.
Just as an aside, why do you think payments has never really taken
off? Because it seems like it's such a low-hanging fruit, right?
Yeah.
Yeah, that's a really, really good question.
I think that there's a lot of reasons.
The first main reason is that the payments market itself is just incredibly complex,
and there are these really deep, entrenched network effects associated with
the existing structure of payment-facilitating banks,
payment processors, Visa, Mastercard and other payment systems, that make it really,
really difficult to break into this market in a functional way, outside of
places that are just completely unbanked. And I think that made it quite
difficult at the time, because we just had a really hard time finding markets of users that
actually cared about crypto payments. You know, we ran a bunch of experiments in gaming.
We ran a bunch of experiments in content. We ran a bunch of experiments in countries
where there were a large percentage of people unbanked.
And like, we kept running into these like problems associated with like people being like,
okay, well, you know, you're having to go through all of this additional friction to get crypto in the first place.
What is the benefit of doing this versus like finding some other mechanism to pay?
And like why not just use mobile payments or why not just use like in game payments and things like that?
So I think the real reason is really just that it's the same reason why we can't just fully replace all voting with crypto voting,
right? There are some of these really obvious use cases that people always talk about. Like, oh, well, someone should just build a voting system on top of blockchains, and then we can just use that everywhere as an alternative to existing voting, and it gives transparency and things like that. And it would. But it's not intrinsically a 10x improvement against what exists right now. And because there are these entrenched network effects with what exists right now, I don't think that we'll be able to get to the point where it's worthwhile to make that kind of upgrade for any user
unless they already are onboarded into crypto.
So, you know, I think for that reason, things like DeFi are a much better, like,
initial onboarding mechanism because it's a 10x improvement against what exists currently
because as a user, you have access to, like, being able to earn a lot more money online
than you would have ever been able to in the past, right?
Like, DeFi has onboarded large parts of Southeast Asia, you know, like sub-Saharan Africa,
Latin America, because in a lot of those places,
like contributing to different ecosystems or participating in air drops and things like that
actually means life-changing money.
Like that's an access that you would never have had before, whereas participating in, like,
getting on board into a payment system or to a voting system isn't necessarily going to be
as big of a change to you.
Would you venture a guess when we will see the first large-scale crypto-based payment system?
I mean, technically it already exists, right?
technically there are some instances of large-scale crypto-based payment systems.
The Graph is a good example.
We worked really closely with them when we were doing state channel stuff, because
I think they're the single largest operator of a micropayment network in the world right now,
I believe.
I mean, it depends how you classify micropayments, but at least at the scale that they're doing it.
And then similarly, Sia, Filecoin and many others are also
working on building out their own bespoke state channel
implementations, if they haven't already. I think Sia already has one. And so I think it exists.
It's just been a much slower burn. I would expect that for
micropayments, it'll basically happen when the Web3 infrastructure boom of
different kinds of Web3 protocols, computing networks and resource networks scales. It hasn't scaled yet,
but it's starting to get there. And then I would think for consumer payments,
it's not going to happen until everybody has a crypto wallet.
Yeah, I think that's a fair guess.
So let's talk about the history of interoperability, because basically that's been
one of those arenas of Web3 that has actually gathered a lot more traction than initially
thought.
Right?
So basically, back in the day, the idea of different blockchains talking to each other
and having trustless bridges and stuff,
this seemed like magic.
And I remember even hearing about, you know,
Jae's IBC vision back in the day,
and asking myself,
is this actually feasible?
I mean, obviously it is feasible, and it's clear to us now.
But kind of let's talk about, let's talk about the history.
So basically, I mean, if we kind of look back,
the very first kind of bridges, so to say,
were, you know, the wrapped asset-specific bridges, right?
So things like wrapped Bitcoin and so on.
Well, technically the earliest ones were actually the atomic swaps.
So like, you know, BTC, LTC, atomic swaps and things like that.
Yeah, fair point.
The asset-specific ones, do you remember them taking off?
Kind of.
So, like, technically, ShapeShift took off,
and ShapeShift is, in theory,
an atomic-swap-like system.
They do some other stuff, but,
you know, that was the idea behind it.
There were of course a lot of proposals around
using Lightning, and then extending Lightning
to things like Stellar and
Litecoin, to do swaps there.
And then once Ethereum came along, I think
that was when there was a bit of a transformation,
because people started thinking,
instead of just, okay, let's swap BTC for ETH,
which has a whole host of problems associated with things like,
you know, front running and free riders,
or the free option problem, sorry,
instead the conversation turned to,
okay, how can we have a representation of BTC on Ethereum that can then be used?
And that's, I think, part of when things got a lot more interesting as well.
I think the earliest projects there were things like BTC Relay
and a couple of others.
And it's pretty interesting, because
the thing that has really taken off is WBTC,
which is, of course, a fully custodial bridged representation of Bitcoin.
I think a part of the reason why a lot of these things didn't take off
is because the need for them wasn't actually as strong at the time.
You know, the idea was always like, okay, well, you can swap BTC for something else
between these UTXO-style chains, where
there isn't any broader functionality. But that functionality was always
fully encompassed inside of a centralized exchange anyway.
There was the novelty of, okay, I can do this in a trust-minimized way, but
the fact that you could always use an exchange for it just made it so that there
wasn't a huge improvement against that to begin with.
I think what's changed in recent times is the explosive growth of the Ethereum ecosystem, and how that has resulted in this fragmentation across multiple Ethereum-like chains.
And of course, now moving to non-Ethereum-like chains like Solana and StarkWare and things like that as well.
But largely it is all EVM-compatible at the moment.
I think that has instigated this question of, okay, well, should common
protocols, especially blue-chip DeFi
protocols, should they just be deployed everywhere?
And I think most of them just said, yes, that should absolutely
happen. And once you do that, now you
have this problem where it's like, okay, you have
different rates for these different protocols and different chains.
Users now want to optimize the returns that they're
getting. Users now want to use applications
that run in many ecosystems all at once.
And now you've created this huge problem around
where do users go? How do users
actually do that?
The other piece of it is also,
which I think is actually a little bit
unintuitive, is that
a lot of the conversation, at least in the earliest days of this big bridging growth that happened, which was really in January and February of 2021, what we saw was that the vast majority of people that were bridging were not Ethereum L1 users.
Surprisingly, it was not people who were already using these chains. It was new users coming into crypto, because, you know, there was just a massive growth of end users at the time.
It was new users that were onboarding, finding that Ethereum L1 was too expensive for them.
And so they were onboarding directly into BSC through Binance and then going to Polygon, or onboarding directly into Polygon and going to xDai, or something like that.
And so what we found, and this is still the case actually, is that the Polygon, BSC, Gnosis Chain combo is actually by far the most used set of chains that people bridge between.
And, you know, even a lot of the node operators in our network are actually surprised by this, because they're like, well, we would have thought that tons of people would be bridging in and out of Ethereum.
But that actually doesn't appear to be the case at all. It seems like most of the people that have their funds on Ethereum are just not touching them as much as they possibly can.
Do you think that's just because every time you touch them, you incur horrendous gas fees? Or why do you think that is?
I think that's a big part of it. And I think the other part of it is also
that one of the things that made Ethereum so interesting and exciting in the early days was just that it was very experimental.
And so there was a lot of room for people to go and just build things and try new things.
And even if those things were not gas optimized, or were really poorly built, or had problems, it didn't really matter that much.
Right? People were just playing around.
It is just so much fun for developers and builders to be able to play with things.
And I think that element was lost when the price of everything on Ethereum escalated significantly.
And what we found was that a lot of the new developers that were coming into the ecosystem, that wanted to do the same thing that early developers did on ETH L1, which was just play, have a good time, play with building things,
they were all doing this on Polygon and BSC. All of them.
And so what we found was that it was really just a lot of completely new people.
Like, most of the support requests that we had in our ecosystem were actually
associated with how do you speed up transactions in MetaMask, not anything related to
bridging itself. Yeah, super interesting. When you think about the different bridges that
you can have, can we look at them by the thing that they bridge, right? I mean,
so basically there's, yeah, ERC-20 tokens, ERC-721s, ERC-1155s, there's message bridges. So how do you
think about that, just from, you know, a zoology kind of point of view? Yeah, that's a really
good question. So the most general primitive is just arbitrary message passing, right? It's like,
how do I get some data from one chain to another chain? Ideally trustlessly, which basically
means ideally without introducing additional trust assumptions beyond those that exist on the underlying
chains. And of course, that is an extremely difficult thing to do. And there are really very few
or perhaps no constructions that fully achieve this.
But because that is like the base layer,
you do sort of end up in a situation,
unless, and I want to carve out an exception here,
which is atomic swaps,
which are like a very special case scenario.
But for anything else, the idea is like,
you sort of need to have this ability to like pass around data
to begin with before you can do anything else.
So that's kind of the message passing layer.
Now, on top of that, what you'll usually have is,
you know, other layers for other kinds of
bridging. So you could allow for some wrapper around the arbitrary message passing
that allows you to mint and burn NFTs, and now that becomes an NFT bridge. You can allow doing the same thing
with ERC-20s, and now that becomes a token bridge. And that, specifically, is a bridge
that allows you to mint and burn wrapped representations of a token. And then
the last piece, that potentially sits on top of all of this, is the liquidity piece.
And this is a big part of what Connext does. And, you know,
we'll talk a little bit about Connext and Nomad,
which is the arbitrary message passing network that we sit on top of.
But, you know,
liquidity specifically refers to how you ensure that the user gets the correct asset
that they need on a given chain,
especially given that, in many cases, that correct asset may be different than the
minted representation that you create through an arbitrary message passing bridge.
And that is something that requires,
you know, things like liquidity pools, in order to make sure that you bootstrap liquidity
in the asset that you need to transmit to the user.
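To make the layering described above concrete, here is a minimal TypeScript sketch. All interfaces and names are hypothetical, invented for illustration; they are not Connext's or Nomad's actual APIs. The point is just that a token bridge is a thin wrapper over a generic message-passing primitive.

```typescript
// Hypothetical sketch of the bridge layering described above.

// Layer 1: arbitrary message passing. Move opaque bytes between chains.
interface MessageBridge {
  // Returns a message id; delivery semantics (and trust assumptions) live here.
  send(destChainId: number, recipient: string, payload: Uint8Array): Promise<string>;
  onReceive(handler: (srcChainId: number, sender: string, payload: Uint8Array) => void): void;
}

// Minimal token interface the wrapper needs on each chain.
interface MintBurnToken {
  burn(from: string, amount: bigint): void;
  mint(to: string, amount: bigint): void;
}

// Layer 2: a token bridge is just a wrapper that encodes burn/mint
// instructions into the message payload.
class WrappedTokenBridge {
  constructor(private bridge: MessageBridge, private token: MintBurnToken) {
    // Destination side: decode the payload and mint the wrapped representation.
    bridge.onReceive((_src, _sender, payload) => {
      const { to, amount } = JSON.parse(new TextDecoder().decode(payload));
      this.token.mint(to, BigInt(amount));
    });
  }

  // Source side: burn (or lock) the asset, then emit a message.
  async transfer(destChainId: number, remoteBridge: string, from: string, to: string, amount: bigint): Promise<string> {
    this.token.burn(from, amount);
    const payload = new TextEncoder().encode(JSON.stringify({ to, amount: amount.toString() }));
    return this.bridge.send(destChainId, remoteBridge, payload);
  }
}
```

The liquidity layer Connext provides would sit one level higher still, swapping whatever this wrapper mints into the local asset the user actually needs.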
There's a lot to unpack here.
So maybe just to rewind, just to make sure this is absolutely clear.
So basically, when I take an asset, say, from Polygon to BSC via a specific bridge,
the asset that is de facto minted at that point in
time on BSC is basically that asset, underscore, the bridge that it came over.
And it's not fungible with that asset that lives on that chain natively, or came by means of another bridge, right?
Exactly.
So the problem here is basically: when you have an asset representation on a given chain,
is it the canonical asset?
First of all, there's this bigger question of, what makes this asset the canonical asset?
Generally, what we say is, okay, it's the most widely adopted asset.
In Polygon's case, it may be the asset that's coming over the Polygon PoS Bridge.
In other cases, it may just be the asset that happened to get the most traction.
So, like, on Avalanche, it's USDC.e, which is the representation of USDC that everybody just started using.
And now it's the canonical one.
The second question there is, who
actually owns the authority, the permissions, to create more of this canonical representation?
So typically this will be the chain-sponsored bridge,
right: in Polygon's case the Polygon PoS bridge, and on Avalanche the Avalanche Bridge.
In roll-ups, it'll be the roll-up bridge.
And so roll-ups actually have like an easier time with this because they already have a
trust-minimized bridge that is a canonical one and nobody can ever dispute that.
but if you don't have one dedicated bridge that is minting a canonical token,
you don't necessarily have a canonical token to begin with, and things get a lot more confusing.
So what happens if, for example, you have another bridge like Nomad, for instance,
that is minting an asset on Polygon? Nomad will likely
not have the permissions to increase the supply of the Polygon PoS USDC representation.
And so instead, what Nomad would do is they would have to create their own representation,
which is a pointer to wherever those tokens were first locked on whatever chain.
And now you have an asset that is Nomad-wrapped Polygon USDC,
and then you have another asset that is Polygon PoS USDC.
The Polygon PoS USDC is the canonical one, because it is used widely across all of the DeFi
applications on Polygon.
And so as a user, you now have this like user experience problem of,
I have this asset, how do I get to the correct asset?
Because I can't actually use this asset anywhere.
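As a toy illustration of that fungibility problem (the addresses here are placeholders, not real contracts): the same logical asset can exist as several distinct ERC-20 contracts on one chain, and only the canonical one is accepted by local DeFi.

```typescript
// Hypothetical: three distinct ERC-20 contracts for "USDC" on one chain.
const usdcRepresentations = {
  canonical: "0xPoS...",       // minted by the chain's canonical bridge; what local DeFi lists
  nomadWrapped: "0xNomad...",  // minted by Nomad; a claim on USDC locked on the origin chain
  otherWrapped: "0xOther...",  // minted by some third bridge
};

// A user holding the Nomad representation has a real claim on locked USDC,
// but cannot deposit it into a market that only lists the canonical address.
function isUsableInLocalDefi(tokenAddress: string): boolean {
  return tokenAddress === usdcRepresentations.canonical;
}
```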
Sounds like a mess.
It's definitely a huge headache.
How do you go about it?
Yeah.
So what we do is wherever possible, we try to swap the user into the canonical asset,
the most widely used asset.
And so this kind of gets into a little bit of the technical details of how Nomad works
and things like that, and how Connext fits into it,
which we can definitely get into in more depth.
But at a high level, with how Connext works currently,
we just basically swap into liquidity pools of whatever asset is the most widely used.
And in the future, what we'll be doing,
rather than swapping directly between chains,
just as a mechanism to improve the usability of Connext
and the experience of running a node in Connext,
is we will mint a Nomad representative asset that goes across chains.
This will be the sort of default asset that is used in our system,
but then at the exit point, when the user is about to receive their liquidity,
it will be swapped for some local asset if needed, using a stable swap.
So the construction here is a little bit similar to something like Hop,
where, you know, Hop basically utilizes the arbitrary messaging bridges on rollups
to create hTokens, which are their own representation,
and then they swap them at the end into
the rollup's token if needed.
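A rough sketch of that exit-swap flow, in the spirit of the Hop comparison (the function and interface names are hypothetical, not Connext's actual contracts):

```typescript
// Hypothetical exit flow: if the bridged representation differs from the
// canonical local asset, swap through a stable-swap pool before delivery.
interface StableSwapPool {
  // Swap amountIn of tokenIn for tokenOut, failing below minOut.
  swapExactIn(tokenIn: string, tokenOut: string, amountIn: bigint, minOut: bigint): Promise<bigint>;
}

async function exitToCanonical(
  pool: StableSwapPool,
  bridgedAsset: string,    // e.g. the Nomad-minted representation
  canonicalAsset: string,  // the asset local DeFi actually uses
  amount: bigint,
  maxSlippageBps: number,  // basis points of acceptable slippage
): Promise<bigint> {
  if (bridgedAsset === canonicalAsset) {
    return amount; // nothing to swap; deliver directly
  }
  // Bound the acceptable output so a thin pool can't silently eat the transfer.
  const minOut = amount - (amount * BigInt(maxSlippageBps)) / 10_000n;
  return pool.swapExactIn(bridgedAsset, canonicalAsset, amount, minOut);
}
```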
How do you make sure you have sufficient liquidity
when people try to move large sums of money across,
and what about the tokens that you support?
I mean, you need to onboard tokens, right?
So you can't offer this for just any token.
You kind of need to know which tokens are coming in advance.
Yes.
Yeah, that's definitely a challenge.
I think this is something that we're still trying to understand
the best sort of user experience
and developer experience around, because it's complicated.
And it seems to really change based on use case.
What we have right now is the ability to set a slippage tolerance in these transactions.
So you can at least be sure that, as a user, you're not going to get completely wrecked by slippage because there wasn't enough liquidity.
And then in failure modes, what we do is we just allow the developer to basically exit the user's funds onto a given chain, and then have the developer actually be responsible for
figuring out how to handle the error case themselves.
And over the long term, the idea is, like, we're going to try to build a better taxonomy of
use cases and the way that those are handled from an error perspective or from a, like,
failure mode perspective, and then find, like, default mechanisms to handle those failure modes.
But at this stage, it's too early for us to really be able to say
definitively that every kind of use case of X type needs to be handled in this way,
because we just don't know yet.
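One way to picture the slippage-tolerance and failure-mode handling just described (purely illustrative; this is not Connext's actual SDK):

```typescript
// Hypothetical outcome type: either the canonical asset was delivered, or the
// transfer fell back to handing the developer the bridged asset to deal with.
type TransferOutcome =
  | { kind: "delivered"; asset: string; amount: bigint }
  | { kind: "fallback"; asset: string; amount: bigint; reason: string };

async function completeTransfer(
  amountIn: bigint,
  quotedOut: bigint,          // current quote for the exit swap
  maxSlippageBps: number,     // user-set slippage tolerance
  bridgedAsset: string,
  canonicalAsset: string,
  doSwap: () => Promise<bigint>,
): Promise<TransferOutcome> {
  const minOut = amountIn - (amountIn * BigInt(maxSlippageBps)) / 10_000n;
  if (quotedOut < minOut) {
    // Not enough liquidity at an acceptable price: exit the bridged asset
    // as-is and surface the error case to the integrating developer.
    return { kind: "fallback", asset: bridgedAsset, amount: amountIn, reason: "slippage tolerance exceeded" };
  }
  return { kind: "delivered", asset: canonicalAsset, amount: await doSwap() };
}
```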
Yeah, I mean, the complexity behind having all these different flavors of more or less the same asset,
this is actually really mind-boggling.
So in a way, you kind of need someone behind the scenes.
I mean, it's kind of like having this massive ball of yarn, right?
And someone behind the scenes kind of needs to, you know, perpetually order it and unwind it, and make sure that it doesn't tangle too badly.
And it actually gets worse, too. So, you know, there have been several waves of chain launches. On a lot of the earlier chains, like Polygon, BSC, Avalanche, there were already chain-built canonical bridges, right? And the chains themselves had lead time to have those canonical bridges be publicly used before other bridges came and started creating their own representative assets. But on a lot of the newer chains that have launched in this second wave
of L1 releases, things like Moonbeam and Evmos, things are a lot more messy.
So, for example, on Moonbeam and Evmos, Connext and Nomad are technically the default bridge,
or technically the officially supported bridge.
But that hasn't really meant anything because, like, these are permissionless systems.
It's possible for a lot of other projects to come and deploy on these systems.
And that's actually a good thing if they do.
But what that means is, now on Moonbeam, for instance, there are also
Multichain
representative tokens,
the anyTokens;
there are Celer representative tokens,
there are Synapse representative tokens,
Wormhole representative tokens.
And now it's an extremely confusing
problem for the user.
And, you know, even if the Moonbeam team says,
okay, X, Y, Z tokens are the canonical representation,
it ultimately isn't even really up to them.
It really is up to
the network effects of the applications
that are running on top of this
in this ecosystem. So we have yet to figure out a good way to solve this mess. There are definitely
a lot of proposals out there to do things like allow multiple bridges to mint the same token,
but then that just increases risk massively across the entire space. So we're generally pushing
back against things like that. But yeah, it's a very big, very hairy problem that at the moment
doesn't really have a solution. I want to talk about security a little bit later, but kind of
let's talk about this for a little while longer. So I mean, basically, we've
seen a similar version of this problem across DEXes, right? So basically the arbitrage opportunities
that you have between different DEXes. And the way that the market has solved that is by
having market makers who, I mean, in a negative reading, they're arbitrageurs,
but in a positive reading, they make the market more efficient,
because the price of different assets normalizes across different
DEXes, right? Do you think having such a radically decentralized approach to the bridge problem
would be a good solution, or do you think it has drawbacks? Yeah. So basically the question is,
is it a good idea to just offload the balancing between these different kinds of bridge options,
and moving between these different assets,
to liquidity providers and market makers,
who can ensure that users will get the right asset,
and as a result of that,
we can still maintain a good user experience
while having multiple bridges?
I think so.
I mean, I think that's, regardless of whether or not
it's a good solution,
which I think it is, I mean, I think it is in the sense
that there really is no other option.
So I do think that, like,
we are headed in that direction regardless, which is that, like, there will likely be a lot of
stable swaps on all of these chains that will allow you to move between these different
representative assets. And I think eventually, long-term, all of these projects will eventually,
they just, like, plug directly into the stable swaps so that the user is just getting the right
asset on a given chain. But I think, I think that core problem of, like, how do you even determine
what the right asset is, is just, at the moment, just a huge mess. It's just like an open field,
right now, where, you know, chains like
Moonbeam are actively battlegrounds where we and other bridges are fighting for market share and
trying to work towards having our version be the canonical representation. And of course,
on our end, it's not the worst situation in the world if that doesn't happen, because we
can just allow for swapping into the right one. But at the same time, a big part of what we care about is
offering a globally trust-minimized option. We really feel strongly that the inherent risks associated with bridging and with cross-chain interoperability are much higher than even chains themselves. There are systemic problems associated with that, and a lot of the systemic problems also arise from potential economic failures, right? Not just, you know, the bridge gets hacked, or there's an implementation bug, or
a security vulnerability, but instead there is economic risk, where markets, or
the economics of bridges, could be manipulated to attack them. And this is basically what
happened with Terra, for instance, where Terra's markets were manipulated
to exploit an economic vulnerability in the way Terra's
UST system worked. And the idea is that, long term, if we want this ecosystem to be sustainable, we need
to build systems that remain invulnerable to those kinds of attacks, you know, because those
kinds of attacks will happen, either from theoretically shady Wall Street
organizations, or perhaps governments, or perhaps large-scale corporations or billionaires that
want to find ways to extract value. And so my concern is, you know,
on the one hand, it doesn't really matter that much if we end up in a world where,
you know, you're utilizing Nomad and Connext to go across chains,
and then at the exit, we're swapping into anyUSDC or something like that.
But at the same time, that of course means that users are now holding anyUSDC in their
wallets. And so now they are still subject, always permanently subject, to the risk of Anyswap or Multichain.
Yeah, I have so many questions about security, and basically how security guarantees transfer from chain to chain.
But I kind of want to save them, because first I want to hear about Nomad and Connext, and basically who does what, and how they interplay, and how this partnership came about.
So I can start with how the partnership came about.
So we, you know, we've been researchers in the space for a really long time, working with a lot of the key research teams that are out there.
So, you know, we have worked super closely with, you know, the Optimism founders,
the Arbitrum founders,
the zkSync founders, et cetera, et cetera, et cetera.
And that, I think a lot of people don't realize
that that community is actually really small.
So, like, even though it seems like a lot of these projects
are competitive with one another,
we all, like, share notes, we all talk to each other constantly.
We all present at the same conferences, research conferences,
because ultimately, you know, there is, like,
of course there is competition,
but at the same time, like, we sort of all recognize
that the market for this is so massive that at this stage,
it's, like, pretty positive sum.
For us, you know, early on, one of our key advisors around this interoperability piece was always James Prestwich,
because he is just one of the foremost people who has been thinking about bridging, for many, many years longer than anyone else in the space.
Yeah, we had him on for Summa, probably like four years ago or so,
and basically he had just launched the Bitcoin Ethereum auction bridge.
Yeah, James is awesome.
Super cool.
And he's been thinking about this like very, very deeply for quite a long time for a good reason.
And he has like pretty nuanced opinions on this stuff.
It's not, you know, he understands that there are like very big tradeoffs and understands that there is like a, there is room for multiple different kinds of solutions out there.
And so I think a lot of a lot of the reason that we ended up working with Nomad was just because of our very deep relationship with James.
And as a result of that, also the very strong cultural
similarities between the Connext team and the Nomad team, which is that
we are all people that have been very focused on producing value in the space,
around research, and around building the sustainable public goods that are actually
trust minimized, and actually trying to do something good. And we are also both teams that
care about the same kinds of things, that have the same kinds of attitudes towards
building communities and being sustainable organizations.
But then beyond that, I think, I think there's also just, there was just like a natural fit
as well.
So what we found was that, over time, you know, Connext historically had been
focused on atomic swaps, because we were really interested in just solving the liquidity
piece first, before expanding to other things.
Now, of course, we really do think that the holy grail is to be able to do any kind
of arbitrary messaging and also have liquidity built in.
And that was something that we were
struggling with for a while, because what we found was that there was just no really good way to do that
out there. And this kind of gets into the interoperability trilemma piece,
which basically breaks down
the tradeoff space around
bridges, and shows that it's actually really difficult to have any system that
simultaneously is deployable to many different chains, supports arbitrary message
passing, so is generalized, and then also is trust minimized.
And as we were looking out in the space, we wanted to
either create or work with some sort of mechanism to do this.
And the only mechanism that we found that actually had acceptable tradeoffs was Nomad.
So we became involved pretty early.
You know, we've been collaborating closely with the team for quite a long time.
And at this stage, the way that the relationship works is that we sort of think of
ourselves as a shared stack.
We actually call it the modular interoperability stack, which is this thesis
that, similar to the scalability trilemma, and modular blockchains as a solution to the
scalability trilemma, because there is this interoperability tradeoff space, it is not possible for us to
just have a single solution that solves everything associated with bridging and interop. Instead, we need to
find ways to split out the responsibilities and build a stack of protocols that can work with each other
in a limited way, with kind of fixed interfaces and fixed delineation of responsibility,
that can then provide a solution that actually is as close to ideal as possible.
So the way that the delineation of responsibilities works is that Nomad provides this base layer of message passing.
Nomad allows you to do generalized communication between blockchains with very reasonable, very trust-minimized security assumptions,
but with the trade-off of needing 30 minutes to pass messages between chains.
And that 30 minutes is the amount of time needed to instigate a dispute if fraud occurs within Nomad.
So that's because it's inherently optimistic; that's why you need the dispute window. Okay.
Exactly. So in Nomad's model, the assumption is not like
in something like IBC, where you have what is called a validity proof, where you have
in something like ibc where you know in ibc you have what is called a validity proof where you have
one chain, like the validators of one chain are like verifying the consensus of another chain.
And they're doing this for every single message that that goes between chains.
But of course, the downside of validity proofs, and this is mirrored onto like CK.
Rolos, for instance, is that you're for every single transaction or batch of transactions
you're having to do this proof.
And so the cost overhead of it is quite high.
The complexity of is quite high.
The, like, cryptographic and like, you know, consensus dependencies of this are quite
high because you have to kind of figure this out for every single chain.
Whereas in Nomad, it's the other end of the spectrum, similar to optimistic roll-ups where you actually say, okay, well, we're not actually going to validate anything unless something actually goes wrong.
And so you don't validate, like you don't submit proofs that any given state transition or any given update is valid.
Instead, you just say, let's just assume it's valid and then wait 30 minutes or wait a certain amount of time for someone to say, oh, they have a problem with this.
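A highly simplified sketch of the optimistic pattern being contrasted with validity proofs here (a toy model, not Nomad's implementation): updates are accepted without proof and become executable only after a dispute window passes without a fraud claim.

```typescript
// Hypothetical optimistic message queue: accept updates immediately,
// execute them only after a fraud-dispute window has elapsed.
const DISPUTE_WINDOW_MS = 30 * 60 * 1000; // ~30 minutes, as described above

interface PendingMessage {
  id: string;
  payload: Uint8Array;
  acceptedAt: number; // timestamp when the update was posted
  disputed: boolean;  // set by a watcher that claims fraud
}

const pending = new Map<string, PendingMessage>();

function acceptUpdate(id: string, payload: Uint8Array, now: number): void {
  // No validity proof required up front; optimistically assume honesty.
  pending.set(id, { id, payload, acceptedAt: now, disputed: false });
}

function dispute(id: string): void {
  const msg = pending.get(id);
  // In a real system, a fraud proof would be verified and the updater slashed.
  if (msg) msg.disputed = true;
}

function tryExecute(id: string, now: number): Uint8Array | null {
  const msg = pending.get(id);
  if (!msg || msg.disputed) return null;
  // Only executable once the window has elapsed without a dispute.
  if (now - msg.acceptedAt < DISPUTE_WINDOW_MS) return null;
  pending.delete(id);
  return msg.payload;
}
```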
But this is predicated on the assumption that there's enough people watching this stuff, right?
So basically, yeah, and this is something that, I mean, if you look at the past six months or so with what has gone wrong with bridges, I mean, the Ronin bridge hack, no one even noticed for like five days.
And I mean, do we have enough analytics to be alerted to things?
You call them watchers, right? How many watchers do you have
on these kinds of bridges? So right now, the watcher set in Nomad is permissioned. The reason for this
is just that it's a stepping stone, a progressive decentralization process.
It's the same reason why optimistic rollups, for instance, don't actually implement fraud
proofs, so technically they're sort of custodial. But obviously
people recognize that the model itself makes sense, and recognize that it's a process to get there.
What Nomad is actually working on right now is expanding that watcher set.
So to move away from just them running a bunch of watchers, to allowing other people to run watchers.
It'll still be permissioned at the moment, but it will be much larger.
And one of the key things there is that, unlike with many other systems,
in Nomad's case, there are already a bunch of actors that are watching the chain and looking for fraud.
A good example of this is Connext nodes.
So I guess one piece of context that's missing here:
Nomad is providing this messaging layer, and Connext is providing liquidity that sits on top of it.
And what we do is we short-circuit the 30-minute Nomad latency in certain cases where it's safe to do so.
And those cases are cases where our nodes are willing to front capital for transactions,
where they have the permission to execute the transaction on the receiving chain,
so basically where it's an unpermissioned call,
something like a Uniswap
swap, rather than something like a token mint.
And then lastly, and most importantly, they do so only when they recognize that
fraud hasn't actually occurred, because they are the ones that are taking on the risk.
And what's interesting there is that they are actually already performing the function
of a watcher.
All we have to do is add a little bit more code for them to start a dispute on chain.
But all of the resource overhead of them watching for fraud has already occurred.
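Putting the three conditions just listed together, a router's fast-path decision might look like this (a sketch with hypothetical names, not Connext's actual router code):

```typescript
// Hypothetical router logic: short-circuit the 30-minute Nomad latency only
// when it is safe for the router to front its own capital.
interface TransferRequest {
  amount: bigint;
  isPermissionlessCall: boolean; // e.g. a swap, not a privileged token mint
}

interface RouterState {
  destinationLiquidity: bigint; // capital the router can front on the receiving chain
  fraudObserved: boolean;       // result of the router's own chain-watching
}

function shouldFastFill(req: TransferRequest, router: RouterState): boolean {
  return (
    req.isPermissionlessCall &&                  // router must be allowed to execute on the receiving chain
    router.destinationLiquidity >= req.amount && // router must be able to front the capital
    !router.fraudObserved                        // router only takes the risk if no fraud was detected
  );
}
// If this returns false, the transfer simply falls back to the slow,
// fully verified Nomad path: slower, but with no extra trust assumptions.
```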
And we have, I think, 131 routers on our testnet for the next upgrade that includes Nomad.
So that's already 131 watchers that could go live pretty much immediately.
Which already, you know, if you assume like an 80% uptime for watchers,
I think the odds of all of the watchers being offline at the same time then becomes
in the 10 to the minus 20 range or something like that, which is pretty awesome.
That's low.
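As a back-of-envelope check on that liveness claim (the on-air figures are rough estimates): with n independent watchers each online a fraction u of the time, the chance that all of them are offline at once is (1 - u)^n.

```typescript
// Back-of-envelope liveness estimate, assuming independent watcher uptimes.
function allOfflineProbability(watchers: number, uptime: number): number {
  return Math.pow(1 - uptime, watchers);
}

// At 80% uptime, ~28 watchers already push the failure probability into the
// 1e-20 range quoted on air; 131 watchers make it astronomically smaller.
console.log(allOfflineProbability(28, 0.8));  // ~2.7e-20
console.log(allOfflineProbability(131, 0.8)); // ~2.7e-92
```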
Yeah. As a final point here, there actually is a live system where this works, and it's been shown to work. So the Ronin Bridge hack is a great counterexample of why multisig bridges don't work. It was like the first example of a root-of-trust compromise for a bridge, and it shows the risk of having this permission-based bridging mechanism, where you have keys that have the ability to arbitrarily mint
on other chains, versus a revocation-based bridging mechanism like Nomad,
where anyone can dispute if something occurred.
But you're totally right that it wasn't noticed.
And I think that that was definitely a huge failure on that ecosystem's part,
that there weren't better analytics around this.
But a counterexample is the Rainbow Bridge hack that happened recently.
So for context, the Rainbow Bridge is a bridge that exists between NEAR
and Ethereum.
It is a fully trust-minimized bridge.
And the way that it works is that in one direction,
it is a fully native bridge, similar to IBC,
where I think the NEAR ecosystem is running a light client of Ethereum.
And then in the other direction, it is an optimistic bridge.
And the optimistic direction was attacked.
And the watchers of the Rainbow Bridge actually successfully detected that an attack had occurred.
The attack wasn't even like fraud from like the bridge updater,
but was instead a hack of the contracts themselves.
The watchers successfully detected the hack and paused the bridge.
So they basically stopped any sort of fallout as a result of the hack occurring,
unlike with Ronin or Wormhole or others.
So how do you incentivize the watchers, right?
Because basically, if you don't incentivize them, they can just
halt the bridge and grief everyone without any cost to themselves, right?
So in that way, you would actually end up trading security for liveness.
That is actually the main research question. Optimizing that process is what
remains to be able to make watchers permissionless.
The general idea is that you can use a combination of things like token incentives and then
also like bonds and slashing of those bonds to be able to ensure that, you know,
updaters are penalized for fraud, and then watchers are penalized for false reports of fraud.
And there's no real material financial upside for a watcher
to do this, other than the griefing vector of DoSing.
And so as long as the downside risk for a watcher is: hey, I'm going to lose
X, Y, Z amount of funds if I fraudulently, you know, stop this bridge, as long as
that's the case, you can be reasonably certain that it doesn't really make a lot of sense
for watchers to do that. Now, there's definitely a lot of research
that is currently in progress around this, to figure out what the bounds around that are. Like, how can we
be sure that the penalties for this are high enough for watchers to be
disincentivized from, you know, DoSing? And then similarly, how do you ensure that the
rewards are high enough for watchers to be incentivized to actually submit those transactions?
And how do you make sure, for instance, that if there are rewards, it is not possible for, you know, MEV bots to front run those rewards in the mempool, which happened in the NEAR Rainbow Bridge case, which is quite interesting.
So these are the kinds of questions that I think Nomad's researchers are dealing with at the moment, that are the main blockers to being able to completely open the system up.
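The incentive structure being described, with bonded updaters slashed for fraud and bonded watchers slashed for false reports, can be modeled in miniature like this (a toy sketch of the open design space, not Nomad's actual mechanism):

```typescript
// Toy model of the bond-and-slash incentive scheme described above.
interface BondedActor {
  bond: bigint; // stake that can be slashed
}

function resolveDispute(
  updater: BondedActor,
  watcher: BondedActor,
  fraudProven: boolean, // outcome of fraud verification
): void {
  if (fraudProven) {
    // Updater committed fraud: slash their bond, reward the watcher.
    const reward = updater.bond / 2n;
    updater.bond = 0n;
    watcher.bond += reward;
  } else {
    // False report: the watcher griefed the bridge (a DoS), so their
    // bond is slashed to make halting the system costly.
    watcher.bond = 0n;
  }
}
// Open research questions from the discussion: how large the bonds and
// rewards must be, and how to stop MEV bots from front-running the
// watcher's dispute transaction in the mempool to capture the reward.
```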
Cool, yeah, super exciting.
For what it's worth, by the way, these are also the same research questions that exist in optimistic roll-ups around incentivizing watchers.
So basically, seeing that Connext basically offers liquidity, or liquidity underwriting, on top of the bridge,
how do you deal with, how do you think about reorgs and probabilistic finality, right?
Because basically, if something happens on the chain that you send something from, and it
reorgs, and you have already paid out the money on the other chain,
I mean, is it kind of priced in, or can you somehow mitigate that danger?
This is a part of the risk of running routers in Connext.
And this is the risk that we try to mitigate.
It's like the risk as a router, basically what you're doing is you're saying,
I see that there is this slow 30 minute transaction that's coming over Nomad.
I see that it is possible for me to complete this transaction faster.
And I see that I have enough liquidity to do so, and I can earn a small amount of fees by doing that.
And so we sort of like mitigate the latency tradeoff of Nomad.
We also ensure that the user gets the right asset that they need.
And the only kind of tradeoff, or not necessarily the tradeoff,
but at least the decision matrix around doing that, is then based on what is the risk to the router that is actually making this happen.
And that risk profile is based on: how likely is it that fraud has occurred at Nomad?
What is the risk of some sort of chain event, like a reorg or a 51% attack?
And what is the risk of some sort of failure mode on the receiving chain, that results in me as a router not being paid out?
In the reorg case, at the moment, what we do is just wait. We wait for enough blocks. We've done a lot of statistical analysis over the course of the last year and a half, just because we've been live for that long, to understand how many blocks we need to wait on each chain. And generally, what we found is that reorg risk is really only significant on fast chains with very, very low fees, like BSC and Polygon, where you can see deep reorgs; on most other chains it isn't as much of a concern. So usually waiting about two minutes everywhere appears to work quite well. What we'd like to do is move towards a model where we don't need to wait out reorg risk at all, but that's quite difficult.
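In practice, "just wait" amounts to requiring a per-chain number of confirmations before treating the sending-chain transaction as final. A sketch using ethers, where the confirmation depths are illustrative stand-ins for the statistically derived values described above:

```ts
import { ethers } from "ethers";

// Illustrative per-chain confirmation depths (~2 minutes of blocks); the real
// values would come from the kind of statistical analysis described above.
const CONFIRMATIONS: Record<number, number> = {
  1: 10,   // Ethereum mainnet
  56: 40,  // BSC: fast, cheap blocks -> deeper reorgs observed
  137: 60, // Polygon: likewise
};

async function waitOutReorgRisk(
  provider: ethers.JsonRpcProvider,
  chainId: number,
  txHash: string
) {
  const depth = CONFIRMATIONS[chainId] ?? 20; // conservative default
  // Resolves once the transaction is buried under `depth` confirmations.
  return provider.waitForTransaction(txHash, depth);
}
```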
So one thing we have talked about in our community, for instance, is the possibility of building some sort of reorg insurance, where users underwrite the reorg risk of a router. And because you can detect reorgs on chain within a contract, you could actually deterministically pay out that insurance bond to a router if a reorg occurs. But again, this has only been discussed in theoretical terms; the economics of it are something we'd need to figure out. Basically: what is the likelihood of reorgs? What does that scale of risk versus reward look like? And what kind of fees would you need to charge as an insurance provider in order to cover the massive additional risk you might take on on some chains?
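The detection half of that insurance idea can be sketched off-chain: record the block hash at the height you acted on, and later check whether the canonical chain still reports the same hash. (An on-chain version would use the EVM's recent block-hash access instead.) A minimal sketch with ethers:

```ts
import { ethers } from "ethers";

// Detect a reorg by comparing a previously recorded block hash against the
// hash the canonical chain now reports at the same height.
async function reorgedSince(
  provider: ethers.JsonRpcProvider,
  blockNumber: number,
  recordedHash: string
): Promise<boolean> {
  const block = await provider.getBlock(blockNumber);
  // If the hash at that height changed, the block we acted on was reorged out —
  // the condition that would trigger the insurance payout described above.
  return block !== null && block.hash !== recordedHash;
}
```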
Yeah.
I mean, as soon as Ethereum, and possibly a lot of other EVM-based chains, move to proof of stake, the reorg risk doesn't fully go away, but it reduces greatly, right?
Reduces, yeah.
And we've been working closely with the Polygon team, for instance, on this problem, talking with them about ways to think about it. I know this is a huge priority for them as well; they're working on improving their own consensus mechanism and the way nodes operate in their network to reduce the rate and depth of reorgs as much as possible. But yes, it is definitely a problem right now with probabilistic finality, and the solution for now is just for routers to wait longer, long enough that they're comfortable.
So how do you guys think about capital efficiency? Being the liquidity underwriter entails having capital at hand to pay people out. So how do you think about not having too much capital sitting in any one stockpile?
Capital efficiency is a really interesting question and problem. The ideal scenario, which is what we had originally tried to optimize for, is that you have a certain amount of liquidity available on each chain, and you just utilize that pool of liquidity to do transactions; no additional liquidity is required, and no lockups of liquidity are required. Our existing system, the V1 of our interoperability network, does exactly this: you send transactions to a router, and through an atomic swap the router receives funds on one chain and gives you funds on another chain. In theory, this is the most capital-efficient option. But what's interesting is that there's capital efficiency, and then there's capital utilization, and the utilization is actually not that great, even if the efficiency is.
The reason for this is that, say, currently there are a lot of people transacting to get out of FTM and onto other chains. So say you're going from FTM to Polygon. While the liquidity a user sends on FTM is immediately usable by the router to send a transaction in the opposite direction, what we found is that in most cases, the movement of funds between chains, and the patterns by which people rotate between chains, are unidirectional week to week. The chain may change, but everybody will flood to a given chain, or flood away from a given chain, in a given week. And the difficulty with this is that, at least with what exists currently, you end up with liquidity just piling up in a given place. For instance, tons of liquidity piles up on FTM because everybody is trying to exit that chain. And while that capital is usable by someone who's going into FTM, the demand for going into FTM is low. So we have this secondary problem of how to get liquidity off of FTM and to somewhere it's more likely to be used. And so utilization of that capital ends up being quite low.
What we ultimately came to the conclusion of is that it's actually better for us to take a hit on capital efficiency if we can increase utilization. So we decided to move towards a model where routers in our network send and receive capital on the same chain, so that they don't have to deal with this process of rebalancing, and the process of running a router becomes as passive as possible. They can just turn it on, put liquidity in, and let it happen. This results in the best utilization, because you don't have liquidity piling up on chains that aren't being used. The trade-off is that you now have a slightly reduced capital efficiency.
So now what happens in our network is this: the base layer of the whole process is the transaction that happens across chains with Nomad, where you burn a Nomad representative asset on one chain and mint a Nomad representative asset on another chain. But to make that process happen faster, and basically spare the user the 30 minutes of waiting, the router now needs to take on that 30-minute lockup. So the router has its capital efficiency reduced, because its capital is locked up for 30 minutes every time it is used. In addition to that, you now also need user-provided, passively provided liquidity on each chain in a StableSwap pool, where, if needed, you swap from the Nomad representative asset to the canonical asset on that chain.
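Putting those pieces together, the fast path looks roughly like the following sequence. All function names here are hypothetical stand-ins for the protocol steps, not Connext's or Nomad's real APIs:

```ts
// Sketch of the fast-path sequence described above. Every function is a
// hypothetical stand-in for the real protocol step it is named after.
declare function nomadBurn(originChain: number, asset: string, amount: bigint): Promise<string>;
declare function routerFront(destChain: number, nomadAsset: string, amount: bigint): Promise<void>;
declare function stableSwap(destChain: number, fromAsset: string, toAsset: string, amount: bigint): Promise<bigint>;
declare function nomadMint(messageId: string): Promise<void>; // completes ~30 minutes later

async function fastTransfer(origin: number, dest: number, amount: bigint): Promise<bigint> {
  const messageId = await nomadBurn(origin, "nomadUSDC", amount); // slow 30-min Nomad message starts
  await routerFront(dest, "nomadUSDC", amount);                   // router locks its own capital now
  const received = await stableSwap(dest, "nomadUSDC", "USDC", amount); // user gets the canonical asset
  await nomadMint(messageId); // ~30 min later: router is reimbursed and its lockup ends
  return received;
}
```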
That said, I think because of the new mechanism, we'll actually end up with much better capital availability and utilization, and probably much better pricing too. The reason is that the incentive to rebalance the system, where rebalancing means swapping assets back into Nomad-flavored assets and sending them in the opposite direction to another chain to generate more Nomad credit there, is now concentrated in the StableSwap pools on each chain. So the pricing is concentrated, which means you get the best possible pricing and the least amount of slippage, versus having each router maintain its own pricing curve and so on.
And then in addition to that, while there is a lockup of funds, that lockup is actually relatively small compared to the possible utilization of those funds and the frequency with which you can recycle them. For example, say, pessimistically, the Nomad lockup takes a full hour, in case we decide to do batching or things like that, which we're not doing at the moment, but say we do in the future. We currently have about $40 million of liquidity. So say the network has $40 million of liquidity that is locked up for a maximum of one hour every time it is utilized. That means with $40 million of liquidity, you can do close to a billion dollars of daily volume. That's far more capital efficient than many other things in the space anyway. So at that point we said, okay, this makes sense from an efficiency and utilization perspective.
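The arithmetic behind that claim, under the pessimistic one-hour-lockup assumption:

```ts
// Back-of-the-envelope throughput under the pessimistic 1-hour lockup.
const networkLiquidity = 40_000_000; // $40M, the figure quoted in the episode
const lockupHours = 1;               // pessimistic, assuming future batching
const cyclesPerDay = 24 / lockupHours;
const maxDailyVolume = networkLiquidity * cyclesPerDay; // $960M, i.e. close to $1B/day
```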
What's the fee you take in terms of basis points?
Yeah, there are several different kinds of fees in the network. There's a fee that routers take for the lockup of liquidity, and that's five basis points. Over time, that may decrease or increase depending on network dynamics and things like that; we haven't quite figured that out yet. But we have found that five basis points is usually about 10x cheaper than most other options out there, and even when it isn't, it's still significantly cheaper. It is definitely the cheapest option by far at the moment.
Then in addition to that, there are two other kinds of fees. There are the LP fees for the StableSwap pools; we don't know exactly what those will be yet, but it will be a small fee paid out to the passive LPs of the StableSwap. And then, of course, there's the slippage in that StableSwap. Generally we expect these to be fairly tight, because these are all StableSwap AMMs, which are highly optimized for this. And generally speaking, this is passive liquidity, so it's much easier for us to bootstrap liquidity when users are able to passively LP.
The last kind of fee is gas fees. Users need to pay the gas for the transactions they do on both the sending and receiving chains, and they pay all of this in the sending chain's native asset. The idea behind this is that, in the past, we had users pay gas fees from the transacted asset, like USDC, as it goes across chains. What we realized is that if we end up supporting long-tail assets, it may not be the case that relayers and other service providers are willing to accept fees in some XYZ long-tail asset that doesn't even have a market price. So we decided that the most acceptable asset, one we can be sure users will have and that relayers and other service providers will be willing to accept, will always be the sending chain's native asset.
The user experience of this is kind of nice as well, and so is the developer experience, because all users are doing is paying some additional gas fees on top of the gas fees they would already pay to do a transaction on that chain. And similarly, as a developer, you're just making sure the user pays this additional fee, and then monitoring and bumping that fee if needed, say, if you need the transaction to go through faster on the receiving chain.
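In code, "pay it all in the sending chain's native asset" means attaching the extra fee as native value on the origin-chain transaction, on top of the gas the user already pays. A hypothetical sketch; the contract and method names are made up, not Connext's API:

```ts
import { ethers } from "ethers";

// Hypothetical: the user attaches the relayer fee as native value on top of
// the gas they already pay, so relayers never need to accept long-tail tokens.
async function transferWithNativeFee(
  bridge: ethers.Contract, // made-up bridge contract
  destChainId: number,
  recipient: string,
  amount: bigint,
  relayerFee: bigint // denominated in the sending chain's native asset
) {
  const tx = await bridge.getFunction("transferToChain")(
    destChainId,
    recipient,
    amount,
    { value: relayerFee } // fee rides along as native value
  );
  return tx.wait();
}
```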
I mean, that sounds like a really good business model, though. I'm talking about the five basis points for the liquidity provision for the half hour. If you had perfect capital utilization, that would mean you could do a billion dollars of volume a day. And if you get your risk management right, so you're not paying out a lot for failures, five basis points on a billion dollars is half a million a day, right?
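For the record, that arithmetic checks out:

```ts
// 5 basis points on $1B of daily volume:
const dailyVolume = 1_000_000_000;
const routerFeeBps = 5;
const dailyRouterFees = (dailyVolume * routerFeeBps) / 10_000; // $500,000 per day
```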
Yeah, it is a really lucrative model, and it's extremely low risk. This is a way for people to run infrastructure and earn at extremely low risk. Of course, it's demand-driven, and I think the demand on the network, basically the ratio of network demand to the amount of liquidity provided by router operators, is ultimately what determines the returns for router operators. But in a demand-driven scenario, you could easily get 50-plus percent APR on any asset you're providing liquidity in.
That's crazy.
Actually, that would be APR, so the APY is even higher.
Wow.
So let's talk about, we're kind of over time already, but there's one thing I really want to cover: the user experience. Ultimately, what you want is that the user shouldn't have to know which chain they're on. I should be able to go to a dapp, and it should just work. And especially with the DeFi primitives we've seen in the past, how is the composability of those DeFi primitives, or other primitives for that matter, going to come together with bridges? Because these seem like two difficult problems. Is this a tractable problem?
Yeah, that's a good question. I think what we're experiencing right now, basically what we need to go through, is a transition in decentralized application development: going from the synchronous model, where everything runs on a single chain and you can be sure you have results within the same block for anything you build, to an asynchronous model that's more similar to how web applications are built more broadly. And I think the difficult part will be figuring out how to make that transition happen whilst keeping the developer experience and user experience as similar to what exists today as possible.
Our thesis has always been that most users, especially new users, are not going to care at all about what chain they're on. They don't want to; really, they just want to use an application. And if you're operating under those assumptions, then you should move towards a world where, if you're building an application, that application should be able to accept transactions from many chains, and should potentially even be able to have liquidity pools on multiple chains that are connected to each other.
I think that's possible, but it will involve a transition that we're trying to make happen right now, which is that developers are going to have to move to a mental model where they're not necessarily receiving the results of a given transaction immediately. Instead, they'll do what you currently do when building an application with JavaScript, for instance, where you make an asynchronous call to another function that's just another process living somewhere else on the internet. You don't know when you're going to get a response; you don't know if you're going to get a response. So now you need to have handlers for this. You need an error handler that handles what happens if you don't get a response or you get an error. And you also need a callback handler that's executed when the return data comes back from the other chain, which, given the Nomad latency, may be 30 minutes or so in the future.
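That asynchronous mental model maps directly onto familiar JavaScript patterns. Here is a minimal sketch; the crossChainCall and awaitResult helpers are hypothetical, not Connext's API:

```ts
// Minimal sketch of the async cross-chain mental model described above.
// crossChainCall / awaitResult / the handlers are hypothetical stand-ins.
type XCallResult =
  | { status: "success"; returnData: string }
  | { status: "error"; reason: string };

declare function crossChainCall(destChainId: number, calldata: string): Promise<string>;
declare function awaitResult(id: string, opts: { timeoutMs: number }): Promise<XCallResult>;
declare function handleFailure(reason: string): void;
declare function handleReturnData(data: string): void;

async function callRemoteChain(destChainId: number, calldata: string): Promise<void> {
  try {
    const messageId = await crossChainCall(destChainId, calldata); // fire the async call
    // Return data may arrive ~30 minutes later, given the Nomad latency.
    const result = await awaitResult(messageId, { timeoutMs: 30 * 60 * 1000 });
    if (result.status === "error") {
      handleFailure(result.reason);        // error handler: refund, retry, or surface to the user
    } else {
      handleReturnData(result.returnData); // callback handler: runs when data comes back
    }
  } catch (e) {
    handleFailure(`no response before timeout: ${e}`); // you may never get a response at all
  }
}
```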
But yeah, it's a good question, and I think it's going to be quite interesting to see how this plays out. Generally, from our perspective, what we've seen is that most, if not all, user-facing interactions can be short-circuited by Connext, so those can happen in under two minutes. What that means is, if you're a user trying to use Uniswap on another chain, or, say, you're using ParaSwap and you want to get the best rate across all chains, ParaSwap can actually create transactions that go through Connext and aggregate all of their liquidity on multiple chains together. And that can happen in two minutes, because the xcalls on each chain are unpermissioned. Given that, we generally expect this to be an acceptable user experience for most users, who are used to using web applications and dealing with that kind of latency. And we think there are ways to present that latency increase, from zero seconds to two minutes, in a way that makes sense for users: you can give them transparency, you can show them Connextscan or a network explorer to track the transaction lifecycle, and if something goes wrong, you can surface that really accurately.
Cool.
So, Arjun, tell us what's next for Connext. What do you plan to get done within the next year or so?
Yeah.
The two biggest things the team is focusing on right now, or I guess there are three big things we're focusing on. The first is the launch of the Amarok upgrade, which is the upgrade that incorporates Nomad and moves towards this generalized messaging pattern. We already have that upgrade on testnet, and there are a lot of people already building against it. I think we announced the testnet publicly last week, and since then we've already had 131 routers set up on the testnet and about 15,000 transactions, which is incredible. It's mind-blowing that so many people have been interested in building on it and experimenting with it. So we're super excited about getting that live as fast as possible. We have audits scheduled to begin in about a week, so around the second week of June, running until the start of July, and we should go live shortly after. That's a big part of the focus of the engineering team and protocol team at the moment.
Then the other two big pieces: we have a contributor program that's ongoing. This is a way for people to start getting involved with the Connext ecosystem. A big part of this is that we want to take existing processes around running and growing the router ecosystem, running and growing the community, and so on, and spin them out to the community entirely. We want our community to be self-operating. We want routers to be self-organizing to apply for grants, working internally to improve the experience of operating a router and onboarding new routers, versus the core team doing everything. That has already kicked off, and there's absolutely room for people to participate. So if you're a person outside of the U.S. and you're interested in working with Connext, or even just interested in participating and being involved, you can sign up at contribute.connext.network and earn tokens for helping us build this ecosystem.
And then the last main focus is, of course, our token launch. We announced about a month ago that we're heading towards releasing the NEXT token, which is going to be a governance and staking token in our network. There are also other token mechanisms that we've been experimenting with and thinking about, but we want to be very conservative about the way we implement them, because it's really hard to take back a token model once you've implemented it, if that model turns out to be incorrect. And we really want to make sure there's a lot of community input once the DAO goes live. We're currently expecting the token to go live within the next month or so. We're working on finalizing things like airdrop allocations, the remaining legal pieces, and the rollout of the token itself and how it will be distributed. And of course, given market conditions, we're thinking really carefully about making sure community members are treated fairly, and that people are materially rewarded and compensated for the things they do in the ecosystem, both now and after the DAO goes live, as much as possible, given that we may end up heading towards a bear market in the future.
Cool.
Arjun, it sounds like exciting times.
Yeah. None of us are sleeping.
Looking forward to seeing what comes out of Connext over the next couple of months.
Yeah.
And it's been a pleasure to have you on.
Thank you so much.
I really appreciate it.
Thank you for joining us on this week's episode.
We release new episodes every week.
You can find and subscribe to the show on iTunes, Spotify, YouTube, SoundCloud, or wherever you listen to podcasts.
And if you have a Google Home or Alexa device, you can tell it to listen to the latest episode of the Epicenter podcast.
Go to epicenter.tv slash subscribe for a full list of places where you can watch and listen.
And while you're there, be sure to sign up for the newsletter, so you get new episodes in your inbox as they're released.
If you want to interact with us, guests or other podcast listeners, you can follow us on Twitter.
And please leave us a review on iTunes.
It helps people find the show, and we're always happy to read them.
But thanks so much, and we look forward to being back next week.
