Unchained - How to Build the Fastest Onchain Experience With Monad, Sei, and Eclipse - Ep. 694
Episode Date: August 27, 2024. In this episode of Unchained, Keone Hon of Monad Labs, Jay Jog of Sei Labs, and Vijay Chetty of Eclipse Labs share insights on their distinct approaches to scalability and performance in high-throughput blockchains. They discuss the technical advantages of parallelized EVMs, the strategic decisions behind blockchain architecture, and the innovations driving the next generation of high-speed chains. Show highlights: 00:00 Intro 02:02 How Monad got started and its mission from the very beginning 03:46 The features that enable Monad to be a high-throughput blockchain 07:32 Why Monad chose to make a new blockchain instead of an L2 08:36 Why Keone believes that Monad offers the best experience for developers and why he doesn’t like the ‘Ethereum killer’ description 15:48 Monad’s big venture capital raise and how they’ll use the money 17:30 Monad’s strong community 19:21 The next steps for Monad and whether we’ll see a token soon 20:02 What Sei is and the role of the GameStop saga in the creation of it 21:31 Why Jay believes the EVM developer ecosystem is so strong 25:45 Why Sei pivoted from Cosmos to the EVM that led to the launch of its v2 27:14 What allows Sei to be “the fastest chain, even faster than Solana” 33:03 How Sei DB works, and why Jay says that the monolithic approach has many advantages to the modular one 45:35 How Eclipse works by combining Ethereum, Solana, and Cosmos 53:22 How Eclipse deals with the complexities of its modular architecture 54:54 What ways there are to transact in SOL on Eclipse 55:44 Vijay’s reaction to how Eclipse Labs has responded to the allegations against its founder and former CEO Neel Somani 57:21 How Eclipse aims to attract developers 1:01:15 What areas within crypto Vijay expects will flourish on Eclipse 1:03:49 The next steps for Eclipse and when the mainnet could launch Visit our website for breaking news, analysis, op-eds, articles to learn about crypto, and much more: unchainedcrypto.com 
Thank you to our sponsors! Polkadot Token 2049 Mantle Guests: Keone Hon, Co-founder and CEO at Monad Labs. Jay Jog, Co-founder of Sei Labs. Vijay Chetty, CEO of Eclipse Labs. Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
Hi everyone, welcome to Unchained, your no-hype resource for all things crypto.
I'm your host, Laura Shin, author of The Cryptopians.
I started covering crypto nine years ago, and as the senior editor at Forbes, was the first
mainstream media reporter to cover cryptocurrency full-time.
This is the August 27th, 2024 episode of Unchained.
Get ready for the world's largest crypto event, Token2049 Singapore, September 18th to 19th.
Balaji Srinivasan, Richard Teng, Arthur Hayes, and 300 others will hit the stage, joining 20,000 attendees.
Visit token2049.com for 15% off with the code UNCHAINED.
Link in the description.
Mantle's mETH is now the fourth largest LST with $1.3 billion in TVL.
mETH offers holders cumulative incentives and airdrops, in addition to native ETH PoS yields.
This includes exclusive rewards like EIGEN and COOK.
Check it out at mETH.xyz/campaigns.
Polkadot is the original and leading layer-zero blockchain with over 2,000 developers,
and the Polkadot 2.0 upgrade will be a massive accelerator for the ecosystem,
making it faster, more secure, and adaptable.
Perfect for GameFi and DeFi to build, grow, and scale.
Join the community at polkadot.network/ecosystem/community.
Today's topic is next generation parallelized EVMs.
Instead of our usual format, I'll be conducting mini interviews with three innovative blockchains,
Sei, Monad, and Eclipse, each of which employs distinct methods and faces unique tradeoffs in its quest to achieve high scalability.
First up, we have Keone Hon, co-founder and CEO at Monad Labs.
Next, we'll speak with Jay Jog, co-founder of Sei Labs, and finally we'll hear from Vijay Chetty, CEO of Eclipse Labs.
First, I'm talking to Keone Hon, co-founder and CEO
at Monad Labs. Welcome, Keone.
Thanks for having me, Laura.
Monad is an Ethereum virtual machine-compatible blockchain
that features what is being called a pipelined architecture
that enables a high throughput of 10,000 transactions a second.
And it also has one second block times.
So tell us how Monad got started and what problem you were trying to solve with it.
Monad started in the beginning of 2022.
There are three co-founders: Eunice Giarta, James Hunsaker, and myself.
So James and I are former co-workers from Jump Trading.
We worked together for about eight years, building performant trading systems,
and then spent a little bit of time in the crypto team at Jump,
mostly working on Solana Defi,
when we realized that there was a need for much more performant EVM execution
and decided to leave Jump and start Monad,
along with the third co-founder, Eunice Giarta, who has a technical product management background.
And when you were saying a more performant EVM, do you just mean, like, faster, or what were some of the
other things you wanted to try to push forward? Yeah, the core focus of Monad is to get the
maximal performance out of really minimal hardware. And in order to do that, we need to build
new software from the ground up and introduce new architectural improvements. So kind of the premise
of Monad is stacking four major improvements on top of each other that each have a bit of a
multiplier effect on the overall efficiency of the system. And by stacking those together,
we can deliver really exceptional performance. And what are those features? Right. Just to mention them
really quickly from sort of lowest level to highest level,
within the stack. Those improvements are a custom state database for storing Ethereum
Merkle tree data natively on SSD and enabling much more efficient access to that state,
utilizing all of the capabilities of the SSD. The second improvement is optimistic parallel
execution for running many transactions in parallel while still ultimately getting to the same end
state as if those transactions were run serially. The third improvement is
asynchronous execution, which is creating separate swim lanes between consensus and execution,
thus allowing execution to occupy the full block time as opposed to only a small fraction of it.
And then the final improvement is Monad BFT, which is a performance consensus mechanism for
keeping hundreds of nodes that are globally distributed in sync with each other.
Okay. So I think the part that probably a lot of people might have questions about is parallel processing,
just seeing that immediately sparks the question of, for instance,
how you would keep DeFi composability while having parallel processing.
Yeah, that's a great question.
Just to emphasize in Monad, the blocks are still linear,
and the transactions within a block are still defined in a linear fashion.
Like there's a total ordering between the transactions still.
So everything is exactly the same as in Ethereum from that perspective.
The parallel processing is all done under the hood.
It does not affect the outcome of running those transactions.
And that's the guarantee of optimistic parallel execution: it's doing a bunch of work in parallel
and doing enough bookkeeping so that the results of that parallel processing can be committed serially,
i.e. in the original order of those transactions, and we can be sure that every single commitment
is correct, re-executing if there's any inputs that have changed since then.
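A minimal sketch of the optimistic scheme Keone describes (illustrative Python, not Monad's implementation): run transactions in parallel against a pre-block snapshot while recording what each one reads, then commit in the block's original order, re-executing any transaction whose inputs have gone stale.

```python
from concurrent.futures import ThreadPoolExecutor

def execute(tx, state):
    """Run one transaction against a state snapshot, recording its reads and writes."""
    reads = {key: state.get(key, 0) for key in tx["reads"]}
    writes = {}
    for key, delta in tx["writes"].items():
        writes[key] = reads.get(key, state.get(key, 0)) + delta
    return {"tx": tx, "reads": reads, "writes": writes}

def optimistic_parallel_run(txs, state):
    # Phase 1: optimistically execute every transaction in parallel
    # against the same pre-block snapshot.
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(lambda tx: execute(tx, state), txs))
    # Phase 2: commit serially, in the block's original order. If anything
    # a transaction read was changed by an earlier commit, re-execute it
    # before committing, so the end state matches serial execution.
    for res in results:
        if any(state.get(k, 0) != v for k, v in res["reads"].items()):
            res = execute(res["tx"], state)  # inputs went stale: re-run
        state.update(res["writes"])
    return state

# Two transfers that conflict on "bob": the outcome still matches serial execution.
txs = [
    {"reads": ["alice"], "writes": {"alice": -10, "bob": +10}},
    {"reads": ["bob"], "writes": {"bob": -5, "carol": +5}},
]
final = optimistic_parallel_run(txs, {"alice": 100, "bob": 0, "carol": 0})
print(final)  # {'alice': 90, 'bob': 5, 'carol': 5}
```

The second transaction's optimistic run reads a stale balance for "bob", so it is re-executed at commit time, which is exactly the bookkeeping-plus-re-execution guarantee described above.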
And the other features you mentioned, you mentioned a custom state database. I'm assuming this is to address issues like blockchain bloat.
Yeah, that's a great question. Agreed that the single biggest bottleneck for execution is state access. And then a very related topic is, as the state gets
bigger and bigger, the cost of accessing state, like the latency to access any particular piece
of state grows in existing Ethereum clients. And the reason for that is that the databases
that Ethereum clients use, whether that's Geth or Reth or Erigon or other ones, they're basically
defining a database, but they're embedding it inside of another database. And the actual database that is
being used has a separate tree structure. Therefore, when data is being stored in the Ethereum
Merkle tree and someone's trying to navigate to read one of the pieces of state within that tree,
there's actually a huge amount of interaction that happens because each of those individual
nodes in the Merkle tree are being stored inside of another database structure. So basically,
we call it MonadDB. It's this custom database for storing state. And it's
specifically optimized for the problem of storing Ethereum Merkle tree data. But it's a very significant
undertaking because building a database from scratch is, it's a very complex process to build that
entire thing and define how all the storage is used. And so why build this as a separate blockchain
rather than as a layer two? There are a couple of major reasons. The first is decentralization.
At the end of the day, we think it's really important that there is decentralized
block production with many nodes that are participating in consensus. We think that's important
for censorship resistance. We think that's really important for decentralizing control over
the network overall. And with existing layer two solutions that we see even today, they pretty much
all still have a centralized sequencer. The other major reasons are performance and cost,
which go hand in hand. But at the end of the day, in order to build a really performant system,
one needs to optimize all layers of the stack, whether that's the execution component,
whether that's the consensus component, which is keeping all these nodes in sync with each other,
or other considerations like the data availability consideration.
And so Monad is an effort to tackle all parts of the problem and build a really efficient
singular system that delivers maximum performance.
You also mentioned asynchronous execution as another feature.
Tell us more about that.
The first thing to know in order to understand asynchronous execution is that, for example, in Ethereum, although there are 12-second blocks, the actual budget for execution is roughly 100 milliseconds, which is only 1% of the block time.
And so it's kind of like, I think there's a movie, Limitless, whose premise is that you only use, quote unquote, 10% of your brain.
What would happen if you could use 100% of your brain?
How much smarter would you be?
And there's this pill that people take.
and a bunch of hijinks that ensue from doing that.
So asynchronous execution is kind of the same concept.
It's the idea of trying to utilize the full block time for execution,
and the way that that's accomplished is by creating two separate swim lanes for consensus and execution.
And when producing a block, the consensus mechanism only involves communication between different nodes and the network
to come to agreement about the official ordering of transactions without executing it. And then as soon as that consensus completes on a
block, then two things can happen in parallel, the first of which is consensus over the next
block, and the other thing of which is execution over that block, which just got consensus.
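The two swim lanes can be sketched as a classic two-stage pipeline; the timings below are made up for illustration only:

```python
# Toy two-stage pipeline: consensus on block N+1 overlaps execution of block N.
CONSENSUS_MS = 400
EXECUTION_MS = 600

def serial_total_ms(n_blocks):
    """Non-pipelined: every block pays for consensus plus execution in sequence."""
    return n_blocks * (CONSENSUS_MS + EXECUTION_MS)

def pipelined_total_ms(n_blocks):
    """Pipelined: once the pipe is full, each further block costs only
    the slower of the two stages, so execution can use the full block time."""
    return CONSENSUS_MS + EXECUTION_MS + (n_blocks - 1) * max(CONSENSUS_MS, EXECUTION_MS)

print(serial_total_ms(100))     # 100000 ms
print(pipelined_total_ms(100))  # 60400 ms
```

With these toy numbers, overlapping the two stages cuts total time for 100 blocks by roughly 40%, which is the point of giving execution its own swim lane.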
So it's basically the idea of creating this pipeline of two phases, which ultimately allows
the full block time to get used for execution. Oh, okay. Oh, interesting. And then you also mentioned
Monad BFT is another characteristic.
Can you describe that?
Monad BFT is a high-performance consensus mechanism
that is responsible for keeping hundreds of nodes
that are fully globally distributed in sync with each other.
Monad BFT is a derivative of the HotStuff consensus mechanism,
and it has linear communication overhead,
meaning that, instead of all nodes having to communicate all-to-all with each other,
which results in a huge amount of traffic, there's only
direct communication between each leader and all of the other participating nodes in the network.
So it's really efficient, and it can carry a very high bandwidth in terms of the overall
payload. And both of these are really characteristics that are needed in order to allow the
consensus mechanism of Monad to keep up with the high performance execution capabilities.
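The linear-versus-quadratic communication difference can be illustrated with a quick message count (a simplification; HotStuff-family protocols also involve vote aggregation and multiple phases not shown here):

```python
def all_to_all_messages(n):
    """Every validator sends its vote to every other validator: O(n^2) traffic."""
    return n * (n - 1)

def leader_based_messages(n):
    """Leader broadcasts the proposal, each validator replies to the leader: O(n)."""
    return 2 * (n - 1)

for n in (100, 500):
    print(n, all_to_all_messages(n), leader_based_messages(n))
```

At 500 globally distributed nodes, all-to-all gossip would mean roughly 250,000 messages per round versus under 1,000 with a leader-based pattern, which is why linear overhead matters for keeping consensus fast at that scale.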
And so since Monad will be or is its own blockchain, but then is also EVM compatible,
how would you pitch this to developers who want to be in an EVM environment,
but then, you know, could also just go on Ethereum itself?
Right.
I think that Monad at this moment right now is delivering sort of the best of both worlds for developers,
both performance and portability.
As you said, most developers are building for the EVM, like they're building in solidity.
They're using existing tools and libraries.
People don't think about the applied cryptography research component, but almost all of the research is being done in the context of the EVM as well.
So there's a really significant network effect around this existing virtual machine architecture.
And Monad being fully EVM compatible allows developers to reuse existing components or even when they're building new things.
to build them knowing that they're backward compatible with this really well-defined standard.
But then at the same time, developers get really significant performance improvements
from deploying their applications in the Monad environment.
But because there are already network effects on Ethereum, then how do you deal with that in your pitch?
Yeah, I think there's a variety of different network effects.
I would say that the benefit of being able to use existing libraries, tools, many other applications that have already been built for the EVM upon which new applications can compose, all of that is really significant.
But the other part is that ultimately, with many applications that already exist in other EVM environments migrating over to Monad, there's a network effect in the Monad environment for
all of those applications and users as well.
And so I did see Monad described by Coinesque as, quote, an Ethereum killer, and I wondered
what you thought of that description.
I, yeah, definitely I'm not a fan of that description.
I think that at the end of the day, the Monad project is an effort to focus on orthogonal
directions of research that haven't been focused on as much.
Our team is really focused on, you know, exploring a vertical in the Ethereum
scaling space that I think is really needed. I think, you know, there's many different
pillars. Some people are focusing on data availability. Some are focusing on roll-up design and
optimistic mechanisms or zero-knowledge mechanisms. We're just exploring another vertical in
the Ethereum scaling space. I think the other thing to mention is that, you know, our team
really hopes to contribute back to the Ethereum scaling roadmap overall by proposing
improvements as EIPs and just more generally collaborating with other researchers in the space.
So it's really not by any means an effort to try to kill Ethereum.
It's really an effort to try to enhance the capabilities of Ethereum.
Well, that's so interesting.
So you would submit proposals as EIPs to the Ethereum community, which would be, you know,
if they're adopted, would then go to Ethereum itself.
Right.
But then is that more just because you would like to see more compatibility between Ethereum and Monad?
Is that because it's not going to like create a change in Monad?
Correct.
I think that you could think of Monad as a vanguard environment for certain Ethereum improvements that our team thinks are needed and that over time can evolve to, you know, pioneer certain improvements that might then also find their way into Ethereum or other environments.
Okay. And are there certain types of applications that you think are best suited to Monad?
I think that any application that is aspiring or any developer that's building an application
aspiring to cross the chasm to mainstream adoption will benefit from some of the improvements in Monad.
Like if you think about an app that is number one on the iOS App Store, that means it has hundreds
of thousands or millions of daily active users. You know, it doesn't take a lot of math:
a million daily active users at 50 transactions per user per day is 50 million
transactions per day. That would already be over 500 TPS, which is far more than what Ethereum
L1 directly can offer right now. So it's just performance improvements that are really needed to
scale even to support a single application that's really achieving mainstream adoption.
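That back-of-the-envelope arithmetic works out as follows:

```python
daily_active_users = 1_000_000
txs_per_user_per_day = 50
seconds_per_day = 24 * 60 * 60  # 86,400

txs_per_day = daily_active_users * txs_per_user_per_day  # 50,000,000
required_tps = txs_per_day / seconds_per_day
print(round(required_tps))  # 579 TPS, in line with the "over 500 TPS" figure
```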
So Monad has had the largest fundraise of 2024, I believe, at least
at the time: $225 million. How do you plan to use that money? It's a lot of money. It definitely
puts our team in a great position to grow the team and to add folks that can really help push
the Monad effort forward and hopefully the space forward. We're growing not only by recruiting
crypto natives, but also bringing in just really talented low-level engineers who have not worked
in crypto before and kind of crypto-pilling them and enlisting them in this effort.
But, yeah, we're very well-resourced and just appreciative of all the support of our investors
and the show of confidence there.
There's also been a surge of interest from VCs in funding apps in the Monad ecosystem.
I saw that aPriori, Kintsu, and Kuru are all Monad apps that have received funding recently.
And I wondered, you know, what you thought was drawing attention already, because, I mean, you guys haven't even launched.
So your test net, I guess, is coming out later this year.
So what do you think is drawing attention already?
Yeah, I think it's a reflection of those individual teams' capabilities.
There's some really talented builders in the Monad ecosystem already.
And then also a reflection of the fact that the investing community is excited about what's possible with much higher throughput, much lower fees, full EVM compatibility,
and then also any other improvements that our team continues to push to help embrace and extend the EVM over a longer period of time.
I also noticed that Monad has a very strong community.
There's a lot of memes.
People seem to love your purple frog mascot.
And I wondered how Monad developed this strong community and meme game.
Yeah, I think it's a reflection of a couple of things.
First of all, it's just amazing contributions from individual members.
of the community, as well as individuals within our team. I think community growth, honestly,
ultimately is just driven by community members that individually feel that they have something to
add. And the only thing that our team really does is just to try to cultivate a fun environment
where people are welcomed and policing spam and sort of setting the tone in the right way.
So I'm really grateful to honestly, like, individual members of the community that have
stepped up a lot in terms of setting a really high bar, creating incredible memes, incredible artwork,
organizing initiatives.
So, for example, we have the Monad Run Club, which is a Strava group that a number of members
are just recording their runs in with the goal of getting healthier.
There's Mon Lingo, which is an effort to learn a new language together.
There's so many different community-led initiatives that ultimately, I think, are just really fun
and set the right tone: that it's not all about farming and all of the financial aspects
that you sometimes see in early projects, and instead is more about building friends,
building relationships, taking on more leadership opportunities, and building that social fabric
that's ultimately really, really important to a successful blockchain community.
All right. And so when do you expect to launch your testnet and then your mainnet?
Pretty soon, honestly.
The team is working really hard.
We don't have exact dates to share, but it's coming up soon.
And what about a Mon token?
I guess the name had been leaked in some documents, but then was retracted.
Any news on when we might see a token and what it would be used for?
Unfortunately, can't really comment on that right now.
All right.
What have I not asked you that you think we should mention?
Nothing really, honestly. I think we covered a lot. Okay. All right. Great. Well,
thank you so much. Thanks, Laura. Hi, everyone. I'm here with Jay Jog, co-founder of Sei Labs.
Welcome, Jay. Hello. Thanks for having me on. How did you come to launch Sei? Yeah, yeah.
So I guess our story with Sei started back in 2021. At that time, I was an engineering lead back at
Robinhood. And I was there when the entire GameStop saga happened, which I'm sure you must have seen.
I mean, as you can imagine, it was just a total shit show internally when that happened.
Like, I mean, everything was essentially on fire. There was all that negative public
kind of sentiment when they turned off buys on GameStop and several other stocks.
And as an insider, it just felt really bad because you have no idea what was actually
happening behind the scenes, right? Like, you have no context on anything, but it's essentially
your reputation on the line if something goes wrong. So after going through that experience,
my co-founder, Jeff and I, we initially wanted to build something like Robin Hood,
except build it in a decentralized way.
And that led us down the journey for what eventually became Say.
So initially we wanted to build an on-chain DEX, and we're like, wait a minute,
there's no ecosystem where you can actually build an on-chain orderbook-based exchange.
So let's go and build it as its own chain.
And that was like the initial inspiration for Sei, and then eventually it's become what it is now,
which is a fully general purpose, layer one blockchain.
Okay.
And your co-founder is Jeff Ding?
Jeff Feng, yeah.
Okay.
And so you were looking basically to try to create that sort of just like a trading environment,
but in a decentralized way.
Yeah, yeah.
I mean, initially, that's exactly what we wanted to do.
Since then, I mean, there's a lot of things we've learned in the past few years
since we started building.
I think the biggest learning that we've had,
the biggest thing that we've built up conviction around is that the EVM is here to stay.
And for any listeners that might not be as familiar,
the EVM is the Ethereum virtual machine.
It was initially introduced by Ethereum.
It's what's used to process transactions on Ethereum and several other chains.
And if you look at kind of developers out there in crypto right now,
around 80 to 90% of them are EVM developers.
So a huge majority of devs right now are EVM devs.
And if you actually like ask them to go to a new execution environment,
and I mean, we actually went through this with CosmWasm last year,
most of them are very strongly opposed to going from the EVM to a different execution environment.
And it's not just like technical reasons.
Like part of it is technical.
If you're writing code for a new execution environment, then it's easier to introduce bugs if you don't understand how things work.
And a bug in a smart contract can result in your entire project getting drained, your entire company essentially shutting down.
So it is scary from a technical side.
But even beyond that, it's more of like ideological reasons.
the EVM is not just a tech stack.
It is more fundamentally an ecosystem.
And there's all the tooling, the developer mindshare,
everything else that is there around the EVM,
it really makes it sticky.
So we don't think the EVM is going to be replaced by,
I mean, we don't think it's going to be replaced by the SVM or Move
or any other type of execution environment out there right now.
And the question from our side just became like,
what is the biggest thing that can be improved about the EVM right now?
And the, I mean, essentially what we saw is that the biggest limitation for the EVM is the lack of throughput.
Like when you're not able to process that many transactions, that results in a poor user experience because users have to pay a lot of money in gas fees.
And it also results in a more restricted developer experience.
So from the user experience side, if it's like 50 transactions per second, which is what Ethereum L1 and most L2s on top are currently supporting.
If you have 50 TPS, then let's say there's like 10,000 people that are each trying to submit a transaction, it suddenly becomes a very competitive
kind of atmosphere where you need to keep increasing the amount of money you're willing to pay
for gas fees. And I mean, we saw this earlier this year when gas prices like crossed 100 gwei.
It just becomes completely inaccessible for like most of the human population to be actually
doing stuff on chain. So that was like a pretty significant thing that stood out to us.
And in addition to that, if you're a developer and you need to build for 50 transactions per second,
it's really restrictive.
And it results in you having to make use of anti-patterns.
Like one example of an anti-pattern here would be the concept of an automated market maker, like an AMM.
AMMs don't exist in traditional finance.
The only reason they exist on chain is because they fit the limitations of Ethereum.
So, yeah, I mean, we think that a lack of throughput is the biggest thing that can be improved right now.
And the way that we're approaching that is by parallelizing the EVM.
So the way the EVM is built right now is it's single-threaded.
So if you have a bunch of transactions that are submitted,
they will all get processed one after the other.
This is like really, really simple to implement from the, I guess,
EVM core developer side.
But the downside is you're not taking advantage of modern hardware.
Like the laptops that we're recording this on,
the laptops that people might watch the video on later,
these are all multi-core machines and they're able to process multiple work streams
at the same time.
Like, they're able to be on the internet.
You're able to have like a browser extension open while you're also having like Spotify running in the background, for example.
So you're able to process multiple work streams at the same time.
And it's just like super inefficient to not be taking advantage of that hardware to be able to process multiple transactions at the same time.
So that was kind of the core insight that we had.
That's exactly why we decided to build, say, the way it is right now.
It's the first parallelized EVM.
And, yeah, I mean, it essentially results in us being able to get the best of both Ethereum
and Solana in a way: you're able to get the EVM and all the mindshare that is there with that
and all the tooling and all the developers, while also getting the kind of performance you see
with a chain like Solana.
Yeah.
So it feels like there's kind of a whole wave of these types of blockchains that are using
this parallelized processing in an EVM environment.
But as you kind of alluded to, Sei did start on Cosmos.
And you also started using the Rust language.
So tell us, you know, what initially led you to that, and then, you know, you launched your V2, which is this EVM.
So just tell us, like, about that journey.
Yeah.
So the initial, I was initially describing like the more application-specific chain that we wanted to build.
When we initially got started, like the most straightforward way to build an application-specific chain
was by making use of the Cosmos SDK.
It was undoubtedly the most battle-tested type of framework out there. And it made it much easier to get started.
And it made it much easier to get started.
Afterwards, we decided to become more and more general purpose, and the Cosmos SDK still continued to be
one of the best tools to get started. There were a lot of things that were inefficient about it, and we ended
up making a lot of different kind of improvements on that side. One part of this would be around Tendermint,
which was the consensus mechanism. We initially got started with more vanilla Tendermint, and then there were
a lot of things to be optimized around the way that block propagation works, around the way that
block processing works, and that ultimately led to Twin Turbo Consensus, which helped us be the fastest chain,
just point blank, out there. We're currently getting 400 millisecond finality, which is faster than
Solana, which has multiple seconds for blocks to be finalized and faster than essentially any other
chain out there right now. And can you explain the features of twin turbo consensus?
Yeah, yeah. So the way that something like Tendermint would work, so one part of it is around block
processing. In order to process a block, you would first have two rounds of voting. So there'd be a
pre-vote step of voting, then a pre-commit step of voting. And then afterwards, you start
to process the block. It's really weird, though, because you get the block before the pre-vote
steps. You get the block, then there's two rounds of voting, and then you start processing the
block. The insight that we had is that what if you just start concurrently processing that block
while the two rounds of voting are happening? And this is something that, surprisingly, no one had done before,
even though it's like a really straightforward idea once I kind of explained it to people.
But that was super helpful in terms of improving performance because you're able to just, whatever
time it takes to process those two rounds of voting, let's say it's like 300 milliseconds,
you're able to benefit from those 300 milliseconds and just process the block at that time.
So that's one improvement on twin turbo consensus.
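That concurrent-processing idea can be illustrated with a toy latency model (the millisecond figures below are made up for illustration; Sei's actual stage timings differ):

```python
# Toy latencies for one block (illustrative numbers only).
PREVOTE_MS = 150
PRECOMMIT_MS = 150
PROCESS_MS = 250

# Vanilla Tendermint-style: both voting rounds complete, then execution starts.
sequential_ms = PREVOTE_MS + PRECOMMIT_MS + PROCESS_MS  # 550

# Twin Turbo-style: execution starts as soon as the block arrives and runs
# concurrently with both voting rounds; the block is final once the slower
# of the two (voting vs. processing) has finished.
concurrent_ms = max(PREVOTE_MS + PRECOMMIT_MS, PROCESS_MS)  # 300

print(sequential_ms, concurrent_ms)
```

The saving is exactly the voting time that processing now hides behind, which matches the "whatever time it takes to process those two rounds of voting, you're able to benefit from" framing above.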
The other one is around block propagation, where when you're submitting a block,
the way it would work normally with Tendermint is you would have to send the entire block across the network,
even if every validator already had most or all of the transactions in their mempool locally.
So the insight that we had is what if we just send transaction hashes,
which are each 32 bytes.
So it's like around 10% or even less than that of what you would see
with the actual full transaction,
let's just send a list of ordered transaction hashes.
And then validators can look at their local mempool
and then reconstruct the block locally.
So this also led to a ton of performance improvements.
And this was just like one of the insights that we had,
like around Twin Turbo Consensus.
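A minimal sketch of that hash-based propagation idea (illustrative Python, not Sei's implementation; `tx_hash`, `propose`, and `reconstruct` are hypothetical names):

```python
import hashlib

def tx_hash(tx: bytes) -> bytes:
    """32-byte digest standing in for a transaction hash."""
    return hashlib.sha256(tx).digest()

def propose(block_txs):
    """Proposer broadcasts only the ordered list of hashes, not full transactions."""
    return [tx_hash(tx) for tx in block_txs]

def reconstruct(hash_list, mempool):
    """Validator rebuilds the block from its local mempool; returns the
    reconstructed transactions plus any hashes it still needs to fetch."""
    by_hash = {tx_hash(tx): tx for tx in mempool}
    block, missing = [], []
    for h in hash_list:
        if h in by_hash:
            block.append(by_hash[h])
        else:
            missing.append(h)
    return block, missing

txs = [b"tx-a", b"tx-b", b"tx-c"]
compact = propose(txs)  # 3 * 32 bytes instead of 3 full transactions
block, missing = reconstruct(compact, mempool=[b"tx-b", b"tx-a"])
# The validator recovers tx-a and tx-b locally and only fetches tx-c.
```

Since each hash is 32 bytes while a typical full transaction runs to hundreds of bytes, the proposer's broadcast shrinks to roughly 10% or less of the full block, as described above.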
There are other things that we did,
such as introducing parallelization that also helped improve performance
at the execution layer as well.
But yeah, I mean, this is what we got started with.
And I think your question was like,
what led us down the path of starting to focus on the EVM?
Yeah, so I think all these improvements that we just talked about around Twin Turbo Consensus,
around parallelization, those were fantastic.
The issue when we went live in August of last year was that we only supported CosmWasm smart
contracts.
So CosmWasm smart contracts are in Rust, and you can't really write them in Solidity.
So trying to take something from the EVM, like from Ethereum L1, and trying to deploy it on
Sei would just not work back in the day.
And people didn't want to learn how to rewrite smart contracts for CosmWasm.
And the overall community of CosmWasm developers out there was really, really small.
So we had this issue where we had built this incredible tech that was like the fastest chain out there,
but there wasn't really anyone that was using it. And we're like, okay, let's talk to developers and
understand why. And the kind of consistent feedback that people from the foundation were getting
is that developers want EVM support. And once we got enough of that feedback from the foundation,
we're like, okay, we'll go ahead and support the EVM.
And we'll try to understand, like, what actual secret sauce we can have to make it even better.
And that's where the idea of parallelizing the EVM came from.
That also led to us looking at, like, what are the tradeoffs over here?
And then adding in SeiDB to help with both the state storage side of things and then also the state access side of things, which I can go into as well.
But, yeah, overall, I would say it's been just fantastically received.
Like, we put out the announcement for V2 last November.
It ended up going live at the end of May.
So a little bit over two months ago is when it went live on Mainnet.
And currently we're the only parallelized EVM that is live on mainnet.
And since then, there's been a ton of things that we started seeing in the ecosystem.
The first is there's been a lot of new applications that have started to deploy.
One of the examples over here would be Yei Finance.
Yei Finance was able to get, I want to say, 60 to 70 mil of TVL right now,
and it contributed to Sei crossing 100 mil of TVL, which was pretty insane just to see that kind of growth happening
because it's only been a couple of months since most of that activity started to take place.
So there's been a huge spike in TVL.
There's a lot of new projects launching.
And now there's also a lot more investor interest in the ecosystem.
So like one project called Silo.
It's building LST plus MEV software on Sei.
They recently closed a round with a tier-1 VC, which, I mean, I won't spoil it for them.
They'll announce it from their own side pretty soon.
But yeah, I mean, it's just there's a huge amount of activity that started to happen.
And I think that's only because we've been able to support improving the
performance of the EVM like that.
Okay.
And so just so I understand, when you have this v1 and v2, is it like the same as with Uniswap or
whatever, where it's like a totally different environment, or are they connected?
Yeah.
So it's just one chain.
It went, it was just a network upgrade from V1 to V2.
So in that sense, it's that same blockchain.
And so did that create issues in terms of either fragmentation or security or
usability? It just feels like if you're making that type of change, it's almost like swapping out
the operating system or something. It feels like it could create issues. Yeah. So the way that we made it,
it was a purely additive change. So it wasn't getting rid of anything that already existed. So it made
that upgrade a lot easier. From the technical side, I would say the upgrade so far has been
extremely stable. There have been no major issues on Mainnet. So that's been fantastic. I would say the
biggest difference from the user standpoint is previously there were only CosmWasm smart contracts. Now there are
both CosmWasm and EVM smart contracts.
So each user has both an EVM address and kind of a Sei native address.
So I believe Sei is the only major chain out there right now that has something like that on mainnet.
So I think that's been like one thing that users have had to adjust to a little bit.
If you only use the EVM side, it's totally fine.
If you only use the Sei native side, it's totally fine.
If you want to do things between both of them, it becomes a different experience than what you had to deal with before.
But I mean, overall, it's fully interoperable.
So if you have an NFT on your EVM side, it's accessible from the Sei side and vice versa.
So from that standpoint, there's no fragmentation of liquidity or fragmentation of assets happening because of that.
You mentioned earlier that you could also talk about state storage and something else.
Oh, yeah, yeah.
So basically when you parallelize transactions, or when you have higher throughput, the side effect of that is that there's a lot more data being written to the blockchain.
And when you have more data getting written, this leads to the idea of state bloat, which, outside of trying to keep things simple,
is one of the biggest reasons that chains like Ethereum haven't necessarily introduced parallelization at the base layer yet.
That's because when there's more data being written, there's more state that needs to be stored.
This increases full node requirements if you're not thoughtful about how that state growth will be happening.
It also makes it more difficult for you to sync a new node.
So, like, if you want to start running a new node and you need to import, let's say, 10 terabytes
of data across the network, that's going to take a while to do.
And it could also end up becoming a pretty significant issue for the network if, like,
enough nodes go down, like enough validators go down and they're not able to recover in time.
Like, you could have issues around voting as well.
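A quick back-of-the-envelope calculation shows why importing that much state hurts; the 10 TB figure is from Jay's example, while the 100 MB/s sustained bandwidth is an assumed number for illustration:

```python
# Back-of-the-envelope: time to sync a 10 TB state over the network
state_bytes = 10 * 10**12          # 10 TB of state to import (from the example above)
bandwidth = 100 * 10**6            # assumed sustained download rate of 100 MB/s
seconds = state_bytes / bandwidth
print(f"{seconds / 3600:.1f} hours")  # ~27.8 hours before the node can even start participating
```

And that is the best case: real sync is usually slower because the node must also verify and replay what it downloads.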
So the way that we've approached this is with SeiDB.
And SeiDB has two core ideas.
The first is a memory-mapped IAVL tree.
So the way that we were previously doing things with v1 is we had the entire tree
stored on disk.
With v2, and to keep things at a higher level,
we basically split it up into different files
and removed a lot of metadata.
The result of that is that there's less data
that is being stored,
around 60% less data,
which is a huge win.
It allows you to scale much more quickly,
and it makes it much easier to run a full node,
and it kind of pushes those issues
back into the future,
when disk space will have grown a lot anyway.
So that's one side of it.
It reduces state storage.
And then the other side of it
is we have asynchronous writes to disk.
I guess for a simpler explanation around that:
a state root gets created in memory,
and you don't need to write it to disk right away.
And because of that, you're able to get a much faster, like a 287x, improvement in speed
when you're committing a block.
So that helped improve performance a lot.
And both of those two things I think have helped contribute to the type of both latency,
like the time to finality we're continuing to see and also the throughput of around
5,000 TPS that we observed in the internal load test cluster.
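The second SeiDB idea, computing the state root in memory while the disk write happens off the critical path, can be caricatured like this (a toy sketch; the class and its internals are invented for illustration and bear no resemblance to SeiDB's real memory-mapped IAVL tree):

```python
import hashlib
import queue
import threading

class AsyncCommitStore:
    """Sketch of an async-commit store: the state root is computed from the
    in-memory state so consensus can proceed immediately, while persistence
    happens on a background thread. Purely illustrative, not SeiDB's code."""

    def __init__(self):
        self.state: dict[str, str] = {}
        self._writes: queue.Queue = queue.Queue()
        self._log: list[bytes] = []  # stands in for the on-disk store
        threading.Thread(target=self._flush_loop, daemon=True).start()

    def commit_block(self, updates: dict[str, str]) -> str:
        self.state.update(updates)
        # State root derived from in-memory state -- no disk I/O on the hot path.
        digest = hashlib.sha256(repr(sorted(self.state.items())).encode())
        self._writes.put(dict(self.state))  # persisted later, off the critical path
        return digest.hexdigest()

    def _flush_loop(self):
        while True:
            snapshot = self._writes.get()
            self._log.append(repr(snapshot).encode())  # pretend this is a disk write
            self._writes.task_done()

store = AsyncCommitStore()
root = store.commit_block({"alice": "100", "bob": "50"})
store._writes.join()  # wait for the background flush (for the demo only)
```

The design choice being illustrated: block commit latency is bounded by an in-memory hash rather than by fsync, which is where the large speedup in commit time comes from.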
Okay. And so there was something that I thought when I was reading this, and you can correct me if I'm wrong. But from my understanding of SeiDB, it sort of felt like it was combining the functions of data availability as well as the long-term storage. And I wondered, as I'm sure you're seeing, there are a lot more chains that are going this more modular route. So why spin up your own solution as opposed to using something, you know, more modular?
Yeah, so I guess two thoughts. In this case, this was more focused on data storage. Data availability happens in a different way. Because we're a layer one and validators and full nodes need to process transactions to be able to generate a state root, data availability has to happen implicitly. You can't really have a data withholding attack if you're expecting people to publish the state root. So I think that was less of a concern with the approach we took. And then I guess to the core question of, like, why monolithic instead of a modular solution? It's actually really interesting because there's a
kind of, I guess, things you can see in Web 2 that are pretty similar to what we have happening
in Web 3. And in Web 2, there's this entire microservice architecture kind of push, where instead
of having like one monolithic service, you have a bunch of different microservices. And I saw this
play out for firsthand at Balmanhood. When you have a bunch of microservices, like let's say there
could be one service, like that one team manages, it might just be for trading. There could be
another one that's for KYC, another service for something else. It introduces a lot more complexity
to the overall system. Like with a monolithic service, it's just one.
thing that you need to be thoughtful about. When you have any modular approach, and in the case of
crypto, let's say there's a separate execution layer, separate settlement layer, separate DA layer,
when you have all these different systems that are all relying on each other, first of all,
it increases the overall complexity of the ecosystem. Secondly, it increases the blast radius for any
liveness issues that could happen. Like, if one of these parts goes down, then it could impact
the other parts, and then you won't be able to have finality happening as quickly. That's another downside.
And then the third thing is the end-to-end performance is just strictly worse
with a modular system, because there's communication that needs to happen between every single
part of the stack, in this case, the execution, settlement, and DA layers. And if you were to just
combine all of them into one chain, then there wouldn't be any of that communication complexity.
So the top end performance you're able to get is always strictly better with the monolithic
chain. And that's why we're like, okay, if we want to focus on top-end performance, we have to go down
the monolithic route. And I think there are other use cases where modular chains make more sense,
such as if you're trying to have your own dedicated block space, for example.
I think in that case, from a technical standpoint, a modular chain makes much more sense.
And since you are your own blockchain, how are you finding it trying to pitch developers
who either already are working in Ethereum but also benefit from the network effects there?
You know, how is it trying to pitch them on Sei?
Yeah. I would say that overall it's hard, but I would also say it's like 100x easier than it was
last year.
Because since introducing the EVM,
it's much more accessible for developers.
But yeah, with any L1, candidly,
I think it's much harder to get,
especially TVL and liquidity,
like getting more assets on the chain
and having people start trading more.
I think things like that are just like much,
much harder to do on a new L1.
And I think every L1 that gets started
has similar issues around that.
That's why it's been like,
honestly pretty incredible to see the kind of growth
that we've had happen since v2 went live.
Like, crossing 100 mil of TVL is a pretty big deal. And it just happened really fast after v2 went live. And, like,
I mean, I was mentioning a lot of the other indicators, like projects deploying, VC interest,
things like that. So I do think it's trending in a very positive direction. And it's still so
early for v2. Like, it's only been a couple of months, so I'm honestly quite optimistic
about the direction it's going in. One other thing that we've noticed: the Sei Foundation put out
creator grants. So there was this $10 million creator grant program with Gitcoin
that the Sei Foundation announced a few months ago.
And it's been interesting because there's like multiple rounds as part of this program.
And right now the second round is, I guess, in a week or two,
coming to a completion point.
And there's a lot of community interest to support projects that are building on
Sei right now.
So yeah, I mean, from that standpoint, I would say that it's actually been going much,
much better than you would expect for any alt L1 that is getting started.
Okay, yeah.
I see that the TVL is roughly 80 million right now.
But yeah, it had been about 100 million, I guess just a few weeks ago, actually.
Yeah, yeah.
I mean, TVL is a function of token price, like of token prices for different assets on the chain.
But if you normalize for token price, there's actually even an increase in the number of assets that are coming in.
So I think that's one of the things that is really trending in a strong direction.
Right, right.
Oh, okay.
I see that.
And so are there any particular types of applications that you think are best suited to Sei?
Yeah, yeah.
So whenever you have a high performance chain,
fundamentally any application that benefits from higher performance does well.
One example of this would be an order book based exchange.
So Bancor launched Carbon DeFi on Sei,
which is an order book based exchange where everything happens on chain.
And currently Bancor has launched Carbon on seven different chains,
including Ethereum L1, five other chains, and then Sei.
In terms of volume, Sei was able to get the same amount of volume in two months
as Ethereum was able to get in a year and a half.
And that makes sense if you think about it because it's just much easier to have trading activity happen on chain if gas fees are a lot lower.
So from that standpoint, it's been pretty incredible just to see the kind of traction Carbon DeFi has been able to get on Sei versus any other chain.
And I mean, there's also another thing that they launched, which was the, I think it's called like Arb Fastlane Bot or something.
But it's a bot that arbs between the different Carbon DeFi DEXes across multiple chains.
And 80% of the activity that bot was doing was on Sei.
And there are like seven different chains that it's on, including Ethereum L1.
So I think for anything related to trading, chains like Sei just fundamentally end up being much better.
And we're seeing this play out on mainnet on Sei right now.
So that's, I mean, pretty surreal to see.
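For readers unfamiliar with what "everything happens on chain" means for an order book exchange, here is a toy price-time-priority matcher; it is purely illustrative and is not Carbon DeFi's actual design:

```python
from dataclasses import dataclass, field
import heapq

@dataclass(order=True)
class Order:
    sort_key: float                    # ask price, or negated bid price for max-heap
    qty: int = field(compare=False)
    trader: str = field(compare=False)

class OrderBook:
    """Toy central limit order book: resting asks matched against incoming
    buys by price priority. Illustrative only."""
    def __init__(self):
        self.bids: list[Order] = []    # max-heap via negated price
        self.asks: list[Order] = []    # min-heap on price

    def limit_sell(self, price: float, qty: int, trader: str) -> None:
        heapq.heappush(self.asks, Order(price, qty, trader))

    def limit_buy(self, price: float, qty: int, trader: str) -> list[tuple]:
        fills = []
        # Match against the best (lowest-priced) asks at or below our limit.
        while qty and self.asks and self.asks[0].sort_key <= price:
            best = self.asks[0]
            take = min(qty, best.qty)
            fills.append((trader, best.trader, best.sort_key, take))
            best.qty -= take
            qty -= take
            if best.qty == 0:
                heapq.heappop(self.asks)
        if qty:  # whatever is left rests on the book as a bid
            heapq.heappush(self.bids, Order(-price, qty, trader))
        return fills

book = OrderBook()
book.limit_sell(101.0, 5, "maker")
fills = book.limit_buy(102.0, 3, "taker")
assert fills == [("taker", "maker", 101.0, 3)]
```

On a high-throughput, low-fee chain, every `limit_sell`, `limit_buy`, and cancel can be its own on-chain transaction, which is exactly what becomes impractical when gas fees are high.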
Outside of that, there's a social application that's going to be launching pretty soon.
It's in stealth and it's, I mean, there's a phenomenal team behind it.
But that specific application, like we've been supporting them.
And they have every single thing that's happening,
every single action that the user performs, actually happen on chain.
Like, if someone posts something, it happens on chain.
If there's a like, that happens on chain.
And this is pretty different than what you see with like most types of Web 3 social applications.
Most of them don't write things to the base layer at all.
They either have it be completely centralized, or they have a separate network
with greater trust assumptions that they've created where all that data gets stored.
So in the case of Sei, it's actually really incredible to see that playing out firsthand.
And I would say another use case would be games.
When games have a lot of token movements that are happening,
it just becomes much more accessible when gas prices are lower.
There are actually several games that will be launching on Sei in the next few months.
And in their case, they're not having every single action happen on chain.
Like, in their case, the 400 millisecond finality is good for token transfers.
But for the actual gameplay mechanics, like if you're moving around on the map,
you don't want that to take 400 milliseconds.
So they don't have stuff like that happening on chain.
But at least the token transfers end up being much, much more efficient happening on Sei.
All right.
Are there any particular topics or questions that we didn't cover that you would want my audience to know? I think this was pretty comprehensive,
so nothing else comes to mind for me right now, Laura. Okay, great. Well, thank you so much.
Perfect. Thank you. All right. So in a moment, we're going to talk about Eclipse Labs. But first,
a quick word from the sponsors who make the show possible. Mantle LSP is a permissionless and
non-custodial ETH liquid staking protocol deployed on Ethereum and governed by Mantle. mETH
serves as the value-accumulating receipt token of Mantle LSP and is now
the fourth largest ETH LST with $1.3 billion in TVL. In addition to native ETH PoS staking
yields, mETH holders can access various yield opportunities across dapps on Mantle Network, L2 integrations,
and more. mETH holders have previously received over $1 million in Icon token airdrops. With the
upcoming October 24 launch of COOK, the new governance token of Mantle LSP, mETH holders can start
accruing Powder rewards under Season 1, Methamorphosis, which will be convertible to COOK.
Visit meth.mantle.xyz/campaigns to learn more.
Polkadot is the original and largest layer-0 blockchain with over 2,000-plus developers,
and the anticipated Polkadot 2.0 upgrade will be a massive accelerator for the ecosystem,
upgrading the infrastructure with eight times higher transaction throughput and twice as fast
block times, perfectly tailored Coretime for the needs of every protocol,
trustless bridges internally and into Ethereum, Cosmos, Near, Binance Smart Chain,
and revised tokenomics and the implementation of a token burn to reduce inflation.
Perfect for GameFi and DeFi to build, grow, and scale with one of the most active crypto communities in the space.
Polkadot recently announced a partnership with Mythical Games,
bringing top games like NFL Rivals, with over 650,000 players and 43 million transactions,
to pave the way for GameFi in the Polkadot ecosystem.
Get your Web3 ideas to market fast with economics that work for you.
Think big, build bigger with Polkadot.
Join the community at polkadot.network/ecosystem/community.
Join 20,000 attendees for the world's largest crypto event, Token2049 Singapore, September 18th to 19th.
Anatoly from Solana, Kyle Samani, Emad Mostaque, and 300 others will hit the stage for an immersive festival experience ahead of the Formula One Grand Prix weekend.
Singapore will transform into a buzzing crypto hub from September 16th to 22nd,
with over 500 side events taking over the city.
This is an event you've never seen before,
with paddle courts to rock climbing monoliths and mixed martial arts shows,
as the global crypto community takes over the iconic Marina Bay Sands
to spark connections and define the future.
Visit token2049.com for 15% off tickets with the code UNCHAINED.
Link in the description.
Hi everyone. I'm here with Vijay Chetty, CEO of Eclipse Labs. Welcome, VJ. It's great to be here. Thanks for having me, Laura.
Eclipse is an Ethereum L2 running a Solana virtual machine or SVM. What is the vision for Eclipse? How did you come to launch it and what problem were you trying to solve?
Yeah, so the vision for Eclipse, simply put, is to build the highest-throughput L2 on Ethereum by an order of magnitude.
An L2 fundamentally is in the act of selling block space, and so it should be doing it as efficiently
as possible.
And so for us, that really means optimizing around the execution environment.
And so that led us very naturally to the SVM as the preferred VM on which to really launch
Eclipse and continue to optimize from a performance and throughput standpoint.
So if you look at the existing EVM L2 landscape, there's been a lot of great iteration there
from Optimism, Arbitrum, and other teams, but they've really been trying to also manage against
the decentralization aspect, right? Whereas as an L2, you can uniquely do things that an L1
is not able to because you don't need to worry about the shared security and validator set
component. It's extremely difficult to spin up a new validator network for an L1, but as an L2,
you can uniquely lean into what makes an L2 an L2, and that's optimizing the execution
piece. So as part of this whole modular wave of the last one to two years or so, we really saw
an opportunity to take the SVM, which offers native parallelization and an order of magnitude
higher throughput than what the EVM does, and use that to settle to Ethereum. So you're still
tapping into Ethereum users and assets and inheriting Ethereum's security, and then posting the
data blobs to Celestia. So as a result, we're able to offer an L2 that has the
highest throughput and lowest cost out there. And the motivation to build this was really from the
perspective, you know, for me personally, having been in the space for the last 10 years and launched
dYdX and UniswapX and these other kind of blue-chip DeFi protocols, you really started to see
a trend towards moving transaction processing off chain due to the inherent limitations of the
EVM in keeping things fully on chain. So I saw an opportunity with this
vision and idea of Eclipse to really enable being able to build institutional-grade DeFi and other
use cases, but keep them fully on chain, right? Because the alternative right now is what's
happened in the broader Ethereum landscape, where you've started to see fragmentation to a variety
of different RFQ and intents players. And that starts to introduce a lot more coordination problems.
It sort of re-centralizes because now you have teams building centralized backends and then
centralized solvers providing liquidity for transactions.
So this is really a compelling alternative to build one general purpose layer two that can
support the needs of 99% of apps out there.
Yeah, it's interesting because so this is part of a show where there's going to be a couple
of other projects that are doing parallel processing, but they are Monad and Sei.
And so, as you are probably aware, they're their own chains.
So this is the only one where it's an L2.
But it seems like, so both of them are EVM environments because of that need to like draw developers.
And so you're sort of going where they are.
And yet you're importing this SVM, which is really interesting.
So Eclipse is like all in on this modular kind of approach.
It's using Celestia for data availability, Ethereum for consensus and settlement.
It's also using RISC Zero for fraud proofs, which is like a ZK thing, as far as I understand.
So can you explain kind of that whole setup and why you chose this architecture?
Yeah.
So the motivation for that was to build what we saw as the best stack from a first
principles perspective, right?
So just starting from the top, at the execution level, and that's really the piece that
Eclipse is building and innovating on.
The SVM offers the highest throughput of any VM out there
currently, right? And there's a very well-established Rust developer base in Web3,
thanks to a lot of the great work that Solana's done. And Rust is also a very popular language
in the Web 2 world, right? So there's a big opportunity to onboard Web2 devs who are interested
in building really interesting applications that take advantage of that high throughput.
So that was the motivation in terms of selecting the SVM. And I can talk a bit more about the work
that we're doing to continue to accelerate that and reach even higher TPS than,
say, Solana can.
But that's the execution layer.
And then if you go down to the settlement and consensus layer of Ethereum,
Ethereum is still the largest user asset base in crypto by wide margin, right?
There's a lot of interesting assets and user types there from kind of standard tokens to RWAs,
to interesting consumer apps.
And so that's created kind of a very vibrant user base that could benefit a lot
from being able to take advantage of the developer experience and the UX that Solana offers, right?
The UX of using a Solana app or something built on the SVM is much smoother than the
UX of traditional Ethereum apps.
And that's slowly changing, but I think there's a lot to learn there.
Maybe the one catch is still around the block explorer piece, right?
where using a block explorer around Solana or the SVM is still definitely more complex,
but there's some interesting work that we're doing there with the Etherscan team.
And so that's at the settlement and consensus layer, right?
And being able to tap into Ethereum users and assets, but do that with Solana-type apps
or the experience of the SVM.
So that's a very compelling combination to us and one that has not been explored to date,
and I think there's a very interesting design space around that,
especially in tandem with some of the leading wallets out there.
So with the Celestia piece at the data availability layer,
we chose Celestia because they're the most tried and proven modular DA solution out there.
They have led the way for a lot of other players now,
and they also offer the best-in-class, lowest cost of publishing blobs of data.
And that's something that we've directly seen in production
since we launched our mainnet for developers on Tuesday of this week.
So that's been a great choice for us since Celestia is a great partner
and one that we think is really going to be at the forefront of modular DA.
And just for listeners, we're recording this a few weeks before this comes out.
So we're recording this on Friday, August 2nd.
So by the time this comes out, this will be three weeks later.
Yeah.
So hopefully we'll have a lot more blob data from Celestia at
that point to also talk about. And then the last piece is the fraud proof layer, right? So
ZK fraud proofs are an important part of creating a trust minimized layer two with Eclipse.
And ultimately, we think all kind of optimistic approaches will transition to ZK fraud proofs.
And so we want to be at the forefront of that. And so that's a piece that we haven't yet implemented
in our mainnet for developers, but that's something
that we'll start to work on over the next few months as well.
And one issue with modularity is that when you have that setup,
it can create a little bit of friction in the sense that all these different layers
have to communicate with each other.
So how do you deal with that to still maintain efficiency?
Yeah.
So there's a lot of internal and dev work our team did to really test out that environment
and work closely with Celestia to make sure that
we were publishing data as efficiently as possible. And even now, you know, I think there continue
to be learnings around that, right? For example, Eclipse saw the largest daily amount of data
published to Celestia in one day yesterday, or a couple days ago. And that's a stat
that we're very proud of, but it's also a function of the fact that there was a lot of other
metadata that was included in the compressed Solana block beyond just the list of transactions and the
votes, right? So I think continuing to reduce the amount of metadata that's being included,
we're trying to decrease the ratio of the amount of data that's published
versus the amount of activity over time, so that as activity increases, we make the publishing
of data more efficient. So I think that's an example of the design area that we continue to optimize
on. But we spent a lot of time ensuring that, you know, the Celestia integration that we built was best in
class, and, you know, now there are other players who are building versions of that as well. So it's sort of
this collective battle testing that we've done with Celestia and then other partners in the ecosystem
as well. And so something that I was curious about: so this is an SVM, but it's, you know,
on this layer two on Ethereum. So how do people transact in SOL on Eclipse? Do you need a bridge?
And if so, is it like a wrapped SOL token? And is it decentralized or centralized?
How does that part work?
Yeah, exactly.
So you need a bridge.
Eclipse will be operating a canonical bridge,
specifically for purposes of moving ETH onto Eclipse.
But beyond that, we've partnered with other players like Hyperlane
and other bridge partners that we'll soon announce to really help facilitate the bridging
of assets from other L1s and L2s.
But you're right in that SOL is basically a wrapped version of SOL
that's bridged.
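The canonical lock-and-mint pattern Vijay alludes to can be sketched as follows; this is a simplified single-object model for illustration, and Eclipse's actual bridge contracts (e.g. via Hyperlane) differ in the details:

```python
class LockAndMintBridge:
    """Sketch of a lock-and-mint bridge: SOL is locked in a vault on the
    origin chain, and an equal amount of wrapped SOL is minted on the L2.
    Withdrawals burn the wrapped token and release the locked collateral.
    Illustrative only -- not Eclipse's real contract logic."""

    def __init__(self):
        self.vault = 0                       # SOL locked on the origin chain
        self.wrapped: dict[str, int] = {}    # wrapped-SOL balances on the L2

    def deposit(self, user: str, amount: int) -> None:
        self.vault += amount                                     # lock on origin
        self.wrapped[user] = self.wrapped.get(user, 0) + amount  # mint on L2

    def withdraw(self, user: str, amount: int) -> None:
        if self.wrapped.get(user, 0) < amount:
            raise ValueError("insufficient wrapped balance")
        self.wrapped[user] -= amount  # burn on L2
        self.vault -= amount          # release from the vault
        # Invariant: wrapped supply always equals locked collateral
        assert self.vault == sum(self.wrapped.values())

bridge = LockAndMintBridge()
bridge.deposit("alice", 10)
bridge.withdraw("alice", 4)
assert bridge.vault == 6 and bridge.wrapped["alice"] == 6
```

The trust model of the real bridge comes down to who can release the vault: a canonical bridge enforced by the rollup's proofs is more trust-minimized than one operated by a multisig.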
Okay. And so you recently came in as CEO after the former CEO, Neel Somani, resigned after
multiple people accused him of sexually harassing them. This actually all unfolded on Twitter, or
X. And CoinDesk also later reported that he had pledged a stake in Eclipse tokens worth
$13 million to Niraj Pant, who formerly worked at Polychain Capital, and it was
part of a deal to actually obtain the funding from Polychain.
So since you've come in, how have you been trying to reset the foundation at Eclipse
post Neel Somani? Yeah. So I think first and foremost, I've been focused on the future and the
product and the roadmap for what we're building, right? And ensuring that we have a very strong
team in place that's moving towards that in a united fashion. So 99% of my focus has really been on
the product, the team, and forward-facing work, and that's mostly what I think about. You know, I won't
speak to Neel or his actions. You know, the veracity of these allegations, it's not my place
to speak about those, but I definitely support the voices that have spoken out and their right to do
that. But I think for me, it's really been focused purely on the company and the product
that we're building, right? Ultimately, these sagas that unfolded on Twitter are largely a personal
matter for Neil. And so he stepped away from the company and the team has continued to focus
on the good work that we have been focused on. And so what's your pitch to developers who are
trying to figure out where to build? Because obviously we've seen there's a huge community in Ethereum.
There is a burgeoning community of developers on Solana. And so you're
sort of this hybrid. So, you know, what is your pitch to the two different groups?
Yeah. So it's interesting. You know, when Eclipse first came out, a lot of
people didn't know how to think about it, and it sort of elicited these very kind of passionate,
tribalistic reactions. But I think now it's sort of been a lot more refined and people appreciate it
more. But there's sort of a couple of lenses through which to view it. Right. I think first is really,
from the lens of existing established
Solana developers, Eclipse offers them
a way to easily port over their applications
to tap into Ethereum users and Ethereum assets, right?
So this is a growth story for protocols deploying
like Orca, Mango, Solend, and so on.
And it's really about tapping into more users and assets
and bringing Solana-grade application experiences
to those Ethereum users.
And then, of course, if you look at the Ethereum developer set, like, I think Solidity and Rust
have always been kind of these separate guilds, right? You have Rust devs in Solana land,
Solidity devs in Ethereum and EVM land, and there hasn't really been a ton of crossover until
very recently, right? I think part of that was tribalistic. Part of that was just the ease of
developers working with these languages, right? Solidity and Rust are very different. The smart
contract environment on both is very different. So it's only
recently that Ethereum devs started exploring Rust and Solana and vice versa, right? And you're starting
to see tools like Reth to enable developing with Rust on Ethereum, for example. So I think they're
starting to become more crossover there. But to me, the really interesting opportunity is that
a lot of these Ethereum developers have moved a lot of their DeFi
transaction processes off chain or to totally siloed app chains as a result of the limitations
of Ethereum. So if you look at a lot of what Ethereum developers have done with their app experiences,
they've moved transaction processes off-chain due to the limitations of Ethereum, right? So you
have these RFQ and intent systems where a lot of the actual transaction flow and transaction data
is moved off-chain or to proprietary backends as a result of the limitations of scalability.
And then you've also had some developers, even dYdX, where I previously was, moving towards more of an
app chain or app-specific rollup model in order to deal with these limitations of scalability
on Ethereum. But the end result of that is you have these very fragmented experiences,
and it's really hard to get to a unified pool of liquidity or to realize kind of the benefits
of transparency and composability, which is a part of the on-chain vision in the first place for
crypto, right? So I think Eclipse offers an interesting solution for those developers where they can
deploy institutional grade clobbs. They can deploy seamless RQ systems, but have that be entirely
on chain. And I'm sorry, just explain what a clob is? A club is a central limit order book.
Oh, okay. Yeah. So dYdX is an example, right? And so I think being able to bring
these kinds of institutional-grade, tier-one DeFi app experiences to users, but keeping them fully
on-chain, is what Eclipse helps enable. And so I think we run counter to a lot of the recent
development where these intent and RFQ systems have moved off-chain.
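Vijay defines a CLOB as a central limit order book. As a rough illustration of the matching logic such an exchange runs, here is a toy Python sketch using price-time priority; the class name, data layout, and matching rules are illustrative assumptions, not Eclipse's or dYdX's actual implementation.

```python
import heapq
import itertools

class Clob:
    """Toy central limit order book with price-time priority matching.
    Illustrative only -- real on-chain order books handle partial fills,
    cancellations, fees, and settlement far more carefully."""

    def __init__(self):
        self.bids = []            # max-heap via negated price: (-price, seq, qty)
        self.asks = []            # min-heap: (price, seq, qty)
        self.seq = itertools.count()  # arrival order breaks price ties
        self.trades = []          # executed fills as (price, qty)

    def submit(self, side, price, qty):
        if side == "buy":
            # Match against resting asks priced at or below our limit.
            while qty and self.asks and self.asks[0][0] <= price:
                ask_px, s, ask_qty = heapq.heappop(self.asks)
                fill = min(qty, ask_qty)
                self.trades.append((ask_px, fill))
                qty -= fill
                if ask_qty > fill:  # partially filled ask rests again
                    heapq.heappush(self.asks, (ask_px, s, ask_qty - fill))
            if qty:  # unfilled remainder rests on the book
                heapq.heappush(self.bids, (-price, next(self.seq), qty))
        else:
            # Match against resting bids priced at or above our limit.
            while qty and self.bids and -self.bids[0][0] >= price:
                neg_px, s, bid_qty = heapq.heappop(self.bids)
                fill = min(qty, bid_qty)
                self.trades.append((-neg_px, fill))
                qty -= fill
                if bid_qty > fill:
                    heapq.heappush(self.bids, (neg_px, s, bid_qty - fill))
            if qty:
                heapq.heappush(self.asks, (price, next(self.seq), qty))

book = Clob()
book.submit("sell", 101.0, 5)
book.submit("sell", 100.0, 5)
book.submit("buy", 100.5, 7)   # fills 5 @ 100.0; remaining 2 rest as a bid
```

The contrast Vijay draws is that RFQ and intent systems move this matching step to off-chain or proprietary backends, whereas a chain with enough throughput can run the whole loop on-chain.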
Okay, so by the time the show comes out, it will be after your recently announced hackathon,
which runs from August 7th to 21st.
So I know I'm going to ask you to prognosticate, but since this will come out at that period,
what are some things you would like to have seen by the time the hackathon finishes?
Yeah, so we've got a few tracks.
So there's a DeFi track, a gaming track, consumer, meme coins, and infrastructure.
So I think I'm definitely excited to see some institutional-grade DeFi experiences come out.
So a central limit order book and an RFQ system, I think, would be great to see on the DeFi side,
especially one that has a really good, accessible UX for retail users.
And then, you know, obviously with meme coins, there's been a ton of activity on Solana to date.
I think there are some really interesting things to do with kind of fair launches and community ownership
that can come out of that and be applied to kind of venture-backed crypto companies as well.
So excited for some of the stuff that our community developers have been talking about there.
And then with consumer, I think there's a huge opportunity to take advantage of the throughput
of Eclipse to build best-in-class consumer experiences.
And so this is an area that we're really focusing on because of that high throughput.
If you look at mainstream kind of Web 2 social apps out there, they have an insane amount
of transactions per second, right?
Or interactions.
So being able to power all of those with Eclipse is, I think, a very interesting
design space, and we're excited to see what comes out.
And I think there have been some strong recent examples of this, right, with Friend.tech,
Farcaster, and Lens, which have really set kind of, I think, early precedents for what's possible.
But building something that can tap into a much more mainstream audience is something I'm
excited to see from the consumer track. And then lastly, in terms of infrastructure, if you've kept up
with the DePIN space, right, decentralized physical infrastructure, I think there's a lot of
interesting stuff going on there right now with decentralized sensor networks and decentralized AI
compute. And obviously it's important to separate kind of substance from hype there,
but I think there are really interesting design spaces around being able to power these large-scale
sensor and DePIN networks with Eclipse.
Okay. And so when do you think you'll have your mainnet launch?
Yeah. So we recently launched mainnet for developers, right? So what that means is we have not
been actively encouraging front ends or other tools that retail users can use. And it's very
much not something that we want to actively facilitate yet. Of course, there's been some
organic activity. But our focus for right now is largely on helping developers and infra partners
deploy their apps into the mainnet environment. We plan to do a public mainnet launch around
September roughly is what we're targeting. And so when we do that, we'll be launching for retail
users. There'll be a variety of different interfaces and then quests and kind of a gamified experience
for users to go through all those interfaces. So that's our rough timing right now. And in the
meantime, we're doing a lot of developer education, answering questions on Discord and just
kind of hand-holding anybody who's interested in building.
Okay. And you have said that the initial launch will be, quote, full of training wheels.
And by that, what you meant was a centralized bridge, no functioning fraud proofs, no
withdrawals. So can you just talk a little bit about what your roadmap will be after
launch and the priority in, you know, getting some of these training wheels removed?
Yeah. So before we do the public mainnet launch in September, we will
allow for withdrawals. Of course, that's table stakes. And then beyond that, once we launch,
we plan to transition to a stage one rollup over the subsequent months, going into the end of the year.
And so that will include launching permissionless fraud proofs, a trust-minimized bridge,
and forced inclusion. So that will really get us to that stage one milestone. And then over the
course of next year, we'll work on becoming a stage two rollup, so becoming fully permissionless.
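The stage one and stage two milestones Vijay describes follow the rollup-stage framework popularized by L2Beat. A minimal sketch of that classification, with criteria simplified and assumed from the features he lists (permissionless fraud proofs, trust-minimized bridge, forced inclusion, then full permissionlessness), could look like this:

```python
from dataclasses import dataclass

@dataclass
class RollupFeatures:
    """Feature flags loosely modeled on rollup stage criteria (simplified;
    the real L2Beat framework has more detailed requirements)."""
    permissionless_fraud_proofs: bool = False
    trust_minimized_bridge: bool = False
    forced_inclusion: bool = False
    fully_permissionless: bool = False  # e.g. no privileged override remains

def rollup_stage(f: RollupFeatures) -> int:
    """Classify a rollup as stage 0, 1, or 2 from its feature flags."""
    stage1 = (f.permissionless_fraud_proofs
              and f.trust_minimized_bridge
              and f.forced_inclusion)
    if stage1 and f.fully_permissionless:
        return 2
    if stage1:
        return 1
    return 0

# The described trajectory: a "training wheels" launch, then stage one
# by end of year, then stage two over the following year.
launch = RollupFeatures()                  # centralized bridge, no fraud proofs
end_of_year = RollupFeatures(True, True, True)
```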
And in addition, we'll progressively open source parts of our stack.
So for right now, there are a few key pieces that we've launched with the Apache 2.0 license,
so permissive, allowing for open usage and repurposing.
And then there are some pieces that will launch source-available, and our goal is to
transition all of that to open source over time as well.
And then governance will play a role with Eclipse as well. So decentralization is something
that's in our future. And so the Eclipse Foundation is
working through timelines around that. And then the last piece I'll mention in terms of roadmap
is how Eclipse can continue to remain competitive and innovate over the longer arc of time.
So right now, we've talked a lot about the early opportunity in terms of bringing an
order of magnitude higher throughput to Ethereum users and assets. Beyond that, there's an
opportunity to be the first to implement Firedancer, which is an independent validator client that
the Jump team is building for Solana. And Agave is kind of another solution there.
So bringing hardware-accelerated throughput to Ethereum is something that we're very excited about.
Specifically, if you maintain a more minimized sequencer set, you can inherit hardware
improvements through chip technology. So, for example, Firedancer is using FPGAs;
there's an opportunity to also use ASICs to accelerate throughput even further.
So Eclipse plans to really move in that direction of implementing Firedancer and doing a lot
of research work around hardware acceleration and hardware-enabled throughput, because we're
maintaining that smaller sequencer set.
So that's an area of research that we're very excited about.
And the eventual goal is to bring throughput on the order of hundreds of thousands of
transactions per second so that we can really power Web2-scale apps in
Web3. Okay. And just to ask you about the comment that you made about decentralization
and the foundation moving toward that, I presume that means that you'll be launching a token.
So I can't speak to specifics of that at this time, but governance and decentralization are
definitely part of the roadmap for Eclipse. Okay. And then last quick question, are there any
particular types of applications that you feel like Eclipse is best suited for? Yeah. So in particular,
a few things I'm very excited about are on-chain central limit order books and RFQ systems on the DeFi side,
mainstream consumer apps that can reach millions of users and potentially have some gamification embedded in them.
And then lastly, DePIN and these large-scale sensor networks that have distributed networks of mobile sensors or temperature sensors or AI capacity as well.
All right.
Well, thank you so much for coming on Unchained.
Thanks for having me, Laura.
Thanks so much for joining us today.
To learn more about Next Generation Parallelized EVMs, check out the show notes for this episode.
Unchained is produced by me, Laura Shin, with help from Matt Pilchard, Juan Aranovich,
Megan Galvis, Pam Majmudar, and Mark Murdock.
Thanks for listening.
Unchained is now a part of the CoinDesk Podcast Network.
For the latest in digital assets, check out Markets Daily, five days a week with host
Noelle Acheson. Follow the CoinDesk Podcast Network for some of the best shows in crypto.
