Epicenter - Learn about Crypto, Blockchain, Ethereum, Bitcoin and Distributed Technologies - Monad: The EVM-Compatible 10,000 TPS L1 Blockchain - Keone Hon
Episode Date: December 18, 2024

The status quo for developers choosing an ecosystem for their blockchain usually revolves around trade-offs: do they go for Ethereum's network effect, liquidity and decentralisation, or sacrifice some features in favour of a higher throughput? Monad aims to combine the best of both worlds, while not being limited by excessive hardware requirements. Monad built an EVM-compatible L1 from the ground up, completely rethinking execution and consensus, in order to achieve the infamous 10,000 TPS. This extreme scalability is made possible through Monad's optimistic parallel execution, which is asynchronous from consensus. The latter has also been optimized in order to achieve single-slot finality. Monad's proprietary database architecture allows for state to be stored on SSDs instead of RAM, which ensures that consumer-grade hardware can run a Monad node, further increasing decentralisation.

Topics covered in this episode:
- Keone's background
- TradFi vs. DeFi and how Monad was founded
- EVM's network effect vs. other VMs
- How Monad aims to improve EVM's performance
- MonadDB
- MonadBFT - new consensus mechanism
- Asynchronous execution
- MEV and proposer-builder separation (PBS)
- Monad's throughput
- Further scaling and limitations
- Alternatives & trade-offs
- DevEx
- Community & ecosystem development

Episode links:
- Keone Hon on X
- Monad on X

Sponsors:
- Gnosis: Gnosis builds decentralized infrastructure for the Ethereum ecosystem, since 2015. This year marks the launch of Gnosis Pay, the world's first Decentralized Payment Network. Get started today at gnosis.io
- Chorus One: Chorus One is one of the largest node operators worldwide, supporting more than 100,000 delegators across 45 networks. The recently launched OPUS allows staking up to 8,000 ETH in a single transaction. Enjoy the highest yields and institutional grade security at chorus.one

This episode is hosted by Brian Fabian Crain.
Transcript
That was really the core idea was to make the EVM execution a lot more performant
and then build a consensus mechanism that could keep up with that really high execution throughput
in order to maintain a really high degree of decentralization and then full decentralized block production.
I think Solana is processing somewhere between 2,000 and 3,000 transactions per second,
you know, whereas Monad is supporting over 10,000 transactions per second.
And then also Solana is using relatively
high hardware requirements. I believe the requirement is 256 gigs of RAM right now, whereas
Monad requires nodes to have 32 gigs of RAM. Welcome to Epicenter, the show which talks about
the technologies, projects and people driving decentralization and the blockchain revolution.
I'm Brian Crain and today I'm speaking with Keone Hon, who is the co-founder and CEO at Monad Labs.
Monad is one of the most ambitious and interesting new layer ones coming up,
expected to launch next year.
So I'm really excited to get into the details of that with Keone.
Now, before we get into it, we just want to share a few things from our sponsors this week.
If you're looking to stake your crypto with confidence, look no further than Chorus One.
More than 150,000 delegators, including institutions like BitGo, Pantera Capital and Ledger, trust Chorus One with their assets.
They support over 50 blockchains and are leaders in governance on networks like Cosmos, ensuring your stake is responsibly managed.
Thanks to their advanced MEV research, you can also enjoy the highest staking rewards.
You can stake directly from your preferred wallet, set up a white label node,
restake your assets on EigenLayer or Symbiotic, or use their SDK for multi-chain staking in your app.
Learn more at chorus.one and start staking today.
This episode is proudly brought to you by Gnosis, a collective dedicated to advancing a decentralized future.
Gnosis leads innovation with Circles, Gnosis Pay and Metri, reshaping open banking and money.
With Hashi and Gnosis VPN, they're building a more resilient, privacy-focused internet.
If you're looking for an L1 to launch your project, Gnosis Chain offers the same developer experience
as Ethereum with lower transaction fees.
It's supported by over 200,000 validators,
making Gnosis Chain a reliable and credibly neutral foundation for your applications.
GnosisDAO drives Gnosis governance, where every voice matters.
Join the Gnosis community in the GnosisDAO forum today.
Deploy on the EVM-compatible Gnosis Chain or secure the network
with just one GNO and affordable hardware.
Start your decentralization journey today at gnosis.io.
Cool.
Well, thanks so much for coming on, Keone.
It's really great to have you on.
Yeah, thank you for having me, Brian.
So I like to start by kind of asking people about how their crypto journey began.
Like, how did you first become interested in crypto?
Yeah.
So I'm 35.
I've been working for...
like 13, 14 years at this point. The first 10 years of my career were spent in high-frequency trading, working as a quant. So I traded in a number of traditional markets, mostly on the futures side. These are really high volume, like very liquid markets, like S&P 500 futures or 10-year treasury note futures or crude oil futures, building performant trading systems that trade
in these really efficient markets, generating very, very small statistical profits over a long
period of time. And over some period of time, our team started to trade crypto, initially
just sort of like without a lot of understanding about the underlying assets that we were trading.
But crypto is really interesting from a trading perspective because there's so many different
assets. Like there's so many different coins themselves, but then a lot of different perpetual
futures and deliverable futures and many different exchanges. So there's a lot of interesting
correlation structure across the space. And that was initially what got me and my trading team
at jump trading involved in crypto. But then at the same time, I was also just like getting really
into crypto Twitter and enjoying all the narratives, all the memes, all the characters in 2020 and 2021.
So that was sort of the introduction of the space. And then my team ended up merging into the crypto team at jump trading in mid-2020.
And started working on Solana DeFi for a little bit of time. At that point, I was fully professionally in crypto, prior to starting Monad at the beginning of '22.
Right, so you basically first started trading, you know, just on centralized venues and then also started doing more like on-chain stuff.
Yeah, we were, from a trading perspective, yeah, I was more focused on the centralized side.
I mean, in a personal capacity, I was trying out different dexes and, yeah, buying NFTs and, yeah, getting rugged
on meme coins of 2020, 2021, aside from Dogecoin, which did really well.
But yeah, it was mostly in the professional capacity more on the centralized side.
I'm curious, sort of from all the knowledge you had about, you know, how
the traditional markets work. And then when you kind of came to crypto, what were the things
that, you know, just seemed most weird to you or, like, most surprising?
I think two things that come to mind.
The first is just the general inefficiency of the markets and the typical spreads that you end up paying as a retail trader.
Like in the traditional markets, of course, depends on the liquidity of the asset.
But for, you know, most normal futures or most normal equities, people will end up paying single-digit basis points in spread.
and in slippage, like spread and slippage combined.
Like their execution price is never more than, you know, like one or two cents different from whatever the midpoint of the market is.
And this is for equity that's trading at $100 or $200 or more.
So single digit basis points.
And then if you go to the DeFi space, it's, you know, the default is like a 30-bip fee.
And then in addition, like when you trade, there's like some impact.
And then when you pay some slippage, and then you end up getting sandwich attacked, you know, it's just very common for people to end up paying like 50 basis points or 1% or 2% in slippage.
So it's just, you know, like two orders of magnitude more than you would in the centralized space.
So that's the first thing.
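Those two orders of magnitude are easy to sanity-check with a toy calculation. All fee and slippage figures below are illustrative, echoing the rough numbers from the conversation rather than measurements:

```python
# Illustrative round-trip trading cost comparison, in basis points.
# Numbers are rough examples, not measurements.

def cost_in_bps(fee_bps: float, slippage_bps: float) -> float:
    """Total execution cost in basis points (1 bp = 0.01%)."""
    return fee_bps + slippage_bps

# Traditional markets: single-digit bps of spread + slippage combined.
tradfi = cost_in_bps(fee_bps=1, slippage_bps=1)      # 2 bps

# DeFi: a default 30 bp pool fee, plus impact, slippage and the
# occasional sandwich, often totalling 1-2% all-in.
defi = cost_in_bps(fee_bps=30, slippage_bps=170)     # 200 bps = 2%

print(f"TradFi ~{tradfi} bps, DeFi ~{defi} bps, ratio {defi / tradfi:.0f}x")
```

With these example inputs the ratio lands at 100x, the "two orders of magnitude" Keone describes.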
And then the second thing is just the fact that when a transaction is submitted,
It's in a pending state in most blockchains.
Therefore, it's subject to the actual discretion of the block producer in terms of the ordering,
which then has an effect on the execution price as well.
And then tell me, what was the origin story of Munat?
Is that what you did after Jump?
Yeah.
So I met James Hunsaker,
who's co-founder and CTO here at Monad Labs, in 2014,
when we were both at Jump on the same trading team,
and we've been working together ever since then.
We both left Jump at the very beginning of 2022
and then started Monad shortly after that
along with the third co-founder.
And what was sort of the vision,
or, like, you know, the thing that caused you to say,
okay, this is the thing I want to work on?
Yeah, I think with any startup, you really need an idea
and an idea of also how that idea fits into the broader landscape of the space that you're working in
and a clear problem that it's solving.
For us, prior to starting Monad when we were at Jump Trading for about six months,
We were mostly working on Solana DeFi, and at that time in 2021, we could see that there were a lot of advantages to building on Solana because, you know, Solana was offering like 500 to 1,000 real TPS of throughput.
So much, much more throughput than Ethereum.
Ethereum was about 10 TPS.
And also much more throughput than any of the roll-ups, which were, you know, also in the like 10 to 30 TPS range.
much more throughput than other EVM layer ones,
which are also in the like 10 to 100 TPS of throughput range.
So there are a lot of advantages to Solana,
but at the same time,
builders building on Solana had to build
with a completely different bytecode standard.
They couldn't reuse any of the work that they'd done
if they'd built for the EVM already.
They couldn't use any of the really rich array of tooling
and libraries and so on.
And we just realized that we could give developers
the best of both worlds and give them both performance and portability,
focusing very heavily on optimizing all parts of the EVM stack.
And that was really the core idea was to make the EVM execution a lot more performant
and then build a consensus mechanism that could keep up with that really high execution throughput
in order to maintain a really high degree of decentralization
and then full decentralized block production.
So I'm curious because I feel like this,
EVM discussion, you know, it's been a very long-standing discussion where, you know, some people
will be like, oh, EVM is fine. And then a lot of people be like, oh, it's kind of like this,
you know, odd thing that was created early on. And just because it was the first, it, you know,
it has so much, so much adoption. Do you feel like, is the, like, how do you feel like
comparing the EVM with other VMs? Do you feel like using the EVM is,
The biggest advantage is just existing developer ecosystem, tooling, contracts exist,
or are there other pros and cons versus other VMs?
Right.
I think that, well, so EVM is the, just to state it very explicitly,
EVM is the low-level bytecode standard, but then typically developers are building in Solidity
or occasionally in Vyper or Huff or other front-end languages,
and that high-level language gets compiled down to EVM.
But it is really true that almost all capital on chain is in the EVM right now.
Like over 90% of all TVL on chain is in EVM DAPs.
Additionally, there's a ton of existing libraries and tooling.
Almost all the applied cryptography research is being done in the context of the EVM as well.
People don't really think about that part, but, you know, very much on
the research side and a lot of zero knowledge research is being done ultimately to interface with
the EVM bytecode standard as well. So it's really like for developers, they would probably
prefer to build for the EVM given all these network effects and the fact that by building for the
EVM, they're not tied to like a one ecosystem or a very small,
subset of ecosystems, they can really deploy their work almost anywhere in the future.
So it's really just future-proofing the work that they've done in addition to having access
to all of the existing libraries and tooling.
Do you think this is something that like kind of network effect is going to just continue
compounding and grow bigger in the future?
Or you feel other things like, you know, Solana VM or like Move VM or any of these others
have a real shot at some point.
taking over?
I think the future is very fluid.
It really depends on the path of
least resistance for developers.
And our team thinks that the work that we're doing
to make the EVM a lot more performant,
so that then developers are really not forced to choose
between performance and portability
will have an impact on how people think about
what VM to build for.
You're right that,
in the current regime at this very moment,
developers might be moving over to Solana
because they feel that that's where they'll get the best performance
and also because there's a lot of excitement
around the Solana ecosystem right now.
But that all just can change with the introduction of new technology.
So when it comes to scaling,
so removing all the bottlenecks and making sort of the EVM very performant,
What are the things that you guys did to achieve that?
Yeah, there are at least four different major areas of optimization
or four different new architectures that have been introduced
in addition to just more generally optimizing.
First of all, Monad is completely built from scratch.
So all parts of the stack are built from scratch for performance in C++ and Rust.
and then we've introduced several new architectural improvements,
which I can describe sort of briefly.
I think the main way to think about it is that there is execution improvements,
there's consensus improvements,
and then there's improvements in how consensus and execution interact with each other.
So the first two improvements that I want to mention are both related to execution.
The third one is related to consensus,
and the fourth one is related to how consensus and execution interact. So on the execution side, the two
things that we've, the two major things that we've done are
introducing a new database and introducing
optimistic parallel execution. And the reason why
these two things are both needed in order to make execution a lot
more performant is because, well, first of all, to talk a little
bit about the job of execution. So like in Ethereum,
there's a block. It has a whole bunch of
individual transactions that are supposed to be run sequentially in order to get to the end
result. And initially, you might think that there's no way to parallel process because the true
state of the world is the state of the world, assuming that all those transactions are run one
after another. So for example, if I start with 200 USDC in my account, and then the first transaction is
me sending 150 USDC to you. And then the second transaction is me sending 100 USDC to my brother.
So, you know, the first transaction when it gets executed, it'll take my balance from 200 to 50.
And then the second transaction when it runs will take my balance from 50 to still 50 because that
transaction will fail because it was trying to send 100, but I don't have 100 anymore.
And so I think this is an example that shows you that you kind of do have to execute the transactions, at least in some sense, serially, because you need to have done that first transaction where I was sending you 150 before we can actually get the correct result of the second transaction.
If we ran them in parallel, just totally naively, we would think that both of them succeeded.
So there's a notion in which sequential execution is needed.
But in Monad, we introduce optimistic parallel execution to do a bunch of work in parallel,
but then do the right amount of bookkeeping so that we're keeping track of inputs and outputs,
i.e. storage slots, both the values that were read in from a storage slot and then the values
that are written to a storage slot. And although we do a bunch of work in parallel, we do bookkeeping
and then commit those pending results in the original serial order and re-execute any unexpected
results where the inputs have since changed.
And so in that particular example, we do those two pieces in parallel.
We would get pending results for both of those two transactions.
And then the second pending result, we would then invalidate and re-execute because
we would realize that it was done wrongly the first time.
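The bookkeeping Keone describes can be modeled in a few lines. This is a heavily simplified toy (Monad's actual engine is in C++ and Rust, and tracks storage slots, not account dictionaries; all names here are hypothetical), but it shows the two phases: execute everything in parallel against the committed snapshot, then commit in the original serial order, re-executing any transaction whose inputs have since changed:

```python
from concurrent.futures import ThreadPoolExecutor

# Committed world state: account -> balance (toy model of storage slots).
state = {"me": 200, "you": 0, "brother": 0}

def transfer(sender, receiver, amount):
    """Run one transaction; return its (reads, writes) bookkeeping."""
    reads = {sender: state[sender], receiver: state[receiver]}
    if reads[sender] >= amount:
        writes = {sender: reads[sender] - amount,
                  receiver: reads[receiver] + amount}
    else:
        writes = {}  # insufficient balance: transaction fails, no change
    return reads, writes

txs = [("me", "you", 150), ("me", "brother", 100)]

# Phase 1: optimistically execute all transactions in parallel,
# each against the same committed snapshot.
with ThreadPoolExecutor() as pool:
    pending = list(pool.map(lambda t: transfer(*t), txs))

# Phase 2: commit pending results in the original serial order.
# If any value a transaction read has since changed, its pending
# result is stale, so re-execute it against fresh state.
for tx, (reads, writes) in zip(txs, pending):
    if any(state[k] != v for k, v in reads.items()):
        reads, writes = transfer(*tx)   # re-execute with updated inputs
    state.update(writes)

print(state)  # {'me': 50, 'you': 150, 'brother': 0}
```

Run on the 200 USDC example from above, the second transfer's pending result (which optimistically saw a balance of 200) is invalidated at commit time and re-executed, where it correctly fails.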
Okay, because I think some other chains they were like first tried to check, like,
okay, what contracts does it touch?
And then if it doesn't touch those contracts, then you say, okay, you can parallelize it.
But in your case, you just execute it and then you see afterwards what it touches and then you kind of like roll things back almost?
Yeah, that's exactly right.
Although I wouldn't call it rolling it back necessarily because nothing has really been committed.
Right.
But yeah, that's a good characterization, though, that a lot of blockchains that do parallel execution require explicit
dependency specification. So Solana is a good example of this, where when you submit a transaction,
you have to say the exact pieces of state that that transaction is going to touch and indicate
which ones are going to be read and which ones are going to be written to, aka the read-write
locks. And then that information is used as an input to the Solana scheduler to make decisions
about how to parallelize work. And if you mis-specify one of the dependencies, like if
the transaction tries to go out of bounds and touch a piece of state that wasn't specified
ahead of time, then it just automatically fails. So that's sort of a more explicitly defined model
for transactions. And I think you would think that it might allow for more performance because
you have all the dependencies up front. But in practice, what we've found is that you can get a
lot of performance from this pure optimistic approach where you just assume that everything is okay,
that you know, you're just reading dependencies on the fly. But then you put the results in a
pending state and you commit them serially and re-execute if you need to. Maybe we can come back to
this later, but, you know, one question I'm very curious about is the whole question of like
MEV, PBS, because that I think maybe relates to this as well. But maybe let's go through the
you mentioned four things.
So let's go through the other ones first.
Oh, yeah, sure.
So I was telling you about, I guess,
one of two major improvements
that are improving execution.
So I told you about the first one already,
which is optimistic parallel execution.
The second one related to execution
is this new database
that we've built called MonadDB.
And so the thing to know about
this particular topic
is Ethereum stores all of its
state in a Merkle tree. And the benefit of storing all the state in the Merkle tree is that
the Merkle tree has a Merkle root, which is essentially a checksum over all of that state,
and in that way a commitment to all of that state. So like if you and I are both running full nodes
and we both have the same Merkle root at the top of our tree, then we both know that literally
every single piece of state is exactly the same on both of our machines. So we don't have to go like
state by state and compare all of them, we can just compare the Merkle roots. It's a very efficient
way of ensuring that we're on the same page. And the Merkle tree is, you know, a successively
hashed tree where every parent is a hash of all of its children, and then you just kind of like
propagate that all the way up to the root of the tree. And the thing to know about existing systems is
that this Merkle tree structure is generally embedded inside of another database.
For Go-Ethereum, it's LevelDB or PebbleDB.
In Erigon and Reth, it's MDBX, which is another database.
But in any event, like all of these Ethereum clients, they use a different database as an actual store
for all this Merkle tree data.
And it creates a lot of indirection because each of
those databases themselves have another tree under the hood that's being used to define how
data is being stored on disk. So when you want to navigate from the root all the way to one of
the leaves in the Merkle tree, each time you visit a node, you're actually triggering another lookup
into another tree. And so it's just really inefficient to navigate from the root of the
Merkle tree down to any particular node, because each node that you visit is going to trigger an entire lookup into another tree.
So with MonadDB, apologies for the long explanation there, but for MonadDB, we're actually storing the
Merkle tree natively on disk. So there's an exact mapping between how the Merkle tree is laid out
and the locations on disk, the pages. And then in addition to that, there's also a lot of other optimizations like
introducing asynchronous IO support so that many pieces of state can be read at the same time,
bypassing the kernel, and a bunch of other optimizations.
But what you really get is a much more efficient lookup system and also one that can support
effectively parallel reads, which then interacts really well with the optimistic parallel
execution, because in optimistic parallel execution, you are running many transactions,
each of which is at some point
encountering S stores and S loads
and triggering database lookups.
And so while all these database lookups
are being triggered in parallel
by running all these transactions in parallel,
the disk is also able to service
a lot of those requests in parallel
and start serving state
back to each of those transactions.
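The root-as-checksum idea from earlier in this answer can be sketched in a few lines. This is a toy binary Merkle tree, not Ethereum's actual Merkle-Patricia trie, and the leaf encoding is made up for illustration:

```python
import hashlib

# Toy Merkle tree: every parent is the hash of its children, so two
# nodes agree on ALL of their state iff their roots match.

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:              # duplicate the last node if odd
            level.append(level[-1])
        level = [h(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

state_a = [b"alice:200", b"bob:50", b"carol:10"]
state_b = [b"alice:200", b"bob:50", b"carol:10"]
state_c = [b"alice:200", b"bob:50", b"carol:11"]   # one leaf differs

# One root comparison replaces a state-by-state scan.
assert merkle_root(state_a) == merkle_root(state_b)
assert merkle_root(state_a) != merkle_root(state_c)
```

Any change to any leaf propagates up through the parent hashes, so comparing the two 32-byte roots is enough to know whether two full nodes hold identical state.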
Okay, so basically what you're saying
is like there's kind of this,
Ethereum has this like tree structure defined where, you know, the data, the state is hashed
and then you have a root hash, and then that's implemented on top of other tree structures
which depend on the different databases. So you have sort of the tree on tree, which then
makes it inefficient, whereas you guys basically just used the underlying tree structure
of the database to store everything. So you kind of get rid of one layer of
complexity there.
Yeah, that's exactly right.
And that's super important because all of this data is living on SSD.
And the mental model that you should have about an SSD is that it supports a lot of
work being done and a lot of lookups being done in parallel.
So like a good SSD is a million or more IOPS, IO operations per second.
but the latency of each one of those lookups
is somewhere between 40 and 100 microseconds.
So you could think of it as like a bottle that has a really wide opening
and you want to be able to stick a whole bunch of straws into the bottle at the same time
and, you know, like put all the straws in your mouth
and suck a whole bunch of juice out of the jar all at the same time.
But the straws are long.
Like there's a long amount of latency to look up any single piece of storage.
So in the context of that nested tree structure that we were talking about,
you're going to end up having to go back and forth with the disk a whole bunch of times
just to get one piece of state.
And when you can reduce the number of back and forth substantially,
you can get a lot more throughput out of the system.
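The "wide bottle, long straws" picture can be checked back-of-the-envelope with Little's law (concurrency = throughput × latency), using the rough SSD numbers from the conversation:

```python
# Back-of-the-envelope: a good SSD does ~1M IOPS, but each individual
# lookup takes ~40-100 microseconds. It only reaches that throughput
# if many requests are in flight at once (Little's law).

iops = 1_000_000          # lookups per second a good SSD can service
latency_s = 80e-6         # ~80 microseconds per individual lookup

needed_in_flight = iops * latency_s
print(f"~{needed_in_flight:.0f} concurrent requests to saturate the drive")

# One request at a time (a fully serial tree walk) only achieves:
serial_iops = 1 / latency_s
print(f"serial: ~{serial_iops:,.0f} lookups/s vs {iops:,} in parallel")
```

So a serial walker leaves roughly 99% of the drive's throughput on the table, which is why MonadDB's parallel reads pair so well with optimistic parallel execution.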
And so where is that primarily an advantage?
Because I can imagine there's different advantages of this, right?
Like, so one is that when you execute a lot of transactions, you can just do it faster.
But maybe it also makes like running a node more efficient, you know, if it's just like
a normal node where you look up the state, that's not like a validator, or?
Yeah, that's right. The database is just how all the state is stored for any node, whether it's
a non-validating full node or a validating full node.
Either way, they're still going to need to respond to RPC calls and execute transactions
or go look up pieces of state.
And all of those are accelerated by having a better state backend.
Cool.
So we have this optimistic parallel execution.
You have the new database.
And then what's the next one?
Right.
So the two other areas that I mentioned are on the consensus side and on the interaction
between consensus and execution.
So to mention the consensus part first,
we have a new consensus mechanism
called Monad BFT,
which is pipelined two-phase hot stuff.
So hot stuff is a
consensus mechanism that has linear communication complexity.
That means that as the number of nodes in the network
increases,
the overall number of messages
that need to be sent increases linearly with the number of nodes,
as opposed to other consensus mechanisms like Tendermint,
aka CometBFT, where it's quadratic,
like, you know, it's the square of the number of nodes in the network,
because in Tendermint it's all-to-all communication,
whereas in hot stuff, it's generally one-to-many, many-to-one communication.
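The linear-vs-quadratic contrast is easy to see with simple message counts. This is a rough model of a single communication round, ignoring the multiple phases both protocols actually run:

```python
# Rough per-round message counts: all-to-all (Tendermint-style) grows
# quadratically with validator count; leader-based one-to-many /
# many-to-one (HotStuff-style) grows linearly.

def all_to_all(n: int) -> int:
    return n * (n - 1)        # every node messages every other node

def hotstuff_like(n: int) -> int:
    return 2 * (n - 1)        # leader -> others, then others -> leader

for n in (100, 1000):
    print(f"{n} validators: all-to-all {all_to_all(n):,}, "
          f"leader-based {hotstuff_like(n):,}")
```

At 1,000 validators the gap is roughly 999,000 messages versus 1,998 per round, which is why linear communication matters for scaling the validator set.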
Okay, so what's
the kind of number of validators that you expect that Monad can scale to?
We expect a couple of hundred validators participating in consensus on day one of
Monad Mainnet, and then we have a slightly longer-term roadmap to get that to the thousands,
but with the current consensus mechanism implementation, it can support somewhere between
200 and 300 validators participating in consensus.
Okay, because that's kind of similar to, I mean, okay,
Tendermint chains maybe is more, I would say,
sort of up to 200, I think, is probably the most that people run.
What about their block time?
Right.
So Monad BFT in Monad is being configured with a one second block time.
As I said,
200 or more validators participating in consensus.
And the other thing I forgot to mention is that Monad BFT has single slot finality,
which we think is really important because, you know,
it affects the bridging time.
Like if you're trying to bridge off of Monad to another blockchain,
then typically, well, the bridge really should wait until the chain is finalized
before relaying a message to any other chain.
and in Ethereum where you have somewhere like 12 to 18 minute finality,
that means it just takes a super long time to get your assets off of Ethereum to another blockchain.
Single-slot finality really helps a lot with faster bridging and faster settlement times.
What are your thoughts on the comparison of this versus the Solana consensus,
which is, you know, this proof of history, BFT-like consensus.
I think Solana has Tower BFT, which is not single-slot finality.
I think that some of the benefits right now of Tower BFT are support for a higher number of validators participating in consensus.
I think they have somewhere between 2,000 and 3,000 validators right now.
which is actually quite impressive.
Like, I know that the meme on Twitter,
or at least in the,
a couple years ago,
like people would always say that Solana is really centralized.
And there are some aspects in which,
um,
I would say that that could be true.
Um,
for example,
the high RAM requirements of running a Solana node.
But on the other hand,
from a pure consensus perspective,
um,
it is impressive that Solana has,
uh,
2000 plus validators participating in consensus.
but that does come at a cost,
and the cost is really that finality is, like, not single slot.
It takes some time.
I think in practice they say like after 32 slots,
you can consider it to be finalized,
but it's a little bit probabilistic even there as well.
And 32 slots in Solana, at 32 times 400 milliseconds, is,
what is it, like 12.8 seconds.
Yeah, and I mean, one big difference with Solana,
I guess that's not consensus related, no,
but it's that there's no Merkle proofs, no?
Yeah, yeah, that's another example
where Solana removed the Merkleization,
which I think has an impact on bridging,
has an impact on the ability to run a light client.
In Solana, they say that the light client is like a node
that submits a transaction that creates a, yeah, a transaction that generates a proof,
but it actually has to be included in the blockchain in order to generate like that proof.
So it's kind of, it's not really a light client in the same way that when people typically
talk about light clients.
And from the light client perspective, those are, how do the light clients work for Monad?
So I think on day one, we will not have a light client implemented,
but it definitely could be implemented in the future with the consensus mechanism that exists.
And yeah, there's no impediment per se, because there is a Merkle root and there is, like, all of the other things that you need.
Okay, cool. So that was consensus. And then let's go to the last one.
Monad implements something that we call asynchronous execution.
And the way to understand that is, it's sort of best understood by just mentioning some numbers here.
So Ethereum has 12 second block times.
But the actual rough time budget for execution is only about 100 milliseconds,
which is, you know, if you do the math, like less than 1% of the block time,
And so that's really interesting that the budget for execution is so small,
like such a small fraction of the block time in Ethereum or in other blockchains.
And the reason for this is the fact that execution and consensus are interleaved with each other.
So typically in blockchains, you know, what will end up happening is the leader ends up choosing a list of transactions and then executing all of them, generating the Merkel root of the resultant state.
and then sending that as a block proposal to everyone else.
Then everyone else receives that.
Can you repeat that?
How are they interleaved?
Yeah.
So in most blockchains, execution and consensus are interleaved.
Execution is the single node problem of, you know, given a list of transactions,
what's the end state?
And then consensus is a distributed systems problem of nodes talking to each other over a network.
The nodes, if they're globally distributed, which they should be, then that means around-the-world communication, which can take hundreds of milliseconds.
There might be multiple rounds of communication.
So I think all in all, what you can see is that consensus ends up taking the vast majority of the block time,
and execution ends up being squeezed into a very small fraction of that block time because of the interleaving of execution and consensus.
Yeah.
Yeah. But for example, that's something where Solana is also much more, because of their system, there's much more execution, right?
Right. So with Solana, they're also exploring the idea of asynchronous execution.
Toly has written about this a couple times on Twitter, which has been really interesting for us to see.
But, yeah, Monad has been, you know, since,
day one, asynchronous execution has been a big part of the overall design, and asynchronous
execution means decoupling those two problems of consensus and execution from each other.
Asynchronous execution is the idea of moving execution out of the hot path of consensus
into, I guess what I would call a separate swim lane that is slightly lagging consensus.
So what ends up happening is as soon as the nodes in the network come to consensus about an official ordering of transactions,
then two things happen in parallel.
One is they can start coming to consensus over the next block.
And the other thing is they can all each independently execute that list of transactions that they've just agreed upon.
And so when you move the execution out of the hot path of consensus into the separate swim lane,
you can massively raise the budget of execution and use the full block time as opposed to
only a small fraction of it.
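Keone's pipelining argument can be sketched as a toy schedule, where consensus on block N+1 overlaps with execution of block N (the function names and timings below are purely illustrative, not Monad's actual parameters):

```python
# Toy model of interleaved vs. asynchronous (pipelined) block processing.
# All numbers are invented for illustration, not Monad's real figures.

def interleaved_total(blocks, consensus_ms, execution_ms):
    # Interleaved: each block must finish both consensus AND execution
    # before the next slot starts, so execution shares the slot.
    return blocks * (consensus_ms + execution_ms)

def pipelined_total(blocks, consensus_ms, execution_ms):
    # Asynchronous: consensus on block N+1 overlaps with execution of
    # block N, so after the first block the chain advances at the pace
    # of the slower stage rather than the sum of both.
    stage = max(consensus_ms, execution_ms)
    return consensus_ms + execution_ms + (blocks - 1) * stage

blocks, consensus_ms, execution_ms = 100, 400, 400
print(interleaved_total(blocks, consensus_ms, execution_ms))  # 80000
print(pipelined_total(blocks, consensus_ms, execution_ms))    # 40400
```

The win is less about total latency than about budget: in the pipelined schedule, execution gets the whole block time to itself instead of a sliver at the end of each slot.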
Ah, so first the network agrees on just the order of all the transactions and how they're
being executed, reaches consensus on that, and then the execution happens.
That's right, yeah.
Okay, okay, very interesting.
So that also means, for example,
if you're now a proposer, there's, like, no discretion you can apply, I guess.
That's a good question.
So as a proposer, you still can apply discretion.
And we think that in practice, the proposers definitely would apply discretion, because that allows
them to, you know, more optimally choose the set of transactions that will end up paying fees.
You were asking earlier about how MEV will end up working, and we do think that there will probably be some
sort of mechanism for network participants to submit ordering preferences to the block proposer
in some way and attach a tip alongside that.
So then that tip revenue is additional revenue for the proposers.
Because the proposers are the ones that come up with the ordering of transactions.
Correct.
And then that gets consensus'd on, and then the execution part is basically predefined.
Correct. That's exactly right.
So I think you said it better than I did the
first time, which is: when you think about a blockchain, a blockchain is really just a bunch of
blocks that are in sequential order. And then in each block, a list of transactions, which are also
in sequential order. So if you sort of let the blocks themselves fade into the distance
for a second, you just literally have like a long, long, long list of transactions that are all
canonically ordered. And if
everyone, or like, Brian, if you and I are both running nodes,
and we both have exactly the same list of transactions, starting from Genesis,
then we should have the same exact state of the world because we're both
applying the same transactions one by one by one by one by one,
and each time making the exact same state transition and getting the same
state of the world, the same Merkle root. So the ordering
of the transactions purely determines the execution. It purely determines what the correct state is.
Yeah, basically, like, the only reason why we have Merkle roots is so that we can check each other's
work and make sure that, you know, neither of our computers, like, got hit by cosmic rays and
made a computational error. It's like, just to make sure that we're doing the same thing.
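The replay argument can be illustrated with a toy state machine: two nodes that apply the same ordered transactions from genesis end up with identical state, and any deterministic commitment over the state plays the checking role he describes. (Real clients use Merkle Patricia tries; the plain hash here is a stand-in, and all names are illustrative.)

```python
import hashlib
import json

def apply_tx(state, tx):
    # Toy transfer: a deterministic state transition.
    sender, receiver, amount = tx
    state = dict(state)
    state[sender] = state.get(sender, 0) - amount
    state[receiver] = state.get(receiver, 0) + amount
    return state

def state_root(state):
    # Stand-in for a Merkle root: any deterministic commitment suffices
    # for checking that two nodes computed the same resulting state.
    payload = json.dumps(state, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

genesis = {"alice": 100, "bob": 50}
txs = [("alice", "bob", 10), ("bob", "carol", 5)]

# Two independent nodes replay the same canonical ordering from genesis.
node_a = genesis
node_b = genesis
for tx in txs:
    node_a = apply_tx(node_a, tx)
for tx in txs:
    node_b = apply_tx(node_b, tx)

# Same ordering => same state => same commitment.
assert state_root(node_a) == state_root(node_b)
```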
Okay, so, but it also means now if I'm the proposer, I,
you know, I get all these transactions and now I can come up with at least like an order of these transactions.
So then there is potentially a lot of value, right, in changing this order, you know, accepting different orders, putting in your own transactions, like all that kind of stuff.
Yeah, that's right.
So, and this is really just true of any, any blockchain, but, yeah,
almost every blockchain is leader-based.
So there's a rotating leader,
and then when it's your turn as the leader,
you sort of have the privilege of being able to choose
from various pending transactions that are in the mempool
and assembling the next block, i.e. assembling the next
ordering of the next set of transactions that get enshrined into the history of the blockchain.
Of course, you need to choose valid transactions so that everyone else, like, verifies them and accepts them and votes in your block.
But you have discretion over what the ordering is.
And what I'm saying is that that choice about the ordering is all determined during consensus.
So you as the block proposer, you choose an ordering, you send it to everyone, everyone looks at it, checks that all the transactions are valid, does some other validity checks, which I can talk about in a
second, and then votes yes. And then once a supermajority of the network, like supermajority
stake weight has voted yes, now that list of transactions is now enshrined in the history of
the blockchain. But that can be done actually before execution of that list of transactions
happens. And so that's exactly what's happening in asynchronous execution is having the nodes all
agree on the official ordering. And then once they've agreed upon it, then they can do two things
in parallel, which are, like, start working on consensus on the next list of transactions,
but then in parallel, go execute the things they all just agreed upon.
So I'm curious now, do you think that where this is going is that we're also
going to have a sort of proposer-builder separation, where as a proposer, I'm going
to basically, you know, plug into some builder or some other entity that, you know, applies a lot
of like intelligence to basically give me an order that sort of produces the highest value
I mean, highest value for that builder, because presumably there's a lot
of value, right, in determining that order. Is that the way you expect things to go? Yeah, I think so.
So the question of how the ordering is chosen is sort of orthogonal to all the other stuff that I mentioned.
Like in the story I was telling you about how the proposer just chooses an ordering and then messages it to everyone.
They all agree upon it.
Now the order is enshrined and now everyone can go execute in parallel to consensus in the next block.
That entire story just kind of blackboxed the decision of the proposer.
of how he or she chose that ordering.
And to your point, just now, like,
that decision could be made by outsourcing that decision
to, like, a third-party network that is,
you know, like a proposer builder,
like a PBS type thing in Ethereum,
where there's a system for people to be able to submit bundles
to the builders, have the builders build the
block, have the block be submitted to a relay, which conducts a private auction among different
builders to choose the ordering that offers the best overall amount of revenue before presenting
some option, like the best option, to the proposer, and having the proposer choose that and
enshrine it. All of that could still work the same way. I don't know;
it's sort of out of protocol.
So we'll see how it develops over time.
But yeah, I think you could think of the ordering choice
as being orthogonal to the other considerations
of how consensus and execution end up working.
But, like, by default, when Monad launches,
you know, proposer and builder would be basically the same.
And then, so it's not like enshrined proposer-builder separation.
And then someone may come and say,
hey, we're going to create a modified client
that separates out the proposer
or plugs in some kind of builder
mechanism.
That's right, yeah. The default mechanism
for block building is
priority gas auction.
So choosing the ordering of transactions
based on descending gas bid.
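The default block-building rule he describes, ordering by descending gas bid, is straightforward to sketch (the fields and numbers below are illustrative, not protocol constants):

```python
def build_block(mempool, gas_limit):
    # Priority gas auction: order pending transactions by descending
    # gas price, then pack greedily until the block gas limit is hit.
    ordered = sorted(mempool, key=lambda tx: tx["gas_price"], reverse=True)
    block, used = [], 0
    for tx in ordered:
        if used + tx["gas"] <= gas_limit:
            block.append(tx)
            used += tx["gas"]
    return block

mempool = [
    {"id": "a", "gas_price": 30, "gas": 21_000},
    {"id": "b", "gas_price": 90, "gas": 50_000},
    {"id": "c", "gas_price": 60, "gas": 21_000},
]
# "b" bids highest, then "c"; "a" no longer fits in the gas limit.
print([tx["id"] for tx in build_block(mempool, gas_limit=80_000)])  # ['b', 'c']
```

This is exactly the point where a builder could plug in: anything that returns a valid ordered list can replace the simple sort.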
I know, like, I think PBS is something
that's fairly, you know,
controversial.
I actually
remember we did an interview with Vitalik at EthCC and asked him the question, hey, do you
feel it was the right decision? And I'm not really sure if this PBS separation
was the right decision. Do you feel like, is this a desirable end state, this kind of separation?
Because in the end, right,
there is potentially a lot of value
that ends up kind of being extracted by,
I mean, in Ethereum's case, right,
we now have, I think, really just two builders
who are completely dominant.
Like, how do you want this ecosystem to evolve?
Do you hope that there's going to be
a lot of builders who compete,
or do you sort of say, like, I don't really care,
it's up to the market to figure this out?
Or how do you want to see this evolve?
Right.
Yeah, it's definitely a complex topic.
I think that, you know, one thing that I hope we can accomplish, through whatever block building mechanism becomes sort of the dominant one, is that it involves a lot of builders.
Ideally, it would be possible for the proposers themselves to just run the software themselves and build an optimal block.
That way, it's much more decentralized in terms of that, the thing that you were mentioning, like, ultimately the set of agents that are sequencing transactions, we want that number to be as high as possible.
I think in terms of the value capture in Ethereum right now,
the proposers are capturing a lot of the value because although there aren't that many entities
that have high market share in the builder space,
due to competitive forces, they do have to bid up to give up most of the value
that is in that block to the proposer.
So it is still very beneficial to the proposer network, which, you know, that is sort of revenue maximizing for the proposers, which is then good for all token holders because anyone can, you know, has the ability to stake and thus, you know, sort of get the returns of a proposer, or at least most of them.
There's also the part about how, over time, we're seeing more value get captured by applications
through mechanisms where the builder needs to rebate some of the value to the application itself,
and you know then when the revenue flows back to the application that can be good because
either it makes the proposition of building an app more attractive,
which I think a lot of us in the space would agree is a good thing to happen
because then it'll just encourage more ambitious apps to get built
or potentially the revenue, although originally going back to the app,
then could flow back to either LPs of that particular DeFi protocol
or maybe a rebate to the taker, to the end user.
So I think those are all different things that can evolve over time.
But yeah, I think to your original question, like we do, I think it is objectively better for there to be more actors that are ultimately choosing the ordering of transactions for the purposes of censorship resistance and decentralization.
Okay, cool.
Fantastic. So now I think we're coming to the fourth thing you mentioned.
Oh, actually, I think, you know, I've gone all the way out the plank, because the first two things were both execution related.
Then the third one is consensus. And then the fourth was the separation between consensus and execution.
Oh, the asynchronous. Yeah, yeah.
Okay, okay. Yeah, yeah, yeah. Okay, yeah.
Optimistic parallel execution, the new database,
the consensus,
and then asynchronous execution.
Cool. So where do we end up with this?
Like, what is the kind of
throughput that is achievable here?
Yeah, I think the cool thing about
these different technologies
that we're introducing is that they all stack
on top of each other.
So it's like, you know,
it's always exciting when you get
a bunch of coupons in the mail, and then, you know, one is like 50% off, one is 25% off,
but they actually stack on top of each other because then you really have a magnifying effect.
So as I was mentioning before, asynchronous execution on its own is a massive unlock because,
you know, in existing interleaved systems, only a very small fraction of the block time
can be used for execution, whereas by decoupling the two,
basically the full block time and expectation
could be allocated to execution.
So we can pack in a lot more; literally, we can use the full block time as execution time, as opposed to only a small fraction.
And then, in addition to that,
you have parallel execution,
so we can do a lot more work in parallel.
And then we can also have a more performant state database,
so we can respond much more efficiently to all these SLOAD and SSTORE requests that are happening in parallel.
And we have a more performant consensus mechanism that can keep up with this entire system of execution that's fast,
then we can really, really see massive gains.
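The coupon analogy amounts to saying that independent improvements multiply rather than add. A back-of-the-envelope sketch, with factors invented purely for illustration (they are not Monad's measured numbers):

```python
# Hypothetical, invented speedup factors to illustrate that independent
# improvements compound multiplicatively rather than additively.
speedups = {
    "async execution (full block time)": 10,
    "parallel execution": 4,
    "faster state database": 3,
}

total = 1
for factor in speedups.values():
    total *= factor

print(total)  # 120x overall in this made-up example, vs. 17x if added
```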
The other analogy I want to mention is that in asynchronous execution,
like, it reminds me a lot of the movie Limitless,
which is sort of based on the premise that you only use 10% of your brain.
What if you could use 100% of your brain?
Like imagine how superpowered you would be.
And of course, that's like a little bit misleading because I think that 10%,
like the denominator has a lot of gray matter, like supportive tissue or something in there.
But it's not really true that you can go from 10% to 100%.
But it's a nice fantasy to think about.
And in this movie, there's this guy who takes a pill that allows him to use 100% of his brain, and he's just immediately superpowered, and he's doing awesome.
And then, of course, there's like a narrative arc where, you know, by being so powerful,
he gets in a lot of trouble, and then he's able to somehow work out of it.
So asynchronous execution is like that, where you're going from using only a small portion
of the block time to using the full block time.
And then when you stack this on top of these other improvements, we can really get significant,
significant throughput.
So now back to the question you're actually asking.
So Monad is supporting over 10K TPS
replaying existing Ethereum history.
So this is not 10,000 transfers or 10,000 ERC20 transfers.
This is 10,000 real transactions
from the distribution of recent Ethereum blocks per second,
which ends up being about a billion transactions
per day or about
1 billion gas per second
which actually is
kind of funny, like, now on Twitter
a lot of people are talking about this goal of
gigagas throughput,
or 1 billion gas per second,
which is the throughput that we're able to
achieve on Monad
by stacking these different technology
improvements while running
on reasonable hardware requirements
which I think is another thing to point out
because a lot of the goals
of getting to gigagas throughput
are assuming that you're running a centralized L2 where there's one server,
and that one server has like a massive amount of RAM and, you know,
is like kind of a supercomputer.
That doesn't really work with the principles of decentralization.
Like we want to have a fully decentralized network where the nodes are literally globally
distributed with the full overhead of consensus.
We want anyone to be able to run a node;
it shouldn't be expensive to run one of these nodes.
So if you can get to gigagas
throughput, but you have to
have a massive, like,
you know, a
terabyte of RAM or more, like,
people are talking about having
like machines that have 10 terabytes of RAM.
It's just like crazy, crazy machines
in order to get that throughput.
But with Monad, we're getting that
throughput while using commodity hardware,
like a server that costs
$1,000 a year to run.
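The figures quoted here are mutually consistent, which is easy to check with quick arithmetic:

```python
# Sanity-checking the quoted throughput figures against each other.
tps = 10_000                       # quoted: over 10K TPS
seconds_per_day = 86_400
print(tps * seconds_per_day)       # 864000000 -- "about a billion" tx/day

gas_per_second = 1_000_000_000     # quoted: 1 gigagas per second
print(gas_per_second // tps)       # 100000 gas per tx on average, which is
                                   # plausible for the mix of real
                                   # transactions in recent Ethereum blocks
```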
Okay, okay. And so can you just contrast this with Solana today? How does it compare?
I think Solana is processing somewhere between 2,000 and 3,000 transactions per second of real non-voting transactions.
And, you know, whereas Monad is supporting over 10,000 transactions per second. So it's like, you know,
3 to 5X the throughput right now.
And then also, Solana is using relatively high hardware requirements.
I believe the requirement is 256 gigs of RAM right now,
whereas Monad requires nodes to have 32 gigs of RAM.
How do you imagine this is going to evolve in the future?
Do you think you can scale that a lot more,
or is this sort of, are you hitting the limits here with these improvements?
I think there's the capacity for another like 2 to 5x of throughput improvements.
The real constraint is on the networking side.
So Monad wants nodes to have 100 megabit bandwidth,
and the throughput, up to a certain point, is sort of linear on the
consensus side with the amount of bandwidth that's allocated.
But we don't think it's reasonable for a decentralized network to require
gigabit bandwidth, for example.
Like there's just sort of like a physical limitation to the overall throughput that's imposed
by the networking.
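A crude upper bound shows why bandwidth ends up as the binding constraint. The average transaction size below is my assumption for illustration, not a figure from the conversation:

```python
# Rough ceiling implied by the 100 megabit node bandwidth requirement.
bandwidth_bits_per_s = 100_000_000   # 100 Mbit/s, the stated requirement
avg_tx_bytes = 200                   # assumed average transaction size

raw_ceiling = bandwidth_bits_per_s // 8 // avg_tx_bytes
print(raw_ceiling)  # 62500 tx/s before any consensus messages, gossip
                    # redundancy, or erasure-coding overhead
```

Once those overheads are subtracted, the headroom above 10K TPS shrinks to a small multiple, which fits the estimate of roughly another 2 to 5x.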
Okay, okay.
So basically bandwidth becomes like the main bottleneck.
and then that's something where, okay, if you go up with bandwidth requirements,
it just means that you have to start compromising decentralization to some extent
or to another extent.
It just becomes harder to run a node like this.
I mean, I think in Solana's case, right, that is a real challenge.
I mean, we've had various times in the past, you know, where data centers would basically say,
hey, it's getting DDoSed because it's getting so much
traffic. So I think finding, like, data centers that support Solana validators, it's not so easy.
Right. Yeah. I think the most important thing to focus on is the fact that, for Solana or Monad, there is a, you know, commitment to having a fully geographically decentralized validator set, having hundreds or
thousands of nodes participating in consensus.
Like a lot of the
more recent narrative has been around
like just like very, very centralized setups
where there's a single sequencer.
Like if in that situation
there's no consensus at all. And so if there's no
consensus, then that whole thing about the bandwidth
doesn't become a consideration because the node doesn't have to talk to
anyone else. Like it literally is just the one super node that all the requests are flowing to,
and all of the, yeah, all of the RPC calls are going to, or maybe going to like a slave or
something, but, you know, something in the same data center. So I think that's the single biggest thing,
and that is ultimately what the constraint will be: the networking limitations to support
a fully geographically distributed, decentralized network.
But I think it is very important.
And then I think from that point onward, Monad would perhaps pursue a horizontal
scaling strategy where there's several instances of Monad that each are very computationally
dense, like each delivering over 10K TPS of throughput.
And then, you know, when that 10K gets saturated by whatever set of apps that are there,
then maybe there's, like, another Monad instance that's also highly decentralized,
has nodes all around the world, and that has a, like, different
focus as well.
Right.
It becomes sort of a sharding-like design. But, I mean,
fortunately, 10K TPS is quite a bit, right?
So it probably buys some time.
Yeah, you want to shard at,
you want to shard when you've reached the limits of what,
you know, the hardware can really support.
Like, I think that it doesn't make sense to shard.
Like, if each network could only support 100 TPS,
and then you need to have 100 shards in order to get to 10K TPS,
that's not very good.
But if one shard itself is 10K TPS,
and then you set up 100 to get to a million TPS,
that seems much more justifiable to me.
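The justifiability argument here is just ceiling division over per-shard density:

```python
def shards_needed(target_tps, per_shard_tps):
    # Ceiling division: how many shards to reach a target throughput.
    return -(-target_tps // per_shard_tps)

# Sharding thin networks vs. sharding dense ones, per the argument above:
print(shards_needed(10_000, 100))        # 100 shards just to reach 10K TPS
print(shards_needed(1_000_000, 10_000))  # 100 shards to reach 1M TPS
```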
Yeah.
And, I mean, it sounds like previously you were sort of referring a bit to, like,
MegaETH, right,
which is the other project that's trying to super-scale the EVM,
but doing it with this kind of supercomputer, centralized approach.
To me, it seems very obvious that's, like, a massive compromise on, in the end,
what we're trying to do here, right?
We're having, like, decentralized networks.
But, yeah, I mean, I guess, do you think,
let's say, MegaETH will, at the expense of decentralization,
probably be able to process even much more than Monad?
Or what are your thoughts on, like, MegaETH as a comparison?
Yeah, I think there's a number of different projects that are trying to build, like,
high-performance L2s, taking advantage of the fact that there's no consensus overhead,
and, you know, you can run that one node on a really big box.
I think another example is Rise L2 is also doing this.
I think Radio is also doing this.
I mean, you could argue that Hyperliquid itself is also kind of doing this
because the nodes are all in Tokyo.
Like in order to run a node, you kind of have to have it in this one geography,
with high RAM requirements on the nodes.
So yeah, I think there's a number of different projects that are all trying to
trade decentralization for performance.
And I mean, if you think about it, like just your expectation should be that there could be more
performance if you cut out all of the decentralization aspects and all of the overhead
of consensus and so on. I think Mert actually has a pretty funny take on this where
he talks about how L2s really should be a lot higher TPS than Solana
because they're all running single centralized sequencers
and you can add all of their throughputs together.
The fact that, like, right now, Solana has higher throughput than all the L2s combined,
when the L2s have this massive advantage of centralization, is actually pretty crazy.
So yeah, I think that there are different designs like there's definitely
some tradeoffs being made, but the goal of Monad is to offer really high performance while
having a very high degree of decentralization as well at the layer one level. And we think that
that's quite important. That's why we're all here is to help build new technology that
helps decentralization to have a greater impact and decentralized apps to have a greater impact.
So I'm curious about the developer experience here. Is it
basically just going to be, hey, I'm developing just like for Ethereum, and it'd be much the same,
but the difference is just faster block times, more throughput, cheaper transactions.
Or are there differences when it comes to developer experience as well?
On day one of Mainnet, the focus is really just pure EVM equivalence,
so that you could take any application built for Ethereum and redeploy it on Monad without any changes.
You don't even need to recompile it.
We are thinking about some additional quality of life improvements for developers.
Those are things like raising the bytecode size limit from 24 kilobytes to 48 or maybe even more than that.
It's things like adding support for new pre-compiles.
For example, there's a number of different cryptographic functions that
frequently come up that currently have to get implemented in Solidity and are very expensive to do so.
But if there's a native implementation, then that should just kind of be baked into the node code
and you should just be able to call it with a pre-compile.
So we're working on some of those and they will either be included in Monad Mainnet or,
if not, right at Mainnet, then at some point a little bit later on.
We're also looking at the account abstraction space quite a bit and thinking about, well, first of all, following along with the existing EIPs that are contemplating different ways to ultimately get to native account abstraction.
I think EIP 7702 is getting a lot of traction.
So this is the EIP that would sort of allow any EOA to
effectively become a smart contract account as well
by having a pointer in its code slot to point to
another smart contract, like another address where code lives.
So we're looking into that, and again, it'll either be
included in Mainnet or probably sometime shortly thereafter.
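For reference, the pointer mechanism EIP-7702 specifies is a short delegation designator written into the EOA's code slot: the bytes 0xef0100 followed by a 20-byte address. A node can detect and resolve it roughly like this (a sketch of the spec's encoding, not actual client code):

```python
# EIP-7702 delegation designator: 0xef0100 || 20-byte delegate address.
DELEGATION_PREFIX = bytes.fromhex("ef0100")

def delegate_of(code: bytes):
    # If the account's code slot holds a delegation designator, return
    # the address whose code should be executed in this account's context.
    if len(code) == 23 and code.startswith(DELEGATION_PREFIX):
        return "0x" + code[3:].hex()
    return None  # ordinary EOA (empty code) or a normal contract

addr = bytes.fromhex("11" * 20)
print(delegate_of(DELEGATION_PREFIX + addr))  # 0x1111...1111
print(delegate_of(b""))                        # None
```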
But yeah, at the end of the day from a developer experience,
developers can expect
a fully EVM equivalent experience
so that they don't have to change anything
but in addition
our team is evaluating ways to make
life easier for developers
so that it's actually easier to develop on Monad
rather than on Ethereum.
Cool. I'm curious on a very different topic.
There's a lot of competition around L1s
and building community
and sort of getting mindshare. It's not easy.
I think you guys have done a fantastic job there
and it feels like there's already a lot of interest in Monad,
people building on Monad.
What have you guys done in that front that has had the biggest impact?
Yeah, I think everything is very synergistic
in the sense that
I think there's a lot of people in the crypto space
that are really passionate about the ideals of the space
and excited about seeing the success of new, new decentralized apps and excited about trying new things and giving feedback.
And also that, you know, frankly, spend a lot of time on crypto Twitter every day and follow along with a lot of the storylines.
I think what our team has done well is just creating a welcoming home for many people that share these common interests.
and especially during the bear market of 2022, 2023, just creating this welcoming environment
where everyone was kind of a little bit down in the dumps from all the negative news and
sentiment and so on around crypto, but still very passionate.
And a lot of people sort of ended up joining the Monad community and making a ton of friends
and contributing in different ways, which then really really,
has a flywheel effect of now we're at the point where artists that are creating new art
involving the Monanimals, which were created by members of the community, or, like, just leaning
into all the memes and jokes and fun, then have a massive audience for their art, which then is
very just encouraging to them and gives them more of a reason to create more art, which then
creates more enjoyable experiences for everyone else, which then, you know, is very, very positive-
sum. And then, you know, lastly, it's also very positive for builders who are building on Monad,
because then they immediately get some moral support from people who are cheering for them to be
successful and trying out the beta versions of their products and giving feedback and becoming
community members. And, yeah, I just think it's really about the fact that,
for various reasons, the community has attracted a lot of people who are very long-term oriented and
care a lot about the space and have enjoyed making friends and then getting to go to meetups around
the world to meet up with their friends that they've been hanging out with a lot in person in real
life. One last anecdote that I'll mention is, so around DevCon, we hosted the Monad Madness
pitch competition. This is the second iteration of something that we're hoping will become like
very much a fixture of what we do. But it's, it was a competition for 25 teams to present in front of a
panel of, you know, really leading investors in the space. We had people from Paradigm, Electric
Capital, Pantera, Animoca, IOSG, and then, anyway, those were the judges, and then a bunch of other investors were in attendance in the audience.
But we had, anyway, so it was like these teams got to go on a giant stage, pitch their project, compete for $500,000 worth of prizes, get the attention of judges so that they
could attract money in their next fundraise, and then also got over 400 people to attend in
person, a thousand people attending on the live stream. And then kind of around this, we had
over 150 people fly from other countries, mostly in the Asia area, but we had people
traveling from as far away as Greece and Turkey and so on, all the way flying to Thailand
just to go attend the community meetup and spend the week like hanging out with their friends that
they've met online. So yeah, I think in short, there's just an incredible amount of energy and
excitement in the Monad community. And then that has translated into benefits for builders as well,
which then makes the ecosystem stronger, which then pulls more people in and hopefully
pulls a lot of people in that have never known anything about crypto in the past as well.
The maybe final question. So what are the timelines here? Like when do you expect
main net to launch?
We're expecting
Mainnet to launch
sometime early next year.
No exact date at the very moment,
but the team is working
really hard on this.
And I think in particular,
we're expecting to launch
the TestNet imminently in 2025.
Cool.
Well, thank you so much, Keone.
It was really great.
I really enjoyed getting into
the details here.
And I feel like this is a lot of like very smart and interesting and reasonable decisions that you guys have made that I think can can end up in a really powerful blockchain.
So thank you so much for coming on.
It was a great pleasure.
Yeah.
Thank you for having me, Brian.
It's really nice chatting with you.
