Epicenter - Learn about Crypto, Blockchain, Ethereum, Bitcoin and Distributed Technologies - MultiversX: Blockchain Sharding 101 - Lucian Mincu
Episode Date: August 2, 2024

In 2017, Vitalik Buterin defined the 'Scalability Trilemma', which consisted of 3 attributes that every blockchain had to balance depending on its intended use cases: decentralisation, scalability and security. While Ethereum sacrificed scalability in favour of security and decentralisation, others prioritised throughput over the other two. However, a solution was proposed, inspired by Web2 computer science: sharding. Despite the fact that Ethereum's ossification and significant progress in zero-knowledge research led to a shift in Ethereum's roadmap away from execution sharding towards L2 rollups, there were other L1s that were designed from the get-go as sharded blockchains. One such example was Elrond, which implemented beacon-chain PoS consensus alongside a sharded execution layer. Their recent rebranding to MultiversX alludes to a multi-chain, interoperable ecosystem, in which sovereign chains can communicate in a similar manner to cross-shard transaction routing.

Topics covered in this episode:
- Lucian's background and founding Elrond (MultiversX)
- Elrond's validator & shard architecture
- Cross-shard composability
- VMs, smart contracts and transaction routing
- Self-sovereignty and modularity
- MultiversX vision
- Roadmap

Episode links:
- Lucian Mincu on Twitter
- MultiversX on Twitter
- xPortal on Twitter

Sponsors:
- Gnosis: Gnosis builds decentralized infrastructure for the Ethereum ecosystem, since 2015. This year marks the launch of Gnosis Pay, the world's first Decentralized Payment Network. Get started today at gnosis.io
- Chorus One: Chorus One is one of the largest node operators worldwide, supporting more than 100,000 delegators across 45 networks. The recently launched OPUS allows staking up to 8,000 ETH in a single transaction. Enjoy the highest yields and institutional-grade security at chorus.one

This episode is hosted by Felix Lutsch.
Transcript
Our intuition was, okay, some sort of parallelization should happen, with some kind of sharding,
and proof of stake was intuitively the design that we should approach.
And everybody said, no, it's not doable.
If we take the blockchain trilemma from Vitalik, that you cannot achieve scalability,
security and decentralization without compromising on any of those, without sharding.
I always envision that Ethereum would eventually define a standard where all those headers
or data that is being pushed into the blobs into the Ethereum beacon chain
will kind of get synchronized across the L2s.
You kind of build some sort of a trustless finality model
synchronized across all the L2s
if you would apply the same Elrond or MultiversX model to the Ethereum case.
The only problem is...
This episode is proudly brought to you by Gnosis, a visionary collective committed to fostering and expanding applications for a decentralized future. Gnosis is at the forefront of innovation with Gnosis Pay, Circles, and Metri, revolutionizing open banking and creating a superior form of money. With Hashi and Gnosis VPN, they are building a more resilient and privacy-focused open internet. Are you seeking a robust L1 to launch your project? Well, look no further than Gnosis Chain. Enjoy the same development environment as Ethereum, but with significantly lower transaction fees, and with a robust network of over 200,000 validators, Gnosis Chain stands as a credibly neutral and resilient foundation for your application. Governance of Gnosis is driven by GnosisDAO, where everyone has a voice in shaping the project's future. Join the Gnosis community today by participating in the GnosisDAO governance forum. You can deploy your project on the EVM-compatible and highly decentralized Gnosis Chain or help secure the network by running a validator with just a single GNO and low-cost hardware. Embark on your journey towards decentralization today at gnosis.io.
Chorus One is one of the biggest node operators globally and helps you stake your tokens on 45-plus networks like Ethereum, Cosmos, Celestia, and dYdX. More than 100,000 delegators stake with Chorus One, including institutions like BitGo and Ledger. Staking with Chorus One not only gets you the highest yields, but also the most robust security practices and infrastructure that are usually exclusive to institutions. You can stake directly to Chorus One's public node from your wallet, set up a white-label node, or use the recently launched product, OPUS, to stake up to 8,000 ETH in a single transaction. You can even offer high-yield staking to your own customers using their API. Your assets always remain in your custody, so you can have complete peace of mind. Start staking today at chorus.one.
Welcome to Epicenter, the show which talks about the technologies, projects, and people driving decentralization and the blockchain revolution.
I'm Felix Lutsch, and today I'm speaking with Lucian Mincu, who is the co-founder and CIO at MultiversX.
MultiverseX, previously known as Elrond, is a fully sharded blockchain network and ecosystem.
Hi, Lucian, welcome to Epicenter.
Hey, hey, Felix.
Pleasure to be here.
Awesome. Yeah. So glad to have you. I think, yeah, you've been in the space for a long time with Elrond previously, now MultiversX. I think there's like a lot of stories you can tell about proof of stake, about what you've been building. And so, yeah, wanted to basically just start there. Like, how did you get into crypto and started building Elrond and now, I guess, MultiversX?
Definitely. So first things first, thanks a lot for the invite.
Thanks for hosting me, Felix.
By the way, big, big congrats and respect for the entire Epicenter team.
I think Epicenter is perhaps the podcast where I learned most of the crypto stuff in my entire life,
like in my entire career.
So I've been following ever since I met Sunny in 2017. Just to give a brief context of how I discovered Epicenter first, and then we'll move to my background. But actually, I met Sunny in Zug, at one of the very early crypto conferences. He was presenting the Cosmos ecosystem, and it was clear to me. I think it took one and a half years after, if I remember, after Sunny mentioned that the Cosmos launch would follow very shortly. And immediately, actually, I fell in love with a lot of the stuff that you guys were presenting, and I'm more than happy, as I said. Big congrats. And maybe the second point, just as an upfront disclosure: I have not done any podcast in the last seven years since I built or started Elrond. And the main reason actually why I'm here is after listening to one of the latest podcasts from Epicenter with Paolo Ardoino, where he mentioned, we all as technical founders thought, okay, we are going to build such good products that the products will speak for themselves. And that was my life thesis. If everything went for seven years, six years, only with Elrond and not going out, then basically once I heard his story and how important it is to go out, yeah, I decided I'll reach out, and here I am. So just as a short intro, a big, big respect for the entire Epicenter team.
Awesome. Yeah. Thanks so much. That's such a great anecdote. I think, yeah, we're super glad to have you and honored to be chosen as the first. I think your project obviously is technically one of the very advanced sorts of blockchain infrastructure, so it is only fitting that you come on Epicenter to dive into it. But yeah, I guess, let's start there. How did it start, or when did you sort of come up with the idea of Elrond back then, and now MultiversX, and how has it changed, if at all?
Yeah, definitely. So, some context: I think Beniamin, my brother, was the hook for me into the Web3 ecosystem. I have a technical background and have built several startups in Germany. I've lived, I think, the last 10, 12 years in Germany, building all kinds of infrastructure projects for the German states, for startups, for large-scale enterprises and so forth, and was very passionate about a very simple logic of every protocol standard, like from TCP, UDP, and all that kind of stuff, going to BitTorrent and that kind of direction. So I think for me, it was very, very important to try to connect to something that I could relate to and understand very simply. I could draw the protocol design on the whiteboard. So once you take it like that and you just remove, for example, from BitTorrent, the seeder part, you immediately see, oh, that's pretty much the Bitcoin topology of the network.
Ah, aha.
So basically it clicked, it clicked, it clicked.
And then my brother, he was already involved in, I think, NEM, one of the core team members at NEM, and I think on the Bitcoin Talk forum and so forth, very, very early days. And then every time he had all these kinds of tasks or challenges to solve, he flew to Germany over the weekend and hooked me again. We stayed like one weekend first, and then more weekends and so forth, until actually we got the tasks needed for other protocols. And then after a while he convinced me to move full-time into Web3.
We also had a fund back then where we actually ended up investing in Cosmos as well, I think in private, in Polkadot, Zilliqa, pretty much everything that was infrastructure. Back then, we were one of the first backers. And, yeah, while researching pretty much all those new protocols and working with them, trying to contribute to their ecosystems, actually, we learned a lot of the stuff.
And that was maybe also one of the triggers, where back in the days, we looked at the performance or capabilities, the throughput, of all those protocols that were coming up from 2016 and so forth. And we just had the thought experiment: okay, if we're going to put like 8 billion people into any of these kinds of architectures, would it stand? Would it still stay alive? Would we still reach consensus or have any kind of meaningful performance such that it could gain wide adoption? And basically we kind of reached the conclusion that the state problem, or the state size, of any of those blockchains will become pretty much the main killer, and killer not as we use it in Web3, the good kind of killer; it will literally kill any kind of performance that those architectures could reach. And maybe the funny thing is that our intuition was, okay, some sort of parallelization should happen, with some kind of sharding, and proof of stake was intuitively the design that we should approach.
And we went actually to several of those ecosystem projects that were already working on very complex problems and pitched them that they should do sharding. And everybody said, no, it's not doable. So it's funny enough that everybody said, no, you cannot do that. It's too hard, too complicated. Why would you do that? And here we are, I think, after six years, seven years, with everything live, and Ethereum still, I think, arguing that some of the stuff is literally hardcore. I cannot not agree with that. There were many, many sleepless nights and many, many attempts to build this kind of stuff. But yeah, I think this is very briefly the background and also how we got to at least have the thoughts in the direction of the Elrond architecture, the initial architecture.
Yeah, that's super interesting. I think, yeah, I guess there was a phase where the Ethereum scaling roadmap was also sharding-based, and then it sort of shifted back to this rollup architecture now, and we're basically still there, right? And I think in the wider space, some people, depending on which pocket you are in, maybe even forgot that sharding existed. But then on the other hand, we have projects like you, and maybe, I guess, the other often-mentioned one, Near, that have actually implemented sharding. So, yeah, super curious to hear from you more about how that actually works and how you set it up, and, yeah, basically the problems that it solved. So maybe we can dive a little bit deeper in there, like how you actually do the sharding. So maybe we start just from, like, sort of the validator set that you have and how they are sort of set up.
More than happy. So maybe just some context. Funny enough that you mentioned the sharding part, or the sharding from Ethereum and rollups. We actually worked with Prysmatic Labs in 2017 on the Ethereum sharding model. So I just had a discussion at one of the conferences, explaining again how the architecture of MultiversX, or Elrond, looked in production. It was very, very, very similar to what Ethereum 2.0 looked like, especially at least the staking part, with the queue, with all those modules. Actually, Elrond and MultiversX brought both of them to production. But also another funny part, I would say: Sunny was, I guess, one of the first guys that did a peer review of the paper in 2017. And then also Vasiliy from Lido has been grilling me about the randomness source of the chain in 2016, 17, I think, or 18, something like that, a lot, and eventually contributed to the architecture as well. So it's very, very nice actually to see that all those builders have contributed and always contribute. If there's something that builders have in common, they all contribute to the best things.
Now, going back to the architecture model of MultiversX and going a bit into the technical stuff, I would say the first step would be to walk through the thought process of how I look at pretty much any blockchain architecture. So if we take, for example, Ethereum as a supercomputer: a supercomputer is still a computer in the end, and it has three major components. So you have kind of CPU, RAM, disk, where the CPU is consensus. You throw a lot of requests in, it works with a very fast memory, which is the RAM, where you basically iterate over or mutate those values once you reach consensus, and you write to the disk how you got there. Right? So this is, I would say, blockchain 101 explained in 30 seconds. It's also a supercomputer, just distributed over the internet, and that's kind of it.
But the next point would be: if we look at general computer architecture, how we scaled up the system, we did not end up with a CPU with a single thread that has 100 gigahertz, but rather we have a bunch of threads that parallelize the execution. Also, all of those threads work with a very fast memory, which is the RAM, where, even though you physically have the RAM as one piece, actually each of the threads has a pre-allocated subset of memory, right? Like, each of the threads, in order to work, in order to be able to process, actually reserves a set of memory of the RAM. And then of course, once they have resolved the request, they write to the disk how we got there. Now, this is the general computer. This is also, if we look at Elrond or MultiversX, how it looks or how it works. Actually, you have the beacon chain, similar to Ethereum 2.0, and then you have a bunch of execution shards. If you remember the initial Ethereum sharding architecture, it basically pushed out the state to the execution shards, where the state transition will be maintained on the execution shards. So basically you have this kind of beacon chain, which is notarizing all those blocks being produced at the sub-shard, or execution-shard, level, and that basically gets us to the entire sharding model.
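The split described here, execution shards producing blocks whose headers the beacon chain merely notarizes, can be sketched as a toy model (illustrative Python, not MultiversX code; all names are made up):

```python
# Toy sketch: execution shards produce block headers, and the beacon/meta
# chain notarizes only those headers. It never holds the shards' own state.
import hashlib
from dataclasses import dataclass, field

@dataclass(frozen=True)
class ShardHeader:
    shard_id: int
    nonce: int        # block height within the shard
    state_root: str   # root hash of that shard's account state

    def digest(self) -> str:
        raw = f"{self.shard_id}:{self.nonce}:{self.state_root}"
        return hashlib.sha256(raw.encode()).hexdigest()

@dataclass
class MetaChain:
    # (shard_id, nonce) -> header digest; only a fingerprint, not the state
    notarized: dict = field(default_factory=dict)

    def notarize(self, header: ShardHeader) -> None:
        self.notarized[(header.shard_id, header.nonce)] = header.digest()

meta = MetaChain()
for shard_id in range(3):  # e.g. three execution shards producing block 1
    meta.notarize(ShardHeader(shard_id, nonce=1, state_root=f"root-{shard_id}"))

print(len(meta.notarized))  # 3: one notarized header per shard
```

The design point is that the meta chain's storage grows with the number of headers, not with the shards' combined account state.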
Maybe before going even more into specifics and losing pretty much a bunch of the guys on what is being notarized, what headers and so forth, I would just take a step back and maybe define a bit the problem of sharding, right? I would say one of the points here is that there are three kinds of sharding. There's, I would say, transaction sharding, where we had architectures similar to Zilliqa. Zilliqa, for example, was one of the first ones that proposed that kind of model, which allowed any kind of state-transition or move-balance parallelization within the same chain, within the same binary, right? So that's cool. However, as soon as the chain hit one transaction, or one smart contract transaction, for example, which would have iterated over multiple accounts, there would have been a memory lock on the entire state, and that would not allow any kind of parallelization at this point.
So the next point would be: there's the network sharding. Like, if we look, for example, at the way the architecture is built in our case, we have a total of 3,200 validator seats, and all those seats are allocated into four chunks, or four sub-shards, each of them maintaining 800. In this case, the beacon chain has 800 validators, shard 0 has 800, shard 1 and shard 2 each have 800. Basically, at the network-topology level, there's, on top of libp2p, an authentication layer. Like, once you go, and again, this might be a bit technical for people, basically, once you connect to the libp2p network, you need to have some sort of an ID, like a public-private key infrastructure. Now, on top of that, what you can do, actually, is sign the messages with the private key of the validator, which will tell the other counterparty, which is receiving the message, what kind of peer you are. So are you coming from the same shard? Are you coming from a different shard? And basically, there's a specific optimizer set, which will tell each of the validators: hey, I have a maximum, for example. Like, again, just defining the problem: network traffic is perhaps the most expensive resource we have on the internet right now, and that's one of the problems that we need to optimize. So assuming that you are in a parallelized system where each of the validators maintains only a subset of the network, you also need to pass a lot of those messages and kind of find the optimal route to connect those peers to each other. Now, because of this, first you have, as I said, the authentication method, or the signing method with the validator key, on top of each of the messages. Then on the other side, assuming that I'm synchronized with the network, I can tell, for the public key that has signed a message, where it comes from. Based on my knowledge of the network configuration, I can tell, okay, this public key should actually be validating, or it should be a validator, in shard 2 or in the meta chain. And I can assign this public key, or this network connection, to a routing protocol where I keep a certain number of connections for an optimal broadcasting method, or propagation protocol, in order to always have high availability, high connectivity and low latency between validators intra-shard, and still maintain some sort of cross-shard connectivity with all other shards, such that they will never become a lonely island.
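The shard-aware peer routing just described can be sketched roughly like this: once a synchronized node knows which shard each validator key belongs to, it can keep all intra-shard peers for low latency and only a small capped set of cross-shard peers to save bandwidth. A toy illustration (the function, cap, and key names are invented for the sketch; this is not the MultiversX networking stack):

```python
# Toy sketch of shard-aware peer routing: classify each peer by the shard of
# the validator key that signed its messages, keep every intra-shard peer,
# and keep only a capped number of cross-shard links.

def route_peers(my_shard, peer_shards, max_cross=2):
    """peer_shards maps a peer's validator public key to its shard id,
    known from the network configuration once the node is synchronized."""
    intra, cross = [], []
    for pubkey, shard in peer_shards.items():
        if shard == my_shard:
            intra.append(pubkey)          # keep every intra-shard peer
        elif len(cross) < max_cross:
            cross.append(pubkey)          # keep only a few cross-shard links
    return intra, cross

peers = {"A": 0, "B": 0, "C": 1, "D": 2, "E": 1, "F": 2}
intra, cross = route_peers(my_shard=0, peer_shards=peers)
print(intra)  # ['A', 'B']
print(cross)  # ['C', 'D'] -- capped, so 'E' and 'F' are not kept
```

The cap models the point about network traffic being the most expensive resource: cross-shard links exist so no shard becomes a lonely island, but they are deliberately few.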
So let me break here, and maybe I'll let you ask some questions. I know there's a lot of stuff, like there are very specific layers, but I'm more than happy to explain everything.
No, sounds great. That's super, super interesting. So basically, first of all, maybe one question: the meta chain, which is kind of the beacon chain, I guess, in your system. It's not that every validator actually validates that as well, but rather it's also just a normal shard in some sense?
No, no.
So basically, first, there's a shared security model. The entire pool of 3,200 validator seats is randomly allocated among all the shards. So there's no special configuration, no special preferences for the beacon chain. You just basically get shuffled every epoch and allocated to a shard. The only difference in the current configuration is that the consensus size, or the consensus participation, for every round is 400. Like, I could maybe just define a couple more things.
Let's do.
Right.
So basically, if we go through the chronology part first, right? Let's define the metric of accounting, or measuring time, inside the blockchain. So we have epochs, which are equal to 24 hours. And then there are rounds; the current round time is equal to six seconds, which will get improved with the next protocol updates to, I think, three seconds, two seconds, one second, and hopefully sub-second finality before the end of the year. So that's chronology. Then there are validators, which have two different states: active and waiting. The reason for that is basically that each epoch, 400 validators are elected to validate each of the shards, including the beacon chain; so 400 are elected and the other 400 are in the waiting state. Why? Because every epoch, a third of the validators are randomly sampled to be reshuffled across the shards, such that they will never be able to collude to take over a shard. So by doing that, basically, we also have built-in protocol time for those validators to synchronize the new state. Assuming, for example, that I'm relocated to a new shard, then I have, built into the protocol, a guaranteed time frame for my node. In practice, the node will destroy the entire database and will just go and synchronize the trie snapshots from the current epoch, and then build on top of that the current state of the chain, assuming that at the next epoch change I am eligible to become a validator, right? So maybe that's, very roughly, how the consensus or chronology works, and it's also a bit tied to the sharding part.
Right. Yeah, that's super interesting.
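The per-epoch reshuffle described above, where a random third of each shard's validators is sampled and redistributed so no fixed group can camp on one shard, can be sketched as a toy model (sizes are purely illustrative, not the real 400-active / 400-waiting configuration):

```python
# Toy sketch of per-epoch validator reshuffling: a random third of each
# shard's validators is pooled, then dealt back out across the shards.
import random

def reshuffle(assignment, seed=None):
    """assignment: shard id -> list of validator ids (mutated in place)."""
    rng = random.Random(seed)
    shard_ids = list(assignment)
    pool = []
    for sid in shard_ids:
        k = len(assignment[sid]) // 3  # a third of this shard's validators
        moved = rng.sample(assignment[sid], k)
        assignment[sid] = [v for v in assignment[sid] if v not in moved]
        pool.extend(moved)
    rng.shuffle(pool)
    for i, v in enumerate(pool):       # deal the pool back out round-robin
        assignment[shard_ids[i % len(shard_ids)]].append(v)
    return assignment

shards = {s: [f"v{s}-{i}" for i in range(9)] for s in range(3)}
reshuffle(shards, seed=7)
print([len(v) for v in shards.values()])  # [9, 9, 9]: sizes stay balanced
```

In the real protocol, the reshuffled validators land in a waiting state first, which gives each node the guaranteed window mentioned above to synchronize its new shard's state before validating.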
How long does it actually take right now?
Because I guess you need to be able to do it in 24 hours.
So there are two parts. Basically, first, there are trie snapshots. We implemented trie pruning, which allows us, at each epoch change, to actually clean up the old state. The trie will always maintain only the latest and greatest, so to say, version of the leaves, or the tree model, which will be transferred into the new snapshot. And that gets us also to the next problem, to state sharding. Just to answer your question, right now it takes, I think, around maybe an hour, two hours. And that could be optimized; we have optimized a lot and so forth.
However, this is the general problem either way. And now, so we kind of touched on the network sharding. We kind of touched on the design and the rationale for why you need network sharding on top of this kind of architecture; you kind of see all these kinds of engineering breakthroughs on top of all those primitives to highly optimize the throughput of the network and the latency. However, the main problem, I think, that we were solving is the state problem. Like, if you take the state, if you put the world population of eight billion people into one single database and try to iterate and do hundreds of thousands of transactions, or thousands of transactions per second, while finding those accounts, you need to iterate over eight billion entries inside that database, find those entries, change the values, and then redo the search for the next one and so forth. That could not scale. And at the same time, try to replicate this database as many times across the globe.
Like, if we take the blockchain trilemma from Vitalik, I think that's the most famous one that everybody knows. The problem says that you cannot achieve scalability, security and decentralization without compromising on any of those, without sharding. Like, sharding is what was designed for that. And actually what it does: if, for example, we take the Ethereum address range as an example, and you would take 0x00...01 as a beginning and then 0x...10 at the end, and you would split it into a sharded model, each of the shards would persist only a subset of those accounts, right? So in practice, the storage, like everything related to those accounts, will be pre-allocated, or allocated, to a specific shard, and that specific shard will maintain the storage of it. Now, if we take a step back at the consensus, or the entire architecture model: if, for example, a transaction is intra-shard, like assuming that the first addresses, 0 to 5, are in the same shard and the next, 5 to 10, are in the next shard, basically each of the shards can process in parallel; they will run consensus, every block, every block, every block. And then, when the transaction is intra-shard, inside the same storage, it will happen atomically. And when not, then it will just reach out over the beacon chain through the notarization method.
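The address-range sharding in this example boils down to a simple mapping from an account address to a shard: a transaction is intra-shard, and thus atomic, only when sender and receiver land in the same shard. A toy sketch (MultiversX derives the shard from bits of the address in a similar spirit, but the modulo rule here is purely illustrative):

```python
# Toy sketch of state sharding by address range. Each shard persists only its
# slice of the account space; cross-shard transfers go via the beacon chain.
NUM_SHARDS = 4

def shard_of(address: str) -> int:
    # Use the last byte of the hex address to pick a shard.
    return int(address[-2:], 16) % NUM_SHARDS

def is_intra_shard(sender: str, receiver: str) -> bool:
    return shard_of(sender) == shard_of(receiver)

a, b = "0x00ab", "0x00af"
print(shard_of(a), shard_of(b))     # 3 3 (0xab % 4 == 0xaf % 4 == 3)
print(is_intra_shard(a, b))         # True: handled atomically inside one shard
print(is_intra_shard(a, "0x0010"))  # False: routed via the beacon chain
```

Because the mapping is deterministic, any node can tell from the transaction alone which shard owns each account, which is what makes the routing decision cheap.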
And that's, I think, the next point where we're going to dive in. But I'm closing here again, letting you maybe ask some questions, if it's clear enough.
Yeah, yeah, yeah. So you still always go through the beacon chain for, like, inter-shard interoperability. And I guess, does it work? I remember from Near actually that, you know, if some transaction is actually on the same shard, they still kind of wait this one epoch and delay it, so there is no benefit of being on the same shard somehow. Is that something you guys do as well?
Yeah, so there are two problems here. So first is the composability problem. I guess that's the most interesting, the most, I think, the only reason why Ethereum has not implemented sharding yet, right? So, like, assuming that you run consensus on each of the shards, and, for example, there are two shards and the beacon chain, every time a transaction needs to...
Assuming that I have an account with an ID ending in one, and then I'm calling a smart contract which is in the same shard, basically everything happens atomically. In the same transaction, I can just compose and so forth. However, if, for example, the account that I'm trying to go to is outside my postal code, just to quote the Near podcast as well, basically, I need to go through the router, right? And the router will kind of ensure, with a guarantee, the message delivery. However, it will not happen in the same block.
But here there are two problems. I would say, first, there's the throughput problem. Like, you could still try to concentrate everything into a single blockchain. However, there are some sorts of improvements that could be done, or like engineering steps, that could reach atomic composability across multiple shards. But that, I would say, is the easy part. Once you have a sharded blockchain, it's easier to go back and have composability again, and then kind of reach consensus across all the shards. Whereby the most crazy part is that nobody solved, until now, or before us, the full state-sharding problem.
Like, the cool part is, for example, now let me break it down to something that I think is also very, very well known to everybody: the travel agency problem. I think it was described in the Ethereum forums back then. For example, I'm an end user and I go to a travel agency, and, regardless of where the accounts are distributed inside this architecture, I want, with the same ticket, if I buy this holiday package, to get a train ticket, hotel, a car rental, and a plane. And nothing happens synchronously. Now, this is the cool part. Like, nobody says that everything, even in computer science problems, happens synchronously.
Even the way we communicate right now is not synchronous. It has an asynchronous model which guarantees the transport of the messages, and then we build, on top of the transported messages, a bunch of algorithms, a bunch of software, which can handle this kind of modular approach.
And now, if we look at the way we approach it, what MultiversX or Elrond or sharding promises is,
it guarantees a way of forwarding messages from A to B,
but with some specific properties.
Like, for example, if we know that the message will take a while,
you can basically go and lock that memory.
You can define the interfaces, so to say, on top of that,
to work and specifically say,
hey, I want for this ticket, for this smart contract,
that is going to purchase everything.
I want them to await and store this kind of information.
And asynchronously, like, when I'm calling this contract,
the contract will send a receipt or will send 10 other transactions
that might take even two seconds, five, seven, or two blocks, three blocks, whatever.
And whenever they will reach their destination,
I'm awaiting a message back.
And I know the protocol guarantees a message back to me.
And it will tell me what to do, if it is successful or not.
So this is the beauty of asynchronous execution: you can build primitives, and on top of those primitives, on top of this messaging layer, you can do asynchronous calls and have composability while also not compromising on the throughput of the network. So that's kind of it.
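The asynchronous call-and-callback pattern described here, fire a message toward another shard, and let a registered callback handle the guaranteed reply that arrives some blocks later, can be sketched as a toy model (the queue and handler names are invented for the sketch):

```python
# Toy sketch of asynchronous cross-shard calls with callbacks: the protocol
# guarantees a reply eventually arrives, and the callback decides what to do
# on success or failure, travel-agency style.
from collections import deque

pending = deque()  # stand-in for the cross-shard message queue

def async_call(destination, payload, callback):
    # Delivery happens in a later block, not the current one.
    pending.append((destination, payload, callback))

def deliver_all(handlers):
    # Simulate later blocks: each message executes on its destination shard
    # and the guaranteed reply is fed back into the caller's callback.
    while pending:
        dest, payload, callback = pending.popleft()
        callback(handlers[dest](payload))

results = []
async_call("hotel",  {"nights": 3}, lambda ok: results.append(("hotel", ok)))
async_call("flight", {"seats": 1},  lambda ok: results.append(("flight", ok)))
deliver_all({"hotel": lambda p: True, "flight": lambda p: p["seats"] <= 4})
print(results)  # [('hotel', True), ('flight', True)]
```

The point is that the caller never blocks the whole chain: it records what it is awaiting, and composability comes from the guaranteed replies rather than from executing everything in one block.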
And maybe the even crazier part: knowing, for example, the network sharding part that I was mentioning before, and also that the network itself is kind of synchronized over all these kinds of primitives, cryptographic primitives and consensus state. Basically, if I know something in advance, like looking maybe a few seconds into the future, how could such a protocol still achieve what everybody else has, but what the other sharded ones could never achieve? Just like that.
Basically, if I know which transaction I need to execute synchronously, and I know the destination of it, basically I could easily target those validator sets, or the protocol could talk to those validator sets and make them achieve consensus, for atomicity, for one specific upcoming round, where they will execute one specific transaction from A to Z across all the accounts inside the chain, right? But at the same time, you can still persist this kind of asynchronous model for everything else, where you don't need to guarantee everything in the same block. The same way the internet doesn't drop everything to process one single request at a time. I hope it makes sense.
I think it's...
Yeah, yeah, yeah. I think it makes sense. You're saying that you can... So there is a way to guarantee this somehow. And then basically, in that specific scenario, more or less the shards act like a single shard for this string of transactions, or whatever it is.
Exactly. But also while not unifying the state. So it can still work in the same way, since the model works asynchronously already. Basically, we would just speed up the message passing from one to another, and we would still do consensus on that kind of messaging. But instead of doing it asynchronously, if there are enough economic incentives, for example, for that kind of processing, it can be prioritized, scheduled, and then executed as one single transaction.
That could be a swap, for example. I mean, it's just a pure engineering problem. It's not something that cannot be solved. Like, if we have solved and built all this kind of stuff, then I do believe that that's just a couple of sleepless nights, I would say. It took a few years.
Yeah, okay, that's interesting.
But right now, can you do this prioritization? I guess it leads a bit into what the prioritized transactions in blockchains are in general, sort of MEV-related, like arbitrage or whatnot. Is this something you can already do on MultiversX?
I think that clears up a bit of the logic of the sharding. Then on top of that: each of the shards, by the way, has its own VM, so it has its own execution environment and so forth. And on top of that, we built some primitives, which are called timelocks and promises.
You can define, at the interface of each of the smart contracts, what kind of properties you want. Like defining: do you want TCP or UDP? Do you want, for example, that when I'm calling this smart contract, and this smart contract calls another smart contract, the user accepts whatever it gets back, or do you want to await all the results of all the smart contracts? And if that passes, I'm confirming and sealing the result, or I can go back to those contracts and say, hey, I don't want them. So you can define all those specific properties on each of the interfaces, however you want to do that. What we don't have is the synchronous consensus among all the shards. We don't have that right now because it was not a problem yet. It is kind of a challenge for us, because we have only had blockchains that are single-threaded, let me put it this way. We don't even have the mindset to think about applications that are multi-threaded. Like, what if I could design applications that, instead of having just one single thread, one single smart contract, for example for wrapping tokens or whatever else you're thinking of, can actually have these kinds of primitives available in all the shards? And then I think that's also the challenge with rollups: you only have some primitives available at one specific layer, and then you need to broadcast them, reproduce them, to all the other layers.
I also have some interesting notes on that end, actually. But yeah, I hope that answers your question.
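The "TCP or UDP" interface properties described here (fire-and-forget versus awaiting all callee results and rejecting partial ones) can be sketched roughly like this; the names and shapes are hypothetical, not the actual MultiversX promises API:

```python
# Sketch of per-interface call properties: a cross-contract call is either
# fire-and-forget (UDP-like: keep whatever comes back) or await-all
# (TCP-like: seal the result only if every callee succeeds).
# Hypothetical names, not the MultiversX promises API.

FIRE_AND_FORGET = "fire_and_forget"   # accept whatever results arrive
AWAIT_ALL = "await_all"               # reject everything on any failure

def call_contracts(callees, mode):
    """callees: zero-argument callables returning (ok, result) tuples."""
    results = []
    for fn in callees:
        ok, res = fn()
        if not ok and mode == AWAIT_ALL:
            return None  # go back and say "I don't want them": drop all results
        if ok:
            results.append(res)
    return results
```

With `AWAIT_ALL`, one failing callee discards the whole batch; with `FIRE_AND_FORGET`, the caller simply keeps the successful results.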
Yeah, yeah, that makes sense. And you mentioned that every shard has its own VM. Is it all the same VM, basically?
So we have three kinds of VMs. First, on top of the blockchain accounts, we built a routing system where you can have multiple VMs. The cool part is, in the data field, for example, when you're calling a smart contract, there's a kind of switch where you can say: hey, for this transaction, or for the account that I'm talking to, I want it to call the bytecode with a specific VM. So through this switch I can tell what kind of VM I want. It's storage at the end of the day. If we decouple the execution from the storage, then I can just say: go to that specific storage, take that bytecode, and map it into a specific VM.
In our case, we have a WASM VM; we built on top of Wasmer. It's called the Space VM. And then there's the whole framework on top of that, which abstracts the entire complexity of sharding, called the SpaceCraft SDK. That's one of the frameworks, and it's the most used one for anything smart contract related.
And then, on top of that, we also have a Go VM, like a system VM, which we use, for example, for staking primitives: system-specific applications, or logic where you need very efficient and very fast computation. That's kind of the reason why we do that.
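The routing described above (a switch in the call data, bytecode fetched from storage, execution dispatched to the selected VM) might look roughly like this; everything here is a hypothetical stand-in, not the actual node code:

```python
# Sketch of routing a call to one of several VMs: look up the callee account's
# bytecode and VM switch in storage, then dispatch the bytecode to that VM.
# The VMs are stand-in functions; all names are hypothetical.

VMS = {
    "wasm": lambda code, arg: ("wasm", code, arg),     # pretend WASM executor
    "system": lambda code, arg: ("system", code, arg), # pretend Go system VM
}

STORAGE = {  # account address -> (vm_switch, bytecode)
    "alice": ("wasm", b"\x00asm-bytecode"),
    "staking": ("system", b"builtin-staking-logic"),
}

def route_call(address, arg):
    # execution is decoupled from storage: fetch bytecode, then pick the VM
    vm_switch, bytecode = STORAGE[address]
    vm = VMS[vm_switch]
    return vm(bytecode, arg)
```

Adding a new execution environment then only means registering one more entry in the VM table, which mirrors the "port different VMs to the ecosystem" idea mentioned next.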
The funny part, actually, because of the new SDK, like the sovereign chains (I think we're going to touch on that as well), is that we're incentivizing people. Because of the entire modularity that we built on top, we're actually putting up grants for people who would take different VMs and build and port them to the ecosystem.
So for example, there's one of the projects that I'm very excited about, which does Ethereum EVM compatibility. So imagine that, in the future, the main chain could, in one transaction, call one specific smart contract, and then from that specific contract you could take the output and inject it into the next VM. For example, you call some WASM bytecode, and then you call Solidity code, and then with that result you reach composability across multiple VMs: Move VM, Solana VM, and so forth, right?
So that's kind of the angle,
the direction for the VM execution side.
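The cross-VM composability idea (take the output of one VM call and inject it into the next, all in one transaction) reduces to threading a value through a chain of executors. A toy sketch, with stand-in functions instead of real VMs:

```python
# Sketch of cross-VM composability inside a single transaction: the output of
# each VM call is injected as the input of the next. The "VMs" are stand-in
# functions; real ones would execute WASM or EVM bytecode. Hypothetical names.

def wasm_vm(payload):
    return payload * 2        # pretend WASM contract: doubles its input

def evm_vm(payload):
    return payload + 1        # pretend Solidity contract: increments

def compose(vms, initial):
    """Thread one transaction's value through a chain of VM calls."""
    value = initial
    for vm in vms:
        value = vm(value)     # inject the previous output into the next VM
    return value
```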
And then on the sharding part,
there's a lot of consensus optimization.
There's a lot of, like,
even block time is, I think, from my point of view,
a bit too slow.
But six years ago, it was reasonable, right?
Yeah, always something to do.
Okay, super interesting.
I guess, yeah, especially like,
heading into this sovereign chain's realm.
I guess we have seen more and more
this like application-specific paradigm,
I guess, play out, right?
I think that's where it started in Cosmos
and sort of like everyone built their own chain.
Everything is a chain.
Your fridge is a chain.
And I guess Cosmos has this approach.
Okay, you have the sovereign chains
or like the app chains and then IBC.
In your case, that's sort of the sharding. But now you're also bringing in, as I understand it, sort of a model to have your own chain within this system. How does it work? Is it an additional shard? Is it just something that lives on a shard? Or, yeah, can you explain a bit more what sovereign chains are?
More than happy.
So maybe first I would just define the problem. We've been very good at running multiple binaries, so multiple chains in a parallel architecture, orchestrating them into an invisible layer, I would say, through the beacon chain, such that you don't care where the accounts actually live in the architecture. By doing that, we kind of learned and said: we're pretty good at running these kinds of specialized, almost parallelized chains, so why don't we just repack and rebuild the codebase such that it turns into an SDK? Because if you look at internet architecture, you don't just have the public cloud, where you're sharing resources with everybody else. What if you could deploy your own private cloud, but still maintain all your own sovereignty?
The cool part is, first, you get all the primitives, everything that we built with Elrond and MultiversX over the last couple of years, out of the box. But it's also pretty modular: you can define your own consensus size, you can run your own consensus. You can, for example, decide at which point in time you want to post transactions to another chain. You're not tied to MultiversX. You can basically decide: hey, I want to run a consensus of, for example, 400 validators, I want to have PoS, I want to have a block time of one second. And every time there's an interaction, I can write into the Go binary; I can say, every time there's an interaction with this specific address, go and post this transaction to Ethereum, or whatever. You can just compose these kinds of new features on top. So that's kind of the narrative.
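The configurability Lucian lists (own consensus size, own block time, a hook that fires when a specific address is touched and, say, posts to Ethereum) can be sketched as a chain config object; all names here are hypothetical, not the sovereign-chain SDK:

```python
# Sketch of a sovereign-chain configuration: consensus size, block time, and
# user-defined hooks keyed by address, fired when a transaction touches that
# address (e.g. "post this transaction to Ethereum"). Hypothetical names.

from dataclasses import dataclass, field

@dataclass
class SovereignChainConfig:
    consensus_size: int = 400               # e.g. 400 PoS validators
    block_time_secs: float = 1.0            # e.g. one-second blocks
    hooks: dict = field(default_factory=dict)  # address -> callback

    def on_interaction(self, address, callback):
        """Register a hook for interactions with a specific address."""
        self.hooks[address] = callback

    def process_tx(self, tx):
        # fire the user-defined hook when the watched address is involved
        if tx["to"] in self.hooks:
            self.hooks[tx["to"]](tx)

cfg = SovereignChainConfig(consensus_size=400, block_time_secs=1.0)
posted = []
cfg.on_interaction("0xbridge", lambda tx: posted.append(("ethereum", tx["to"])))
cfg.process_tx({"to": "0xbridge"})   # hook fires: would post to Ethereum
cfg.process_tx({"to": "0xother"})    # no hook registered, nothing happens
```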
I do believe, yeah, people, I think Sunny Aggarwal was saying, you're kind of copying the Cosmos stuff. But I mean, that's the beauty of it. If you're being copied, it means you were doing it right. So definitely, I think Cosmos and IBC and so forth were where I learned most of this stuff before starting what I'm doing.
Now, I think we defined a bit what the capabilities of the SDK are. The narrative with it is that it should go and serve other ecosystems better than what the main chain was designed to do. And with that, we're actually providing grants and supporting all these new teams that are building this. Like Polkadot, which had this concept of pallets, for example, where you can just build new modules that you can attach and compose. I think even CosmWasm is a very good example, where one team built something and then it got implemented pretty much everywhere and provided a smart contract execution environment to all the other platforms. So first, besides the privacy or the sovereignty of your own application's needs, you're also fueling a lot more innovation at the entire ecosystem level: to connect and also bridge, to have a bridging method to Solana, to Cosmos, to Ethereum and so forth. And not only on the messaging part, but also where you can take the applications that have been built there and deploy them into this kind of new setting, right? So this is kind of the approach. There are three directions, I believe. One is the application-specific logic; of course, that's the best one. dYdX, again, is, I think, the best explanation of what app-specific logic would look like; I loved the podcast with the dYdX founder here, by the way. Then there's the consumer grade, from my point of view, where you have Gelato, AltLayer, and all these other applications, where you have tons of consumer-grade blockchains that will be deployed. And then there's enterprises. And that would be, I think, another podcast, to touch on what the enterprise case involves. But now, going again to something more technical: imagine that, right now, Ethereum is a beacon chain, a metachain, compared to MultiversX. And then you have a bunch of shards, which are the L2s, rollups with sequencers that are posting transactions to the main chain. The cool part, or the easiest way to understand it: I always envisioned that Ethereum would eventually define a standard where all those headers, or blocks, or data being pushed into the blobs on the Ethereum beacon chain would kind of get synchronized across the L2s.
So, in our case, assume that, for example, you have four shards: one is the beacon chain and three are execution shards, and they all produce blocks one after another. The shard header blocks get pushed to the metachain, and the metachain notarizes those headers. The next metachain block is then fetched again by the execution shards, and it tells each of them what the height, or the finality, is on each of the sources of the messages.
Now, the cool part is that you kind of build some sort of a trustless, invisible finality model, synchronized across all the L2s, if you apply the same Elrond or MultiversX model to the Ethereum case.
The only problem is, and I hope that this will be the endgame, or at least, if they drop sharding, this could work with minimal changes. However, the problem is: the longer you wait, and the more chains go and post their data to another beacon chain, the more you lose the ability to synchronize them.
Like, assume that you're a rollup and I'm a rollup. We're both posting our data to the beacon chain, and we're both fetching the next block from Ethereum. Then I can know, from the block from Ethereum, whether your data was included, without talking to you. This is exactly how the metachain works in the Elrond, the MultiversX, model, assuming that we're all going to talk to that specific beacon chain.
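The notarization flow just described (shards push headers to the metachain, the metachain notarizes them, and every shard learns the finality of every other shard from the metachain's next block, without peer-to-peer coordination) can be sketched like this; hypothetical names, not the actual protocol code:

```python
# Sketch of metachain header notarization: shards submit block headers; the
# metachain notarizes them; its next block exposes one shared finality view
# that every execution shard fetches. Hypothetical names, simplified model.

class Metachain:
    def __init__(self):
        self.notarized = {}   # shard_id -> latest notarized header height

    def receive_header(self, shard_id, height):
        # notarize only monotonically increasing heights per shard
        if height > self.notarized.get(shard_id, -1):
            self.notarized[shard_id] = height

    def next_block(self):
        """What execution shards fetch: finality info for every source."""
        return dict(self.notarized)

meta = Metachain()
meta.receive_header("shard-0", 10)
meta.receive_header("shard-1", 9)
meta.receive_header("shard-0", 8)   # stale header, ignored
view = meta.next_block()            # every shard sees this same view
```

Because each shard reads the same notarized view from the metachain, a shard can trust that another shard's data was included without ever talking to it directly, which is the property described above.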
Now, the next problem, and this comes back to the sovereign chains: assume that we're going to have 10,000 shards or 10,000 rollups in the space. Where would this data be processed? Even if it comes down to only a block header, to a minimal kind of information, you still have the communication overhead of integrating and writing this minimal information into a notarization chain that gives us the synchronicity across the executions, in order to execute stuff directly from one to another, and not route the data per se through the beacon chain.
Now, that's the reason I thought: okay, what if we were Ethereum? Assuming that we will get to the same problem, and it's just a matter of time before each of the blockchains gets to the same problems, then what would happen? In this case, if you apply the same principle as Ethereum, that means that in order for my transaction to get forwarded to your chain, I need to compete, economically speaking, with all the other L2s so that my transaction fits into the very limited block space of one beacon chain. Whereas if the shards are connected to the execution shards, and there's a proper messaging system through the execution shards, then the execution shards will have tons of capacity to process tens of thousands of shards. So it's just the scale of things, raised to the power of ten, right? Otherwise you get to the point where everything just hits a limitation. I hope it's not too abstract, and that it gives a bit of the design rationale.
Yeah, yeah, no, I think it makes sense.
Is it then, I guess, also a problem, especially since we have this sort of L3 thing now as well, where someone posts first to the L2? Then you have another tree that is not synchronized with the beacon chain, and that might be it.
One hundred percent. Exactly, that's the main problem. It would require that L3 to post its transactions, or its state, or its receipts, to the L2, and then the L2 to post to the L1, in order for that message to get through. That won't work, assuming, as I said, that the L1 stays the same. And I think we have a lot of data availability models and other stuff that, again, kind of fragments the state. If we assume that the model defaults to a sequencer, where only someone keeps the state, and in order to execute or trust anything you need to fetch that state, I think that adds some sort of challenges, again, some sort of challenges in how you would talk to each other. Then we'll fall back again to bridges. Or, I think IBC is the good example here; I like that things like IBC exist, and they might solve the entire messaging representation across all those layers.
Right, right, makes sense. Yeah, okay. So we went pretty deep here. Hopefully we didn't lose everyone along the way, but I think it's super interesting to hear this from your experience, how you went through it and how far everything has come. So maybe we can,
for the last few minutes
switch a bit to, you know, the broader MultiversX story. I guess you're not just building this tech. I mean, you predominantly are deep in that, but there's obviously stuff being built on top of it. And I think what's interesting in your case is that it's a very integrated ecosystem, with many, many parts handled in some way by your team, or by the broader MultiversX ecosystem itself, versus some more fragmented thing, which is sort of what Ethereum is, I guess. So maybe the question is: how do you think about this ecosystem building and integrating things, and what's the broader vision there? It seems like the sovereign chains are also trying to bring other people in a bit more. So yeah, happy to hear how you're thinking about that, and about the future of the MultiversX ecosystem.
Definitely, definitely.
So, 100% agreed on the part that we're trying to get even more ambitious people involved. And also the smart contract framework: it has proven, at least to some threshold, that it can work. The mainnet has pretty much all kinds of primitives, from concentrated liquidity, stable swaps, all kinds of AMM pools, to liquid staking, multiple liquid staking protocols, and all that. So that's that. But now, going to an even deeper level: if you want to persist and build a protocol that will be developed for decades to come, you cannot train people only at the smart contract level and then assume that they will contribute to the protocol, right? So this is kind of a double-edged sword. I really hope that it will work.
But maybe going back to your question: yeah, we built many teams. Now they're becoming more like spin-offs in their own space. One of the products is xPortal, one of the spin-offs from MultiversX. What's very interesting is our approach. I think everybody remembers: build the platform and the developers will follow; build for the developers and the users will follow. That's kind of a lie we heard for too many cycles. And we believed it, naively enough: build the protocol and then wait for the users, right? And the users almost never came. So rather than wait and wait for some results, we said: okay, we're kind of hardcore engineers, let's try to do something about it. And that's how xPortal actually started. On the one hand, we had this bottom-up approach with the protocol and the sharding, scaling for the masses. But if you want to scale and reach mass adoption, then, looking at the internet, there were two moments. There was the fiber moment, where you get distribution, which is what the sharding does. And then there was the internet browser moment, where you abstract away the entire complexity, so that not only the geeks could start using the internet. And that's actually what we tried to do with xPortal, where the portal would be your portal to everything. It abstracts the entire complexity. With that product, I think we had 1.5 million users in the first 12 or 24 months after launch. And then, yeah, that allowed us to build, or at least experiment with, some very interesting stuff, like on-chain 2FA, right? I can give you some short background on where we come from: we're the second unicorn of the country. There were a lot of people that put bets on us and believed in us and so forth. And we did not, or we think that we did not, go against other communities and so forth; we built our own community base.
Whenever people trust you and put bets on you, they're taking a lot of risk. And what happened when you onboard one million users with zero experience in crypto is that the hackers were in heaven. You can imagine how much social engineering against those users happened, and how many people lost money in all these phishing attempts and that kind of stuff. So we thought about what could be done such that, in the next bull run, I can sleep well and know that even my parents are safe.
Right. So, indeed, I said: okay, if we take the externally owned accounts approach, the EOA approach, that would mean that every transaction would need to go through VM execution, right? It would need a smart contract to open and verify the signatures. And especially in the WASM environment, there's still a lot of work to do; it could get even better. And you don't want, in a chain of, let's say, ten smart contract calls for liquidating an AMM position or whatever, to just add another multisig on top, or ten multisigs on top, with cryptographic experiments; it would just get worse. So we said: let's see what can be done. And we actually added a secondary field at the protocol level, which checks a guardian, a so-called guardian signature. Basically, xPortal comes with a black-box mnemonic, which you cannot read, only restore through encrypted backup, such that it is foolproof.
Then you basically register the application, or the device, at the protocol level, to co-sign your transactions. So even if I gave you, right now, the mnemonic of a MultiversX account: there was a page called Eagle Heist, where something like six million people watched, or saw, the post with a seed phrase published, and they could not steal the money. Why? Because, the same way that even if you have my bank account login, you still require the 2FA, the second signature, right? The second device that will authenticate you.
And we kind of took the same principles as a bank account. Assume that I have one phone, registered here, and I have a second one where I'm importing just the seed phrase and trying to initiate this kind of transaction. The first thing I'll notice is that I cannot move the funds, because they're locked. So let me try to register a new device, the same way you would call the bank and say: hey, I just want a new token; I lost that one, I dropped it, I need a new one. They say: okay, sure, I'll mail you one, right? So they first give you some sort of a time buffer, so that there isn't a gun held to your head while you're making that call. It's kind of common sense. And then the process would be: I try to register, I send the transaction on chain, and it goes through; I'm allowed to do that. But the transaction has a bonding time of 20 days, right? So in that case, the second device, assuming that I still have access to my current device, will get a notification, similar to, for example, Facebook or Instagram: is it you trying to log in, yes or no? It asks me: do you want to give this new device the rights to sign, to move the balance, to sign transactions? Yes or no? If I say no, then that account can just keep retrying to re-register, but it cannot move any funds. And the cool part: if I say yes, then it instantly transfers the security.
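The guardian scheme described above (every transfer needs a co-signature; a new guardian registered with only the seed phrase activates only after the 20-day bonding period, unless the current device approves it instantly) can be sketched roughly as follows; this is a simplified, hypothetical model, not the MultiversX implementation:

```python
# Sketch of the on-chain 2FA ("guardian") flow: transfers require the active
# guardian's co-signature; a new guardian either takes over instantly when the
# current device approves, or only after a bonding period with no veto.
# Simplified hypothetical model, not MultiversX code.

BONDING_PERIOD_DAYS = 20

class GuardedAccount:
    def __init__(self, guardian):
        self.guardian = guardian
        self.pending = None   # (new_guardian, activation_day)

    def transfer(self, cosigner):
        # funds move only with the active guardian's co-signature
        return cosigner == self.guardian

    def register_guardian(self, new_guardian, today, approved=False):
        if approved:          # current device answered "yes": instant switch
            self.guardian = new_guardian
            self.pending = None
        else:                 # on-chain registration, subject to bonding
            self.pending = (new_guardian, today + BONDING_PERIOD_DAYS)

    def tick(self, today):
        # after the bonding period with no veto, the new guardian activates
        if self.pending and today >= self.pending[1]:
            self.guardian = self.pending[0]
            self.pending = None

acct = GuardedAccount(guardian="phone-A")
acct.register_guardian("phone-B", today=0)      # attacker with the seed phrase
assert not acct.transfer(cosigner="phone-B")    # still cannot move funds
acct.tick(today=19)                             # bonding not yet over
```

An attacker holding only the seed phrase must wait out the full bonding period, during which the legitimate device gets notified and can veto; a user who lost their old device recovers access after the same 20 days.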
Now, just going back: on the one hand, you have this super fast infrastructure, crazy and complex. And on the other hand, you get foolproof versions of applications, where I know that my parents are safe on the internet and can do these kinds of transactions.
Maybe, yeah, one of the very interesting parts: we're, I think, one of the only L1s coming out of Europe that acquired and holds an e-money license, a kind of neobank license, to operate IBAN accounts, and that has issued, in partnership with MasterCard, a debit card attached to those IBAN accounts. Now, packing that together with the protocol, with the L1, and with a very cool user interface, you can basically just easily spend and use all this super hardcore tech and cool stuff that is decentralized and open and borderless and so forth, but at the same time just go outside and buy your beer, right? So, what if? Our question was: what if we can do that? And it actually ended up that we went live with all this kind of crazy stuff.
Yeah, that's a really cool mechanism.
Thanks for diving into that.
So basically, during these 20 days, if someone has the mnemonic, they can try to register a new device, but you can still block it. And if you lost everything, you wait the 20 days and it will automatically pass over, just in case. Okay. So you do need to interact with the account at least somewhat frequently, if you lost the mnemonic, let's say.
Yeah, yeah, yeah.
So it's not perfect, but the idea is, for example: assuming that my phone just crashed and I don't have access to it and so forth, then I still need to be able to recover the funds. And assuming that I do have access, I'm still getting notifications all the time, or I can just listen to what the account does and so forth. I don't think the ideal outcome would be to lose access to the funds forever.
Yeah, exactly. There's a sort of, okay, I think you have to die one death somewhere a little bit. And I think this is a nice trade-off that you're exploring there. So yeah, that's quite impressive. I really like it.
Cool.
I mean, yeah, thanks so much. I think we went super deep into everything. I hope people take away a lot about MultiversX, understand how much cool tech you've built, and see what's still to come. Is there any final thing you want to say, anything you want to point people towards, make them aware of what's happening, or how they can get involved? If so, please go ahead.
Yeah, definitely.
Definitely.
So I do believe that there's a lot of stuff that is still to be built. As I said, on the sovereign chains, there are the first, I think, couple of chains coming up, lined up with EVM, bringing EVM composability and compatibility. Then we're looking at, and actively talking to, several teams from the Solana ecosystem to build a Solana VM, and then a Move VM. That would also be a very cool part. And then, hopefully, we'll get to a version that just wraps it all and gives a unified execution environment across all those varieties of VMs. So, what if? I'm just leaving it open like that. And I do believe that this kind of risk-taking is a very crazy idea that eventually every ecosystem will adopt.
And yeah, I mean here, big congrats to you, Felix.
I know you had some very, very cool stuff recently announced.
And hopefully we'll work together on that one as well down the road.
Yeah, and there are a couple more things. Maybe also very interesting, and something I have not seen yet: because of those licenses that we're holding, with the IBAN accounts, what if we build a chain, or a chain framework, where banks could just spin up their own infrastructure, right? So we're going to explore that on our own: put a framework together and then work with the central bank, and see if that could be tokenized directly on chain. What if, what if; I'm just throwing out a bunch of stuff. Assuming that the legislation is solved, the licenses are on the table, and we have this kind of toolkit on the table, it's an open question: what can we build? How far can we go? I think it's just a matter of time; the way I'm thinking, whatever you're thinking, I'm going to think about it too. If the ecosystem is progressing well enough, it will get there.
Yeah, yeah, like you mentioned: the consumer grade, the core primitives, and then the institutional, enterprise side. I like that framework as well. So I guess you are definitely involved in all those areas.
Funny enough, I just had an ecosystem call with the founders yesterday. And there's the Institute of Research and Engineering, I think, from the national government of Romania, that is taking part; they have an NFT marketplace running and they're exploring the sovereign chains. And the even crazier part: there's a partnership between the Chinese government and several European governments, where the Chinese side is exploring launching their own NFT marketplace as well, for their national Olympic athletes, and exploring that. Like, how crazy it has gotten: from talking about all these crazy cryptographic protocols to governments finally coming closer and exploring this technology, with NFTs.
Yeah. Maybe soon a Chinese meme coin coming.
We're 90% there. We'll see.
All right, yeah, thanks so much. I really enjoyed this. And thanks for doing the first podcast in seven years. So yeah, I hope the listeners enjoy this episode and get in touch with the MultiversX ecosystem. So, thanks a lot.
I appreciate the time. And thanks, Felix, for being a patient guide. I know it might have been a bit tough with all that stuff, but sharding will come eventually to everybody.
Nice. Good final words.
Thank you for joining us on this week's episode.
We release new episodes every week.
You can find and subscribe to the show on iTunes, Spotify, YouTube, SoundCloud,
or wherever you listen to podcasts.
And if you have a Google Home or Alexa device,
you can tell it to listen to the latest episode of the Epicenter podcast.
Go to epicenter.com/subscribe for a full list of places where you can watch and listen. And while you're there,
be sure to sign up for the newsletter, so you get new episodes in your inbox as they're released.
If you want to interact with us, guests or other podcast listeners, you can follow us on
Twitter. And please leave us a review on iTunes. It helps people find the show, and we're always
happy to read them. So thanks so much, and we look forward to being back next week.
