Epicenter - LazyLedger: The Layer-1 Data Availability and Consensus Solution
Episode Date: September 29, 2020

NB: Since the recording of this podcast, LazyLedger changed their name to Celestia.

Celestia is a scalable general-purpose data availability layer for decentralized apps and trust-minimized sidechains. It is a minimal, viable blockchain which does timestamping and block ordering.

Think back to Bitcoin in the early days, before Ethereum. Layer-2 systems were being built on top of Bitcoin and were leveraging Bitcoin's consensus layer. This is what Celestia is doing, although it is purpose-built and scalable for this exact use case. The implementation details are a lot more complex, and the vision is to create a modular, pluggable Layer-1 that does nothing but consensus and data availability. It is designed for people who want to create their own blockchain without building their own consensus layer.

The project is yet to be launched; however, we had Ismail Khoffi, Co-Founder and CTO, and Mustafa Al-Bassam, Co-Founder and CEO, on the show to give us a deep technical overview and vision of Celestia.

Topics covered in this episode:
- Ismail and Mustafa's backgrounds, and how they got into the crypto space
- How Ismail and Mustafa met and created Celestia
- The data availability paper, co-written with Vitalik, and an introduction to Celestia
- The purpose and function of Celestia
- How data availability works, and the advantages and disadvantages of their ledger
- How transaction fees work
- The interoperability aspect
- Honest validators on the block
- Non-interactive proofs on Celestia
- The interaction between Celestia and the Cosmos SDK
- How Celestia will compete with other Layer-1s, in particular Filecoin
- When the ledger will be launched

Episode links:
- Celestia website
- Celestia blog
- GitHub
- Celestia Twitter
- Ismail on Twitter
- Mustafa on Twitter

Sponsors:
- Algorand: To learn more about Algorand and how its unique design makes it easy for developers to build sophisticated applications, visit https://algorand.com/epicenter

This episode is hosted by Brian Fabian Crain & Sunny Aggarwal. Show notes and listening options: epicenter.tv/359
Transcript
This is Epicenter, Episode 359, with guests Ismail Khoffi and Mustafa Al-Bassam.
Hi, I'm Sebastien Couture, and you're listening to Epicenter.
The podcast where we interview crypto founders, builders, and thought leaders.
On this show, we dive deep to learn how things work at a technical level,
and we fly high to understand visionary concepts and long-term trends.
If you like the podcast, the best way to support us is to leave a review on Apple Podcasts.
Now, did you know that leaving an Apple Podcasts review gives you a serotonin boost that permeates throughout the rest of your day? Well, if that sounds good to you, if you want that jolt of happiness, go to epicenter.tv/apple on your Mac, your iPhone, or your iPad, and leave us a five-star Apple review. Today, our guests are Ismail Khoffi and Mustafa Al-Bassam.
And together they are the co-founders of LazyLedger. Now, this is a project that I didn't
know very much about before I heard this interview. And actually, if you go to their website,
there's very little there, and it still feels fairly early. But it's really interesting, because it does something that we haven't really seen before, at least not implemented in this way.
So LazyLedger is sort of a minimal viable blockchain.
It does timestamping and block ordering.
And so the way to think about this, the analogy is to think about Bitcoin in the early days.
Before Ethereum, when folks were building things like Colored Coins and Counterparty, essentially layer two systems on top of Bitcoin that were leveraging Bitcoin's consensus layer. And what they were doing is they were using the OP_RETURN field in Bitcoin to store block hashes for their layer 2 blockchains. LazyLedger kind of does this, only it's purpose-built and scalable for that exact use case.
So obviously the implementation details are a lot more complex, and Sunny and Brian go deep into the technical details during the interview.
But the vision is to create a modular, pluggable layer 1 that does nothing but consensus and data availability. There's no smart contracts or anything like that. And it's great for people who want to just create their own blockchains without consensus. They just dump their blocks on LazyLedger and it does the rest. So you could, for example, build a Cosmos SDK chain or an EVM chain, post your blocks on LazyLedger, and it takes care of the rest. So the reason why
I think this is interesting beyond the technical implementation is that to me it shows a certain
level of maturity of a technology ecosystem when different layers of the stack are starting
to unbundle and uncouple from each other. So we saw this in Web 2, where we now have a very
modular stack all the way down from the infrastructure layer and going all the way up to
the application layer where developers can pick and choose the different components that
they use to build their applications that are best suited for their apps. And we're now
starting to see this in the blockchain technology stack, and LazyLedger is just
one component, it's the consensus component in that broader technology stack.
The other reason why I think this is interesting is its simplicity.
LazyLedger does one thing.
It does consensus-based, timestamped ordering of data.
And beyond crypto finance, there's a whole universe of applications that exist that could
leverage this very thing.
So if you look at the entire enterprise blockchain application space, I would say most of those applications are trying to establish a single source of truth between participants that don't trust each other. And as we've seen,
a blockchain isn't always the best technology for these applications. And so I think having
something like LazyLedger as a publicly available single source of truth has tremendous potential for all kinds of applications outside of crypto finance. If you're thinking of building a crypto finance application, you should definitely check out Algorand, because their unique design makes it easy for developers to build sophisticated apps. Algorand is fast, it's secure, it scales, it has instant finality, and it's designed with all of the purpose-built building blocks that you need to build your next DeFi app.
I'll tell you a little bit more about that during the interview.
But for now, here's our conversation with Ismail Khoffi and Mustafa Al-Bassam.
Hi, so we're here today with Ismail Khoffi and Mustafa Al-Bassam, the co-founders of LazyLedger. So LazyLedger is this very innovative, new kind of blockchain layer two type protocol that we're going to dive into today.
So yeah, thanks so much for joining us today, you guys.
Thank you for having us.
Thanks a lot for inviting us.
I'm very excited to be here.
Yeah, absolutely.
So am I.
Well, let's start there.
I mean, Mustafa, you have like a very interesting background. I was watching this talk of yours from years before at some hacker conference, where you talked about all the work you did in Anonymous, and being part of a thing called LulzSec.
Do you mind going into a little bit?
Like, what's your history there?
And, you know, how did that lead you to crypto in the end?
Sure.
So that was actually a very long time ago when I was a teenager.
I was involved in various hacker groups, including Anonymous. And I co-founded a hacker group called LulzSec, which compromised many corporations and governmental entities. This was when I was about 15, 16 years old.
And in terms of how it relates to crypto, I guess not much.
But that was a very interesting time.
And I've moved on to other things.
I mean, I guess there's at least some kind of similarity, in that they're both potentially these disruptive activities going against the status quo.
Do you see some similarity there between the two fields?
Yeah, I mean, for sure. I guess in that sense it's quite similar to crypto, in the sense that there are similar political ideals and sort of philosophies. You know, the hacker movement and the crypto cypherpunk movement are all interlinked with each other. And I guess the ideals that drove my hacktivism as a teenager are the same ideals that make me very interested in cryptography, which is to give people more freedom.
And back then it was motivated by freedom of speech.
And I guess financial transactions are a form of speech.
And cryptocurrency allows people to transact money freely.
And so how did you like first get involved with the crypto space?
So what was your first foray into the field?
So I first kind of heard about Bitcoin in 2010, 2009.
But even before I heard about Bitcoin, I was always very interested in peer-to-peer systems in general.
Like, I was very interested in BitTorrent, for example.
And I was very closely following a website called The Pirate Bay, which was, or still is, the biggest torrent tracker, where people upload copyrighted movies and software.
And the kind of idea of creating decentralized protocols was very interesting to me.
And so when I heard about Bitcoin, that was naturally very interesting to me. Even though I heard about it in 2010, I only really got involved in a full-time capacity in 2016, when I started doing a PhD at University College London, focusing on the topic of on-chain scalability.
And I was specifically very interested in that topic because I was very closely following the Bitcoin community from 2010 onwards.
And I was always thinking about the one-megabyte block size limit in Bitcoin.
Even before there was this massive debate in the Bitcoin community, when the block size limit started getting reached and transaction fees were skyrocketing, and there was a whole huge debate about how we should scale Bitcoin and whether on-chain scalability is even possible to do securely... even before that, I was actually very worried about this block size limit. And, you know, prominent Bitcoin community members like Gregory Maxwell told me that this isn't really a problem and we shouldn't worry about it.
So I started doing research on on-chain scalability, to figure out how we can actually scale
blockchain securely in a decentralized way on layer one.
And as part of my PhD, I was a co-author on a paper called Chainspace, which was one of the first proposals for a sharded blockchain design. And that was later spun out into a company. Well, actually, the company was based in Gibraltar, but the developers were based in London. And that company was later acqui-hired by Facebook, and now many of the people involved are working on Facebook's Libra project.
And Ismail, how about you? I know you have a lot of background as well in distributed systems and peer-to-peer stuff.
How did you get involved with this space?
So I think the time frame is actually very similar to Mustafa's.
So I was interested in distributed systems and decentralized systems for a while.
And after I finished my studies, I don't remember actually exactly when it was,
but around like 2015, I was like really interested and tried to get like more involved into it.
And I was working at a research institute at Fraunhofer.
And like I was proposing to like do something into that.
direction and I was like believing that this is like this will get bigger and more relevant in the
future but there was no space for for that in in that job so basically I was looking around
for something where I can like dive in more deeply into research but like also work a bit on
like actually implementing real world systems and I found this exactly at a BF
at the DDIS lab, like the distributed and decentralized systems lab of Brian Ford,
where one of the goals was also to, like, scale Bitcoin.
And so, like, Brian does basically everything.
And it was a very, like, very chaotic and very interesting year where I learned a lot,
where I was at DPFL.
And we, like, I co-authored a bunch of papers there as well.
One about, like, scaling Bitcoin as well.
It's called Bisccoin, but also we did a bunch of research or like work in privacy preserving
technology and all this.
So this is where I think learned so much in like very little time because I had to because
like I had to implement bunch of these systems in a team though.
Like it was not, I was not like the only person in the team.
We're like a team of three engineers, which is quite unique.
think at university or an academia.
Yeah, so after that, I briefly started a PhD as well, and did a detour as a PhD intern at Google, where I worked on something like CONIKS. Well, basically, it is CONIKS. Google has this project called Key Transparency, which is very similar to CONIKS, but they use a log-backed design: basically, they don't just chain it, but they put the tree roots, the sparse Merkle tree roots, in the log. And basically after that I decided, okay, there's so much happening and so much going on, it's probably not the best idea to stay in academia. Also at that time, I met with Zaki in London, and he was already kind of hinting that there might be hiring at Tendermint. And it took a few months, almost a year later, actually, until I started at Tendermint as well. And there I got knee-deep into the implementation side of layer one blockchains.
And so how did you guys end up meeting and starting to work on LazyLedger?
So that is a very good question. I knew Mustafa already for quite a while from Twitter and online, and I've been going to these hacker conferences, the Chaos Communication Congress, where he's also always been present. And I think we met more and talked more during my time at EPFL, I guess. And we met at academic conferences. And he approached me about LazyLedger at the Chaos Communication Camp. And yeah, he had this research paper and I read it, and I didn't fully understand it when I first read it, and I had a bunch of questions. But I immediately saw the potential there. And yeah, then we started working on that shortly after, basically.
So basically I asked Ismail to join LazyLedger in the middle of a field, about 50 miles north of Berlin.
It's true.
In the middle of nowhere.
Yeah.
Like a little hacker village.
Back in January, we interviewed Steve Kokinos and Silvio Micali of Algorand, and during our conversation, we talked about how Algorand's unique design makes it easy for developers to build sophisticated applications on their platform.
So what's great about Algorand, beyond the fact that it's fast, it's secure, it scales, and it has instant finality, is the fact that they've designed a layer one protocol with primitives that are purpose-built for DeFi. So what that means is that they've taken some of the most common things that people do with smart contracts, and they've embedded them right into the system, right in the layer one. So things like issuing tokens, atomic transfers, well, these are built into the layer one. And smart contracts are first-class citizens on Algorand. So with these essential building blocks at your disposal, you can build fast and secure DeFi apps in no time. To learn more about what Algorand brings to the table and how to get started, I would encourage you to check out algorand.com/epicenter. That lets them know that you heard about it from us, and it takes you where you need to go to learn about their tech. And with that, we'd like to thank Algorand for supporting the podcast.
So LazyLedger: a lot of it is derived from the data availability paper that you co-authored a couple of months ago. So did you have in mind that, hey, I want to create this LazyLedger project, and that paper was sort of the white paper for it? Or did the paper come first, and then after that you were like, hey, how do we productionize this?
So you're referring to the paper that I wrote with Vitalik. I guess to give some context first about that paper: that was a paper about something called fraud and data availability proofs. And it was basically solving a fundamental problem in sharding. It was kind of like the missing piece of the puzzle, or maybe not the missing piece, but the biggest missing piece of the puzzle, to making sharding complete. Which is: if you increase on-chain throughput, if you increase the number of transactions that people can post on the chain, and regardless of how you do that, whether it be through sharding or increasing the block size limit, you also need a way for people to verify the entire chain efficiently. And the question is, how can you do that without requiring everyone to download every single transaction in every single shard, to make sure every single transaction is valid?
And that's very important for scaling, because the whole point of blockchains is that they're
decentralized.
And the reason why they're decentralized is because anyone can publicly verify that the
blockchain is correct and the transactions are valid.
But you can't do that cheaply if there's a lot of transactions to verify. And that's why the Bitcoin community has been reluctant to increase the block size limit to more than about one megabyte: because it wouldn't make it possible anymore for people to verify blocks using Raspberry Pis, basically. And so there's this idea of fraud proofs. And the idea of fraud proofs is that instead of verifying
every transaction yourself, what you can do is you can assume that the blocks are valid,
or specifically the blocks that have consensus are valid, and if they're not, then someone or any
person or any node on the network can generate a very succinct and small proof called a fraud
proof, or more specifically what's called a state transition fraud proof to prove to you that
block is invalid and you can reject that block. And so therefore, you can kind of indirectly verify
the whole chain with very little resources. But in order for you to be able to do that,
need another kind of free requisite called data availability proofs because you can only generate
a fraud proof for a block if the data of that block has been actually published by the minor
or the block producer. Because what the minor
might do is simply publish the header of that block, what was called the block header,
but not actually publish the actual data in that block.
And so what data availability proves allow you to do is succinctly convince yourself
or efficiently convince yourself that the block has actually been published
without downloading the block yourself.
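To make the fraud proof idea concrete, here is a minimal sketch in Go. Everything in it is invented for illustration (the toy execution rule, the flat state commitment, the type names); a real system would commit to state with a Merkle tree and include Merkle proofs for the witness. The point is just the shape of the check: re-execute one disputed transaction and compare against the committed post-state root.

```go
package main

import (
	"bytes"
	"crypto/sha256"
	"fmt"
	"sort"
)

// State is a stand-in for the partial state (plus Merkle proofs in a real
// system) that the prover supplies so a verifier can re-execute the tx.
type State map[string][]byte

// FraudProof is a hypothetical state-transition fraud proof: it claims that
// applying Tx to the state committed to by PreStateRoot does NOT yield the
// PostStateRoot asserted in the block header.
type FraudProof struct {
	PreStateRoot  []byte
	PostStateRoot []byte
	Tx            []byte
	Witness       State
}

// root is a toy state commitment; a real chain would use a Merkle root.
func root(s State) []byte {
	keys := make([]string, 0, len(s))
	for k := range s {
		keys = append(keys, k)
	}
	sort.Strings(keys) // deterministic ordering for a stable commitment
	h := sha256.New()
	for _, k := range keys {
		h.Write([]byte(k))
		h.Write(s[k])
	}
	return h.Sum(nil)
}

// applyTx is a placeholder for the chain's deterministic execution rules.
func applyTx(s State, tx []byte) State {
	out := State{}
	for k, v := range s {
		out[k] = v
	}
	out["last_tx"] = tx // toy rule: just record the last transaction
	return out
}

// Verify re-executes the disputed transaction. A mismatch with the committed
// post-state root proves the block producer lied, so clients reject the block.
func (fp *FraudProof) Verify() bool {
	if !bytes.Equal(root(fp.Witness), fp.PreStateRoot) {
		return false // witness doesn't match the committed pre-state
	}
	recomputed := root(applyTx(fp.Witness, fp.Tx))
	return !bytes.Equal(recomputed, fp.PostStateRoot) // true => fraud proven
}

func main() {
	pre := State{"alice": []byte{10}}
	fp := FraudProof{
		PreStateRoot:  root(pre),
		PostStateRoot: []byte("bogus root claimed by block producer"),
		Tx:            []byte("send 5 from alice to bob"),
		Witness:       pre,
	}
	fmt.Println("fraud proven:", fp.Verify())
}
```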
Now, LazyLedger uses this data availability proof primitive to make it very efficient for people to prove to themselves that the data has been published in blocks. However, the idea of LazyLedger itself actually came about long before this fraud proof paper. The idea of LazyLedger, I started thinking about it
when I was starting my PhD. I was thinking about what some alternative design paradigms are that we could use to build layer ones. Or more specifically, what is the most minimal layer one you can build, or the most modular one you can build, to build a cryptocurrency on, or to allow for cryptocurrencies to exist, and how can we scale that? And so this data availability proofs primitive made that much more practical and scalable.
Cool, thanks. I think that was a great explanation already, and a great intro to LazyLedger. Before we go into the details of LazyLedger, I'm curious: what do you guys want the impact of LazyLedger to be?
So as I mentioned, you can think of LazyLedger as a very basic layer one. I would call it the minimum viable layer one. So if you stripped back a layer one to its core components, and you made it as scalable as possible using existing technology, you would get LazyLedger. So effectively, LazyLedger, in simple terms, is basically a blockchain where people can dump arbitrary data onto it, and that data gets ordered, and the consensus nodes don't care about or process what that data is. So it's basically a blockchain for dumping data on, and the data gets ordered.
And you can use this as a primitive to build all kinds of applications and blockchains.
Now, what this is really useful for, or what this has impact for, and I guess to explain the overall vision, is as follows. So when Bitcoin came about, the technical architecture of Bitcoin was that it was using a blockchain for one purpose, or for one application, and that application is cryptocurrency, or payments, or store of value, depending on who you talk to. So it's basically a single-purpose blockchain.
Then Ethereum came about, and the idea of Ethereum was: let's actually create a general-purpose blockchain for every application, one that has a kind of general-purpose virtual machine where people can upload smart contracts.
And so those architectural visions were very stark opposites of each other. And at the same time, there was Tendermint. And Tendermint was more similar to Bitcoin's vision: the idea of Tendermint is that it allows you to create your own blockchain for your own application. So at the moment, basically, if you wanted to create an application on a blockchain, there's only really two ways to do that. The first way is to use a shared execution environment, or a shared computer, like Ethereum. You code up your smart contract, you upload it to Ethereum, and your smart contract runs on the same machine, or the same chain, as everyone else's smart contract. And that's basically the world computer model that Ethereum kind of created.
And the problem with that is that the Ethereum virtual machine is very limited in terms of what you can deploy on it, and it has very high gas costs if you want to build applications that the Ethereum virtual machine does not natively support, such as, for example, complicated cryptography or cryptographic proofs that don't have native built-in functionality in the EVM. So if you want to build more complicated applications that the EVM doesn't support, you have to basically build your own blockchain using something like Tendermint.
But the problem with Tendermint, or not the problem, but I guess the drawback of Tendermint, or any kind of solution like Tendermint, is that in order to create your own blockchain, there's a huge overhead. There's a lot of work involved in creating your own blockchain. Because when you create your own blockchain, you also have to create your own layer one for that blockchain; you have to create your own consensus layer. And that consensus layer nowadays is usually based on proof of stake, which is what Tendermint provides for you. And to deploy a proof-of-stake network takes a lot of work. You have to, for example, do a token sale, you have to make sure that it's distributed in a decentralized way, you have to make sure there's lots of validators, and so on and so forth. So where LazyLedger comes in is that
LazyLedger provides a modular, pluggable layer one that does nothing but consensus and data availability. The LazyLedger layer one does not do smart contracts or execution. The LazyLedger layer one basically does the core things that a layer one does, and nothing else. And that makes it very useful as a pluggable layer one for people that want to create their own chains but don't want the overhead of having to create their own consensus network. Instead, they can simply plug LazyLedger into their chain as a consensus layer, by simply dumping their chain's blocks on LazyLedger.
So could you maybe walk through a little bit of the architecture, what that would look like? And, you know, one way of thinking about it is that you're kind of building the minimum viable blockchain, in the sense that it's basically just a timestamping system, right?
Like you're getting consensus on ordering of transactions,
but like nothing else about the transactions.
And so this was actually like the idea of a lot of the early Bitcoin extensions,
things like colored coins, for example.
So could you maybe talk a little bit about how these compare to those?
Sure.
So, yeah, before Ethereum existed, there were various projects to extend Bitcoin to support many applications, and the overall basis of those proposals was to basically embed data into Bitcoin transactions, using an operation code called OP_RETURN. So when you submit a Bitcoin transaction, you can attach a bit of data to it. And so that basically allows you to timestamp, or get consensus on, or order arbitrary data using the Bitcoin blockchain. And so that's how projects like Colored Coins and Mastercoin were effectively using Bitcoin as a data layer to dump data on. And that allowed them to basically create all of these other applications, not smart contracts, but more like specific other applications that are relevant to Bitcoin. But, I guess, some people would say they were abusing the Bitcoin blockchain, by dumping data that it was not designed to receive.
And the difference with LazyLedger is that LazyLedger is designed for that specific purpose, and it's designed to be more scalable for that specific purpose, in its architecture and also in its primitives, including data availability proofs, which allow nodes to verify the entire LazyLedger chain without having to download every block, which is what you have to do in Bitcoin right now. If you were running a Mastercoin node, you would need to download every single Bitcoin block, basically to make sure that the Mastercoin data is valid.
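As a concrete aside: the embedding trick described above comes down to one tiny output script. Here is a sketch of constructing an OP_RETURN script by hand in Go (byte layout only; a real wallet would use a library such as btcd's txscript, and would still need to build and fund the surrounding transaction):

```go
package main

import (
	"encoding/hex"
	"fmt"
)

// Opcodes from the Bitcoin script language.
const (
	opReturn    = 0x6a
	opPushData1 = 0x4c
)

// nullDataScript builds a provably unspendable OP_RETURN output script that
// embeds arbitrary data, the mechanism Mastercoin-style projects used to
// piggyback their chain data on Bitcoin's consensus.
func nullDataScript(data []byte) ([]byte, error) {
	if len(data) > 80 {
		return nil, fmt.Errorf("standardness rules cap OP_RETURN payloads at 80 bytes")
	}
	script := []byte{opReturn}
	if len(data) <= 75 {
		// Small pushes encode the length as the opcode itself.
		script = append(script, byte(len(data)))
	} else {
		script = append(script, opPushData1, byte(len(data)))
	}
	return append(script, data...), nil
}

func main() {
	// e.g. a layer-2 block hash to be timestamped by Bitcoin's consensus
	layer2BlockHash := make([]byte, 32)
	copy(layer2BlockHash, "example layer-2 block hash")
	script, err := nullDataScript(layer2BlockHash)
	if err != nil {
		panic(err)
	}
	fmt.Println("scriptPubKey:", hex.EncodeToString(script))
}
```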
I mean, isn't one of the nice features, in a way, that you could think of proof of work as a consensus protocol that directly incentivizes data availability? Because if you're not making your data available, no other miner is going to build on top of your blocks. Maybe that's a reason why Bitcoin wouldn't need data availability proofs. Like, if I had data and I wanted to make it available, I could put it on the Bitcoin blockchain and have a very, very high level of certainty that that data will be available, just because of the nature of the Bitcoin network and how widely distributed it is. And the fact that it's proof of work kind of incentivizes widespread distribution of the data. So why wouldn't I want to just do that?
So basically there's several things to this. So with Bitcoin, that's okay, because Bitcoin is like a single execution environment. But the problem is, if you wanted to create kind of like a general-purpose data availability layer, you can't just rely on the consensus, and the word of the consensus, to tell you that the data is available, for two reasons.
So first of all, the threat model of Bitcoin and Ethereum is such that even if there's a 51% attack on the consensus, the consensus cannot steal money or insert malicious state into the chain. The only thing it can do is censor transactions or reverse transactions.
Now, with applications, or with blockchains or solutions like optimistic rollup sidechains that use a data availability layer like LazyLedger, if you configure the nodes in those systems to simply trust the consensus to tell you that the data is available, then if the consensus is dishonest, they could potentially lie to you and inject invalid state into the system that you would never know about.
And that significantly increases the incentive for doing a 51% attack, because the financial reward for doing a 51% attack is no longer just double-spending a transaction; the reward becomes potentially injecting malicious state that generates unlimited money or steals people's money. So that's the first reason. And the second reason is that with a general-purpose data availability layer, there may be many different applications or chains, that don't necessarily connect to each other or are independent of each other, that are using the same data availability layer. And you don't want to end up in a situation where you effectively have to download irrelevant data for other chains to verify the data for your chain. And so data availability proofs allow you to verify that the data for the entire system is available without downloading the data for other chains.
So one of the advantages here would be that it's really good at domain separation, and at querying this data. Like, one of the problems... let's say I wanted to store data on Bitcoin. As an example, you know, I had this idea once of storing all Tendermint, like Cosmos Hub, validator set changes on the Bitcoin blockchain, so it's there as a long-range attack prevention. But then the problem there is that finding one of these transactions on the Bitcoin blockchain is really hard. You kind of have to scan through every transaction. And so one of the benefits of having something that's specialized for data availability is that it's much easier for me to query for particular properties of transactions.
Yeah, exactly. So because with LazyLedger, we expect the chain to be used by many different applications or sidechains or execution environments, that coexist with each other and don't necessarily communicate with each other. And so we need an efficient way to allow nodes in each of these applications to query the data relevant to the specific application, without having to care about the data for other applications. And the way that we achieve this is we basically use something called a namespaced Merkle tree, which is kind of like a modified Merkle tree that basically allows you to query for specific messages in that tree for specific applications. And each application has what's called its own namespace identifier. So you basically query for the namespace identifier that you're interested in, and you can very efficiently verify that, yes, the node that you're talking to has given you all the relevant transactions for that application that are in that block.
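A rough sketch of the namespaced Merkle tree idea, heavily simplified from the construction in the LazyLedger paper (one-byte namespace IDs, no completeness proofs): every node commits to the minimum and maximum namespace beneath it, which is what lets a node prove it returned all leaves for a given namespace.

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// node of a namespaced Merkle tree: an ordinary Merkle node annotated with
// the smallest and largest namespace ID found in the subtree beneath it.
type node struct {
	minNs, maxNs byte
	hash         [32]byte
}

// leaf hashes a message together with its namespace ID.
func leaf(ns byte, data []byte) node {
	h := sha256.Sum256(append([]byte{0x00, ns}, data...))
	return node{minNs: ns, maxNs: ns, hash: h}
}

// parent combines two children, widening the namespace range it covers.
func parent(l, r node) node {
	buf := []byte{0x01, l.minNs, l.maxNs}
	buf = append(buf, l.hash[:]...)
	buf = append(buf, r.minNs, r.maxNs)
	buf = append(buf, r.hash[:]...)
	return node{minNs: l.minNs, maxNs: r.maxNs, hash: sha256.Sum256(buf)}
}

func main() {
	// Leaves must be sorted by namespace; each application owns a namespace.
	a := leaf(1, []byte("app-1 tx"))
	b := leaf(1, []byte("another app-1 tx"))
	c := leaf(2, []byte("app-2 tx"))
	d := leaf(3, []byte("app-3 tx"))
	root := parent(parent(a, b), parent(c, d))

	// A client querying namespace 2 can skip any subtree whose range
	// excludes 2, and because the ranges are committed in every node,
	// a prover cannot silently omit app-2 leaves.
	fmt.Printf("root covers namespaces [%d, %d], hash %x...\n",
		root.minNs, root.maxNs, root.hash[:8])
}
```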
So, I mean, you mentioned, right, like in Ethereum, for example, right?
Like, you have this transaction ordering, but you also have transaction execution.
And so you guys are getting rid of that part.
So I'm curious, like, what are the downsides of this?
You know, what do you give up?
Yeah, so it's a good question, right? So you can think of it basically as: we're pushing the execution to layer two. And in terms of what you give up... as we mentioned, or to clarify, there is no kind of user execution environment on LazyLedger. Therefore, developers have to define their own execution environments, using something like Cosmos, or the Ethereum virtual machine, or one of the many optimistic rollup sidechain implementations out there.
Just on that: so developers have to define their own execution environment, let's say using Cosmos as an example. Can you explain what that would actually look like if somebody wanted to do something like that?
Sure. So effectively, as I mentioned, LazyLedger is basically a chain where people dump arbitrary data onto it, or any data they want onto it, and that data, including the blocks, gets ordered. And each application has its own namespace ID. So let's say, for example, that you wanted to build a Cosmos SDK app, but you do not want to go through the overhead of having to actually create your own proof-of-stake network using Tendermint. What you would do is you would create your own Cosmos SDK chain, and then you would post the blocks of the Cosmos SDK chain directly on LazyLedger. And that would basically give you consensus for that Cosmos SDK chain, because the blocks immediately get ordered. And the blocks are within the LazyLedger blocks, so it's kind of like sub-blocks, if you like, while you keep agreement about how to execute those blocks.
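As a hypothetical illustration of that flow (the client interface below is invented for this sketch, not LazyLedger's real API): the integration amounts to serializing each Cosmos SDK block and submitting it as an opaque blob under the chain's namespace ID.

```go
package main

import "fmt"

// DAClient is a hypothetical interface to a data availability layer like
// LazyLedger: it accepts opaque bytes under a namespace and orders them,
// without ever executing or interpreting them.
type DAClient interface {
	SubmitMessage(namespaceID [8]byte, data []byte) (daHeight uint64, err error)
}

// publishBlock posts a serialized Cosmos SDK block to the DA layer. Validity
// and execution are judged entirely by the app's own nodes on layer two.
func publishBlock(da DAClient, nsID [8]byte, blockBytes []byte) error {
	height, err := da.SubmitMessage(nsID, blockBytes)
	if err != nil {
		return fmt.Errorf("posting block to DA layer: %w", err)
	}
	fmt.Printf("block ordered at DA height %d\n", height)
	return nil
}

// fakeDA is an in-memory stand-in so the sketch runs without a network.
type fakeDA struct {
	height uint64
	blobs  map[uint64][]byte
}

func (f *fakeDA) SubmitMessage(ns [8]byte, data []byte) (uint64, error) {
	f.height++
	f.blobs[f.height] = append(ns[:], data...)
	return f.height, nil
}

func main() {
	ns := [8]byte{0xde, 0xad, 0xbe, 0xef} // this app-chain's namespace ID
	da := &fakeDA{blobs: map[uint64][]byte{}}
	_ = publishBlock(da, ns, []byte("serialized cosmos-sdk block #1"))
	_ = publishBlock(da, ns, []byte("serialized cosmos-sdk block #2"))
}
```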
And let's say now we have an upgrade of this Cosmos SDK chain. You know, in Cosmos, we know this process: on-chain governance and then a hard fork and whatnot. What would that look like here?
So that's all defined on layer two. Because the LazyLedger layer 1 has no concept of what transactions are valid or not. Instead, all of those concepts are defined by the users, or by the Cosmos SDK app itself. So all the users that use your Cosmos SDK app will know which transactions are valid or not, because they have the code for your Cosmos SDK app. So the code for your Cosmos SDK app basically defines what the execution rules are and what transactions are valid or not.
Maybe you can add to this. You can still have governance proposals and all this, on the optimistic rollup, right? I mean, you could basically have something built completely with the Cosmos SDK, but you replace... for instance, this is how, from an implementation point of view, we are looking at it right now: you use the Cosmos SDK, you can use all the modules, but on the optimistic rollup you replace Tendermint, which additionally provides consensus, which you don't need in the optimistic rollup. You would replace it with another, what's called an ABCI client. So we would implement our own node that fulfills the Tendermint side of the ABCI contract, and this would be used instead. And then ideally you could use all the SDK modules, for instance.
Don't you still need consensus, though, on the state root? Because, you know, the LazyLedger side is giving you consensus on transaction ordering. But then that would mean that anyone who wants to interact with this chain now has to actually run all the software. But realistically, oftentimes light clients just want to query something that's currently in state. And so for that, you want some sort of consensus on the state root.
Yeah, that's right. So I guess this is a good point to talk a little bit about optimistic rollups, assuming that your listeners might not be familiar with them. This is basically our answer to consensus on state roots, if you like. So going back to the Cosmos SDK example: you might create your own Cosmos app, and you post the blocks for the Cosmos app onto LazyLedger. And these blocks have block headers, and these block headers contain state roots. And these state roots can be used by light clients.
So then the question kind of becomes... I mean, I guess that kind of answers the question partially, but then the other question becomes: well, if anyone gets to actually post these blocks, what if people post invalid blocks? And that's the way optimistic rollups work. Optimistic rollups is basically this kind of sidechain technology where you can create sidechains that use another layer one as a data availability layer. And the whole concept is that it's on-chain data availability, but off-chain execution. So the execution for your sidechain happens off-chain, but the data availability for your sidechain happens on-chain, on some other layer 1, like Ethereum or LazyLedger.
So in optimistic rollups, there's a node called the aggregator. And the job of the aggregator is to collect transactions for that sidechain, to aggregate them into blocks, and then submit that block to the data availability layer. Now, what happens if the aggregator submits an invalid block? First of all, I should add, by the way, that in many of these optimistic rollup proposals, anyone can be an aggregator. So the question is: what if the aggregator submits an invalid block? Then what would happen is basically that a fraud proof can be generated, because the data for that block is available to everyone, since it's been published on the data availability layer. And because that's the case, anyone that's watching the chain can generate a fraud proof to prove that they have generated an invalid block with an invalid state root.
So is LazyLedger going to provide this sort of place to do fraud proofs, or would that happen on another chain like Ethereum?
Well, I mean, the fraud proofs themselves, they don't have to be posted on chain. I mean, the fraud proof system is also a layer two concern.
I mean, like the challenge game. Like, where would the challenge game take place?
So that depends on the layer two, because all of this execution stuff is orthogonal to, or irrelevant to, the LazyLedger layer 1. So on layer two, there's different ways of doing it. I guess the simplest way of doing it is basically that each optimistic rollup chain has its own kind of sub-network, where the users of that chain communicate with each other. And if someone generates a fraud proof, then that fraud proof gets distributed and propagated across that sidechain's peer-to-peer network, and that allows all the users in that sidechain to verify that the block is actually invalid.
Also, I know you guys have a third team member, John Adler, and I believe he also has another project he works on called Fuel Labs, which is like an optimistic rollup system on Ethereum. So what's sort of the relationship between these two projects?
John is the chief research officer at LazyLedger. He's actually the person who proposed the idea of optimistic rollups a year ago. And Fuel Labs is basically an optimistic rollup sidechain library for Ethereum; specifically, it's an EVM-compatible optimistic rollup implementation that allows you to do payments.
And so the idea is that in the future, Fuel Labs will also support other data availability layers besides Ethereum, such as LazyLedger. And the main advantage of that is that, because LazyLedger is designed from the ground up to be a scalable data availability layer, you'll be able to process many more transactions on LazyLedger than on Ethereum.
So, you know, I'm more familiar with the Optimism team's rollup design, but there, I send transactions to the operator, and then the operators put them on... you know, for that, they just store it in the calldata of an Ethereum block. So in your model of LazyLedger plus a rollup, are users submitting transactions to the operator, who then makes a block and stores them on LazyLedger? Or are users submitting transactions directly to LazyLedger, and then the operator picks them off from the LazyLedger chain?
No, so it's the first one. So the users submit the transactions directly to the aggregator, and then the aggregator aggregates them into a single block, and that's the block that gets posted onto LazyLedger. But I should say that the original model of LazyLedger was actually the second option, because the LazyLedger paper was released before the idea of optimistic rollups came about. And so the original model was simply that users would submit the transactions directly onto the LazyLedger main chain. But that had some major drawbacks, because it meant that users have to basically process every other user's transaction. And light clients won't be supported, because there's no fraud proofs involved and there's no aggregator to create a state root.
I'm curious about transaction fees in this world.
Would you have potentially a transaction fee on the LazyLedger level, and then potentially
also transaction fees on the level of the particular application running on it?
And like how do you see that working?
Yeah.
So there will definitely be transactions fees on the lazy ledger main chain.
And this will basically be very straightforward.
It's basically storage fees because on the leisure main chain, nodes don't actually process
or care about the contents of your messages and transactions.
They simply just take them and put them on the chain.
So there's no execution costs or competition costs.
Yeah, exactly.
More like in Bitcoin, right, where it's basically dependent on the size.
Exactly.
So the transaction fees will be solely dependent on the size of the transactions.
And that being said, you can implement gas fees on the execution environments, or the layer two chains, that people build on LazyLedger. And that would be specifically useful if you wanted to create a general smart contract platform using LazyLedger as a data availability layer; then you can implement gas fees. However, the main vision I envision for LazyLedger is that people will use it to build app-specific chains, i.e. chains for one app. Each app has its own chain. And in that model, you don't really need gas fees, because you can just directly hard-code the fees for the actual transactions in your app, because there's a limited amount of methods in that app.
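Since the chain never executes the payload, the pricing described here can be a pure per-byte storage fee. A toy sketch (the rate is made up for illustration):

```go
package main

import "fmt"

// storageFee prices a message purely by its size, Bitcoin-style: the DA
// layer never executes the payload, so there is nothing to meter but bytes.
func storageFee(msgBytes int, feePerByte uint64) uint64 {
	return uint64(msgBytes) * feePerByte
}

func main() {
	// A 2 KiB layer-2 block at a hypothetical price of 3 fee units per byte.
	fmt.Println(storageFee(2048, 3), "fee units")
}
```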
I'm also curious: so you have these different apps, then to what extent would different apps that are running on top of LazyLedger be interoperable, and how would that work?
Sure. So the interoperability aspect is also a layer two concern, and is dependent on the execution environment that chains use. If the chains are using a Cosmos SDK-based execution environment, then they could use IBC, which is short for the Inter-Blockchain Communication Protocol, the protocol that Cosmos has developed to allow Cosmos chains to communicate with each other. So that's on the layer 2 side of things.
On the layer 1 side of things, we want to make it possible for people on other layer ones, like Ethereum, to develop smart contracts that use LazyLedger as a data availability layer. So, for example, you might develop an Ethereum smart contract that is very data-heavy, and you might find it cheaper to post that data on LazyLedger, but you'll need a way for that Ethereum smart contract to verify that that data has been posted on LazyLedger. And so for that, we're developing a LazyLedger-Ethereum bridge, where basically the LazyLedger block headers are posted onto the Ethereum chain, and that will allow you to verify that certain pieces of data have been included in LazyLedger via an Ethereum smart contract.
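Conceptually, once LazyLedger block headers (and thus data roots) are available on Ethereum, the contract-side check is ordinary Merkle inclusion verification. A simplified sketch of that check, written in Go to match the other examples (the real bridge would be a Solidity contract, and LazyLedger's namespacing and commitment layout are omitted):

```go
package main

import (
	"bytes"
	"crypto/sha256"
	"fmt"
)

// verifyInclusion checks a Merkle proof that leafData is committed under
// dataRoot, which the bridge has already accepted from a relayed header.
func verifyInclusion(dataRoot, leafData []byte, proof [][]byte, index int) bool {
	h := sha256.Sum256(append([]byte{0x00}, leafData...)) // leaf hash
	cur := h[:]
	for _, sibling := range proof {
		var buf []byte
		if index%2 == 0 { // current node is a left child
			buf = append(append([]byte{0x01}, cur...), sibling...)
		} else {
			buf = append(append([]byte{0x01}, sibling...), cur...)
		}
		next := sha256.Sum256(buf)
		cur = next[:]
		index /= 2
	}
	return bytes.Equal(cur, dataRoot)
}

func main() {
	// Build a tiny two-leaf tree to demonstrate the proof check.
	l0 := sha256.Sum256(append([]byte{0x00}, []byte("data posted on LazyLedger")...))
	l1 := sha256.Sum256(append([]byte{0x00}, []byte("other data")...))
	root := sha256.Sum256(append(append([]byte{0x01}, l0[:]...), l1[:]...))

	ok := verifyInclusion(root[:], []byte("data posted on LazyLedger"),
		[][]byte{l1[:]}, 0)
	fmt.Println("inclusion verified:", ok)
}
```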
So one of the main things about this LazyLedger project is that it's not just... you know, you could have just a Tendermint chain chugging along, storing transactions on there, and you could do all this layer two stuff. But on top of that, you guys also do these sort of data availability guarantees for all the transactions. So before we even get into how those proofs work, I'm actually struggling a little bit to see why they're necessary. Because if you have a Tendermint chain, isn't that already giving you some sort of data availability? Because let's say you have a hundred validators on your Tendermint chain. In Tendermint consensus, as long as you have some honest validators, they're not supposed to sign pre-votes unless they have the full block proposal. So just the fact that you have one-third of validators being honest, in the sense that they're not just signing stuff without getting the proposal, you're kind of guaranteed that all the honest validators at least have access to the data. So why is that not sufficient?
Sure. So this is going back to the threat models. So one threat model, or one security model, for data availability might be as you described, which is: let's just assume that two-thirds of validators are honest and they will only sign valid blocks.
Or even one-third has to be honest.
Sorry, one-third, yeah, I'll go with this. And they will only sign valid blocks.
When you say that, it sounds very reasonable. But when you actually compare this to the Bitcoin and Ethereum security model, and you understand the implications of this, this is much, much less secure than those models. Because with Bitcoin and Ethereum, at the moment, if there's a 51% attack and the consensus goes rogue and dishonest, the worst thing that could happen is that the rogue and malicious consensus can either censor transactions or undo transactions. What they can't do is inject or insert invalid transactions into the chain. And they also can't hide transactions, so they can't make data unavailable, because the validity rule on Bitcoin and Ethereum is such that the full nodes also verify the data availability of the chain.
If you start assuming that one-third of validators are honest for other things, not just for double spending... if you start assuming they're honest for things like data availability and validity of state, that completely changes the threat model, and that completely changes the incentives for doing such an attack. Because at the moment, there isn't really a big incentive for doing a 51% attack, from an economic perspective at least, because the worst thing you can do is undo transactions and do double-spending attacks.
So maybe, best case scenario, you can buy a Tesla or some expensive car with Bitcoin, let's say it's worth 100K or something, some supercar, and then do a 51% attack and undo that transaction, and you've got a free sports car. But that's for the Bitcoin model. If you start assuming the consensus is honest for other things, like validity of state, then the consensus can steal everyone's money. So that's a lot more profitable than getting a free sports car.
Yeah, so you need one-third honest, but you can turn that into one-third rational by adding in a challenge game. So here's an example. Let's say I take Tendermint, and then I say: okay, every validator gets a random challenge saying, hey, give me the nth leaf in the Merkle tree. And every validator has to, with their next vote in consensus, include their piece in that. And if they don't, then they're slashable, right? Or their votes just don't get counted. And so now you suddenly turn that honesty assumption into a rationality assumption, where every validator now does have an incentive to say: I need the data before I sign a pre-vote, because if I don't get the data, I'm not going to be able to answer the challenge next time. And so now you're not even depending on honesty anymore.
Right. But what you're proposing is basically, I guess, a naive protocol to prove data availability, which is basically, you know, what we're doing, right?
Yeah.
So at that point, I think we kind of agree that you do need data availability proofs; you need some way to verify data availability. I guess what you're saying is slightly different, in the sense that the end clients themselves aren't verifying data availability, but you're kind of incentivizing miners to be honest.
So what you're proposing is basically something similar to what's called a proof of custody scheme, which proves that miners have the data. But there's a few problems with what you proposed. So the first problem is that it allows miners to prove that they have the data, but it does not prove that they've actually published the data. That's the first problem, because they're only publishing a very small part of the data. And the second problem is, I guess, kind of related to the first problem, which is that it only proves that they have a very small part of the data. Well, in reality, you need 100% of the data to be available, because you can hide an invalid transaction in a very small part of the block, like 100 bytes.
But it's random, right? So the part that they're challenged with is going to be random. So they need to have the whole block in order to have guarantees that they could provide any random piece of data. And the point here is, it's turning it... so you only need one honest validator to make sure the data is available, right? So you're no longer depending on all validators being honest, or the proposer being honest, or one-third being honest. As long as you have one honest validator, the data will be available.
Like, which part of the block, how much of the block, will they have to publish in the challenge? Let's say 1%, for example, right? And so that's not good enough, because that means there's effectively a 99% chance of them winning the challenge without actually having that data.
So that's why we use erasure coding in data availability proofs, because our data availability proof scheme is basically a glorified version of what you proposed, using some fancy stuff like erasure coding to make this challenge a very high probability guarantee that the data has actually been published. And I'm happy to go into detail a little bit to explain how the scheme works.
Yeah, that would be great.
Sure, yeah. I mean, that's a good place to start. So first of all, I should explain what erasure coding is, right? Erasure coding is a very old mathematical primitive that was, I think, first discovered in about the 50s, and it's used in all kinds of technology, like CD-ROMs and satellites and stuff like that.
And what it basically allows you to do is: if you have a piece of data, let's say one megabyte big, and let's say you lose some of that data, say half of it, so with your one-megabyte file you lose 500 kilobytes of that data, you can still recover the entire one-megabyte file with only half of the data. Because what you basically do is... let's say you have a 500-kilobyte file. What you do is you apply this erasure coding scheme onto it, and what that does is it blows up that file to one megabyte big, and the extra 500 kilobytes of that file is not your original data, but what's called the erasure code. Without going into the mathematics of it, it basically creates extra redundancy for your file, such that if you lose half of the data, you can recover the whole data, thanks to this extra part of the file that you included, which is called the erasure code.
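The recovery property described here is easy to demonstrate with Reed-Solomon codes. A sketch using the klauspost/reedsolomon Go library (note the actual LazyLedger scheme uses a two-dimensional Reed-Solomon construction; this one-dimensional toy just shows encode-then-recover):

```go
package main

import (
	"fmt"

	"github.com/klauspost/reedsolomon"
)

func main() {
	// 4 data shards + 4 parity shards: the encoded block is 2x the original,
	// and any 4 of the 8 shards suffice to reconstruct all the data.
	enc, err := reedsolomon.New(4, 4)
	if err != nil {
		panic(err)
	}

	shards := make([][]byte, 8)
	for i := 0; i < 4; i++ {
		shards[i] = []byte(fmt.Sprintf("data%d___", i)) // equal-sized data shards
	}
	for i := 4; i < 8; i++ {
		shards[i] = make([]byte, 8) // parity shards, filled in by Encode
	}
	if err := enc.Encode(shards); err != nil {
		panic(err)
	}

	// Simulate a block producer withholding half of the shards...
	shards[0], shards[2], shards[5], shards[7] = nil, nil, nil, nil

	// ...and recover everything from the remaining half.
	if err := enc.Reconstruct(shards); err != nil {
		panic(err)
	}
	fmt.Printf("recovered: %s\n", shards[0])
}
```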
So what we do, basically, is when we create a new block... usually in Bitcoin and Ethereum, when a miner creates a block, they commit to a Merkle root of all of the data in that block. And so what we say is that instead of committing to the Merkle root of the data of that block, you commit to the erasure-coded version of the data of that block. So if the block was originally one megabyte big and there's one megabyte of transactions, you apply the coding scheme onto it, and the transaction size becomes two megabytes big; the latter one megabyte becomes the code itself. Then you commit to the Merkle root of that data in the block header. And so what that creates is basically this property where, let's suppose a miner is malicious and wants to hide even one single byte in that block. In order to withhold one byte of that block, they have to withhold half the block, because they can't just withhold one byte of the block; you can recover that byte from the erasure code. The only way you can hide that byte is if you withhold half of that block.
And so that basically makes it possible to create this kind of challenge scheme based on random sampling. Because let's say that you're a client, or you're a node, that wants to check that all of the data in the block has been published, but you're too lazy or you don't have the resources to download the entire block. So what you do is you ask any node of the network to give you, for example, 10 random pieces of the block. And for each sample that you ask for in that block, there's a 50% chance that you will hit the portion of the block that has been withheld. And then if you do two samples, there's a 75% chance. If you do three samples, there's an 87.5% chance, and so on and so forth. Until you get to a situation where, if you do 16 samples, there's over a 99% chance that you've sampled a part of the half of the block that the miner has withheld, and therefore you will not get a response to a sample request, and therefore you can assume that the block is not available.
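Those percentages follow from each sample independently landing in the withheld half with probability 1/2, so k samples catch withholding with probability 1 - (1/2)^k. A few lines to reproduce the numbers:

```go
package main

import (
	"fmt"
	"math"
)

func main() {
	// Probability that k random samples catch a producer who withheld
	// half of the erasure-coded block: 1 - (1/2)^k.
	for _, k := range []int{1, 2, 3, 16} {
		p := 1 - math.Pow(0.5, float64(k))
		fmt.Printf("%2d samples -> %.5f detection probability\n", k, p)
	}
}
```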
Doesn't that assume that the miner pre-decided what to withhold?
Like, let's say the original data is 100 bytes.
The new encoded data is 200 bytes.
No matter what I request for the first 99, the miner will be like, oh, yeah, here's the thing. It's only when I hit the 100th request that the miner will say: I'm going to stop now. I'm not going to reply.
Yeah, that's exactly right.
So in order for this scheme to work, there has to be a minimum number of nodes that are actually taking enough samples from each block, such that the miner is forced to release more than half of the block. And under a naive or basic network model, the miner could successfully pass all of the sampling challenges for those first few light clients. And whether that's acceptable or not depends on how big the network is. So with Bitcoin, for example, there's 1 million light clients, according to Google Play's Bitcoin wallet download statistics. And with reasonable network parameters, you can only fool a couple of hundred light clients. So that's a very small number of light clients, such that the attack is not really worth the cost. However, if that's not acceptable to you, or if you have a much smaller network, then you can use a more advanced network model, such that you make each sampling request through a mix network, or you use a kind of onion network like Tor, and you add a mix network on top of that. And it basically makes it such that each sampling request is sent at a uniformly random time, and that makes it impossible for the miner to link each sample request to a specific light client, and that basically thwarts the attack.
So is the idea here that light clients would inform each other? Like, let's say there's a hundred light clients, right, talking to a validator. And for 90% of them, the validator ends up not responding properly. But for 10% of them, you know, they got lucky, and they only requested stuff that wasn't withheld. So how are light clients supposed to inform each other, saying: hey, you know, maybe in your challenge game with the miner, it looked available, but to me, it didn't work out. How do they inform each other of this?
the data availability kind of challenge does not require any cooperation.
Well, I mean, the data availability proves themselves.
It doesn't require any cooperation between like clients.
You only need to verify it for yourself to be convinced.
Let's say, you know, back to the example I was describing earlier, where the miner gives 49% of the data to anyone, and it could be a different 49%. But all the light clients need a way of coordinating... let's say the miner only gives 49% to anyone. All the light clients now need a way of combining their own 49% to recreate the original data, right?
Yeah, so they need to cooperate to share the data with each other, but they don't need to cooperate for the actual part of convincing themselves whether data is available or not. So they do have to cooperate, to work together, to reconstruct the data. And you can basically use BitTorrent for this, for example. You can basically represent the block as a BitTorrent file, and different peers in that BitTorrent file can have different chunks, and they can share them with each other. Any peer-to-peer network works. Like, you could also use IPFS or whatever.
Yeah.
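A one-dimensional sketch of the reconstruction step, using the off-the-shelf klauspost/reedsolomon Go library as a stand-in; the actual scheme in the data availability paper uses two-dimensional Reed-Solomon coding, so treat this as illustrative only:

```go
package main

import (
	"bytes"
	"fmt"

	"github.com/klauspost/reedsolomon"
)

func main() {
	const k = 4 // k data shares, extended with k parity shares
	enc, err := reedsolomon.New(k, k)
	if err != nil {
		panic(err)
	}

	// Split some block data into k shards and compute the k parity shards.
	shards, _ := enc.Split(bytes.Repeat([]byte("block data "), 8))
	enc.Encode(shards)

	// Simulate withholding: only half of the 2k shards are available,
	// gathered from different peers; nil marks a missing shard.
	for i := 0; i < k; i++ {
		shards[2*i+1] = nil
	}

	// Any k of the 2k shards are enough to rebuild the missing ones.
	if err := enc.Reconstruct(shards); err != nil {
		fmt.Println("not enough shards:", err)
		return
	}
	fmt.Println("block reconstructed from half of the shards")
}
```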
So one of the challenges here is, how do you turn this into a non-interactive proof, right? Like, I see this as a very interactive game. I can convince myself that the data is available. But on a rollup, let's say I have a rollup on Ethereum, before the smart contract on Ethereum should accept the state proof, it should have some non-interactive proof that the data is available. So how do I construct that?
Yeah, so I, and others, have been thinking about non-interactive proofs a lot. I mean, there are different definitions of non-interactive proof. If you're talking about, like, can I generate some string of data, and give you that string of data, and that string of data convinces you that some other piece of data is available on the internet? As far as I know, that's not possible. I've tried to construct schemes to do this, but it requires a lot of assumptions and is basically not really practical.
However, if you're talking more generally about the goal of verifying data availability proofs on Ethereum, there are two ways of looking at this. Well, with our current plan to create a LazyLedger-to-Ethereum bridge, the Ethereum part of the bridge does not verify data availability proofs for LazyLedger, and therefore the Ethereum side of the bridge makes an honest-majority assumption for the consensus. It assumes the consensus is honest and is only signing blocks that are actually available, which is problematic for the reasons that I described, but it's okay for certain applications that I can describe later.
The second way of looking at it is that there have been proposals in Ethereum research, and Vitalik has also made this proposal, to allow Ethereum to validate user-specified data availability roots. That would basically make it possible for Ethereum to verify data availability proofs for third-party chains, so that you could basically submit the Merkle root of some data to Ethereum. This could be, like, a special opcode that is added in a future hard fork, and the Ethereum chain would basically verify the data availability of this specified Merkle root using data availability proofs, by making the sampling requests. And the nodes that are verifying the chain itself would have to do this.
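A sketch of what the semantics of such a check could look like from a verifying node's point of view; every name here (Sampler, RequestShare, VerifyAvailability) is hypothetical, not a proposed Ethereum or LazyLedger API, and the Merkle proof check is stubbed out:

```go
package availability

import "math/rand"

// Sampler abstracts fetching one erasure-coded share, plus its Merkle
// proof, for a given data availability root (hypothetical interface).
type Sampler interface {
	RequestShare(root [32]byte, index int) (share []byte, proof [][]byte, err error)
}

// verifyProof would check the Merkle proof of share at index against
// root by hashing up the branch; stubbed here for the sketch.
func verifyProof(root [32]byte, index int, share []byte, proof [][]byte) bool {
	return true // placeholder
}

// VerifyAvailability makes nSamples random sampling requests against
// the root. Any unanswered or invalid sample is treated as evidence
// that the data behind the root is unavailable.
func VerifyAvailability(s Sampler, root [32]byte, totalShares, nSamples int) bool {
	for j := 0; j < nSamples; j++ {
		i := rand.Intn(totalShares)
		share, proof, err := s.RequestShare(root, i)
		if err != nil || !verifyProof(root, i, share, proof) {
			return false
		}
	}
	return true
}
```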
Okay, cool.
Ismail, I'm curious, do you mind diving into it a little bit? I mean, what's kind of the connection between LazyLedger and the Cosmos SDK? Is this currently being developed with it? Yeah, can you expand a bit on that?
Sure.
So the current plan is, for the LazyLedger layer one, to use Tendermint for consensus. And Mustafa originally proposed to not have any state execution on the layer one, so it's like the purest form of LazyLedger. But then we would have to define the PoS layer, like a proof-of-stake layer, as a rollup, and we have to do this execution anyway. So I think for a first implementation, we will go with using the Cosmos SDK as much as possible and do that part of the execution, like the minimal amount of execution needed to have a proof-of-stake network on layer one, so on LazyLedger, basically. And in that sense, we will use the Cosmos SDK as much as possible.
And then for the optimistic rollups, one way to build these is to use the SDK as well. Ideally, we would, well, "abstract away" is the wrong word, because it's already abstracted away through ABCI. We would remove the dependence on Tendermint from the Cosmos SDK and write our own ABCI client, for instance. And then people could use the SDK to write their optimistic rollups.
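For context, a minimal ABCI application sketch, assuming the pre-0.35 Tendermint Go API. The Cosmos SDK state machine talks to its consensus engine only through this interface, which is what makes swapping in a different ABCI client conceivable:

```go
package main

import (
	server "github.com/tendermint/tendermint/abci/server"
	abci "github.com/tendermint/tendermint/abci/types"
)

// App is a minimal ABCI application: embedding BaseApplication gives
// no-op defaults for every ABCI method, and we only override DeliverTx.
type App struct {
	abci.BaseApplication
}

// DeliverTx is where application-specific state execution would go.
func (App) DeliverTx(req abci.RequestDeliverTx) abci.ResponseDeliverTx {
	return abci.ResponseDeliverTx{Code: abci.CodeTypeOK}
}

func main() {
	// Serve the application over a socket for a consensus engine
	// (Tendermint, or a different ABCI client) to connect to.
	srv, err := server.NewServer("tcp://127.0.0.1:26658", "socket", App{})
	if err != nil {
		panic(err)
	}
	if err := srv.Start(); err != nil {
		panic(err)
	}
	select {} // block forever
}
```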
Yeah, cool, very interesting. Well, I mean, let's dive into it a little bit. If LazyLedger launches as this layer one, how do you think it will compete with other layer ones, whether that's Ethereum or Solana or various others that are coming along? Yeah, how will it differentiate, and for what kind of use cases do you think it will be better or worse versus those? One that I'd actually be especially interested in is Filecoin. How do you see this in comparison to Filecoin as well?
As far as I understand, for instance, Filecoin doesn't do ordering, right? Like, it's more of a place for dumping data only, right? And I think with LazyLedger, if your application needs to dump data, but it also needs an ordering of that data, then you would prefer LazyLedger.
But if the rollup operator is the one picking the order and then submitting it to the LazyLedger chain, then...
Well, it depends on what you say the data is. When I said the data, I was speaking more generally. In that case, the data would be the rollup block itself, so there you'd obviously have an ordering to have a chain. But for other applications, it could be, like, Ethereum applications where you need dumped data to be available, but also ordered, and then you couldn't just use Filecoin, as far as I understand.
Back to Brian's question, right? You asked what kind of applications would preferably build on LazyLedger. I think if you want to build, like, an application-specific blockchain, or you have a cool idea for your fancy blockchain and decentralized app, you could do this with LazyLedger in a very simple way, because you don't need to assemble that validator set, like the proof-of-stake network, as Mustafa mentioned earlier. So you'd have the ease of Ethereum, of, like, deploying a smart contract, and you could launch your app without any further hassle, basically. Like, in an ideal world, you could just deploy it in a few minutes, basically. That would be ideal.
Cool.
And when this LazyLedger chain launches, I mean, it's a proof-of-stake chain, right? So I guess it will have some sort of staking token. Do you see any other role for that token besides just, you know, validators putting up a bond?
Right.
It will also be used to pay the fees to submit the data.
Right.
Probably, I mean, I'm not sure if I can say more to that, but probably the optimistic rollup chains could also use the LazyLedger native token. They don't need to define their own token if they don't want to. Like, if you need a token on your chain, you could just also use that.
Right.
Yeah.
So it can be used as a payment token as well, more generally, if chains building on LazyLedger want to use it as that.
But, I mean, the beauty of it, though, is that you don't have to.
And you can actually build chains on top of LazyLedger where users have no interaction whatsoever with the LazyLedger token, and so you don't really have to, kind of, force users to use some other token. The block aggregators, or the block proposers, still pay for the storage fees on LazyLedger in LazyLedger tokens, but the users do not have to be exposed to that at all, because they can simply do, like, a currency conversion, where the users pay the aggregators in some different currency, and the aggregator does a currency exchange to LazyLedger tokens and pays the LazyLedger storage fees in LazyLedger tokens.
Exactly. So it's optional.
Yeah.
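A toy sketch of that aggregator-side conversion; the function, the fee amounts, and the exchange rate are all purely illustrative:

```go
package main

import "fmt"

// storageCostInRollupCurrency converts a LazyLedger storage fee into
// the rollup's own currency at a given exchange rate, so the
// aggregator can collect user fees in that currency and still pay
// LazyLedger in its native token. All parameters are hypothetical.
func storageCostInRollupCurrency(storageFeeLL, rollupPerLL float64) float64 {
	return storageFeeLL * rollupPerLL
}

func main() {
	userFees := 12.0                               // collected from users, in rollup currency
	cost := storageCostInRollupCurrency(0.5, 20.0) // 0.5 LL at 20 rollup units per LL
	fmt.Printf("aggregator margin: %.2f\n", userFees-cost)
	// aggregator margin: 2.00
}
```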
But, I mean, to go back to your kind of question about how LazyLedger differentiates from other layer ones. I mean, pretty much every layer one has a very similar value proposition, and LazyLedger has a very different value proposition. The value proposition of most layer ones, like Ethereum 2.0 or Avalanche or Algorand, and so on and so forth, NEAR Protocol, DFINITY, and so on and so forth, is effectively to create, like, the coolest new layer one with a superior execution environment that's more scalable than everyone else's. And they're all doing that under the world computer model. So they're all, like, basically creating competing computers, and they're trying to attract developers to build applications on top of them, using their execution environments, using whatever programming languages they provide.
Whereas LazyLedger's value proposition is actually: we're just providing a very minimal, modular, pluggable layer one, and developers should just create their own app-specific chains using whatever kind of execution environment they want. And I think this is a very, very useful piece of infrastructure that does not exist yet, because the end goal is that, for the first time in history, it makes it possible for people to deploy their own blockchains in a decentralized way very, very quickly, in minutes possibly, without having to go through the overhead of deploying a new consensus network. And in terms of impact, I think this is comparable to what the cloud, or what virtual machines, did for the internet. Because virtual machines and services like Amazon EC2 made it possible for the first time for developers to deploy their own virtual servers with their own execution environments on the internet, whereas previously, if they wanted to do that, they had to have a physical server and put it in a data center or in their own home, for example.
Or they would have to use someone else's server with a limited execution environment. Like, back in the day it was GeoCities, for example, and more modern ones include Bluehost. But virtual machines allow people to actually create their own servers with their own execution environments, and I think that's really what drove, like, the later stages of Web 2.0 development, as it allowed people to use things like Docker containers and all kinds of new environments and languages, like Rust and Ruby and Python and stuff like that, that just weren't available on shared web hosts like Bluehost or Dreamhost or GeoCities.
Cool, amazing. What's the timeline for you guys to launch LazyLedger? When can people use it and build on it?
Yeah, so it's still very early stages. At the moment, we're just completing the legal aspects of our seed round, and that will allow us to hire developers. But we expect to have a testnet release within about 9 to 12 months, and then the mainnet release within 12 months after that. But before then, we'll have a devnet release sooner, for people to play with and experiment with.
We have a lot of activity on GitHub at the moment. So if you go to github.com slash lazyledger, we're actively developing the project there, and we also welcome community contributions and input from anyone that's interested in following the project.
Cool. Thanks so much for joining us today, guys. I think it was very interesting to dive into this, and it definitely seems like a really radically new approach to doing this. So I'm excited to see what the impact will be. Thanks for having us. It was great to talk to you. Thank you very much.
Thank you for joining us on this week's episode. We release new episodes every week. You can find and subscribe to the show on iTunes, Spotify, YouTube, SoundCloud, or wherever you listen to podcasts.
And if you have a Google Home or Alexa device, you can tell it to listen to the latest episode of the Epicenter podcast.
Go to epicenter.tv slash subscribe for a full list of places where you can watch and listen.
And while you're there, be sure to sign up for the newsletter, so you get new episodes in your inbox as they're released.
If you want to interact with us, guests or other podcast listeners, you can follow us on Twitter.
And please leave us a review on iTunes.
It helps people find the show, and we're always happy to read them.
So thanks so much, and we look forward to being back next week.
