Epicenter - Learn about Crypto, Blockchain, Ethereum, Bitcoin and Distributed Technologies - John Letey: KYVE – Decentralised and Accessible Data Storage
Episode Date: April 21, 2023We often take data storage for granted, but the rise of centralised storing facilities (i.e. AWS, GCP, Azure, etc.) hangs over us like Damocles’ sword. When it comes to blockchains, full nodes store that network’s complete data history. Even if storage costs are set to decrease as technology evolves, blockchains also accumulate more data over time. An interesting solution would be to create a decentralised backup for blockchain data, and Arweave is a prime candidate for it. There is but a missing link: uploading and validating different blockchains’ data. This is where KYVE steps in, offering a decentralised validating solution to verify uploaded blockchain data and even retrieve it from Arweave, on demand. This also extends to off-chain data, opening new possibilities for data availability and scalability.We were joined by John Letey, founder and CTO of KYVE Network, to discuss the challenges and use cases of decentralised data storage and how blockchain interoperability could benefit from it.Topics covered in this episode:John’s backgroundWhy KYVE migrated to a Cosmos app-chainHigh-level explanation of KYVEKYVE’s data pools and data streamsHow KYVE connects to ArweaveValidating uploaded blockchain dataHow Arweave worksKYVE v.2How slashing is handledKYVE use casesFuture roadmapEpisode links: John Letey on TwitterKYVE Network on TwitterKYVE NetworkArweaveThis episode is hosted by Sebastien Couture & Felix Lutsch. Show notes and listening options: epicenter.tv/492
Transcript
Discussion (0)
Welcome to Epicenter, the show which talks about the technologies project and people driving decentralization and the blockchain revolution.
I'm Sebastien Couture. Today we're speaking with John Letey, who is co-founder at KYVE.
KYVE is a protocol that helps developers store, retrieve, and validate data on and off-chain.
And we're going to be diving deep into KYVE today, understanding how it works.
It's in an interesting place, sort of sitting between Cosmos and Arweave, and we'll also get into use cases and the product roadmap.
Before we get started, though, I just want to shill Nebular Summit here for a second, because we did launch the website and it's really exciting.
Nebular Summit is happening in Paris again this year on July 24th and 25th, right after EthCC.
The website's live.
It's nebular.builders, and early bird tickets are on sale for just $39.
You get access to two days of probably one of the best
interchain builders conferences in the world.
I would argue it's going towards that number one spot for the best
interchain developer conference.
And we're really excited to have announced the venue,
which is this really cool business school in the center of Paris
called Albert School.
So be sure to check it out and get your tickets if you plan on being in Paris this summer.
Again, the website's nebular.builders.
And you guys were both there last year and hoping to see you guys there again this year.
For sure.
We're definitely sending quite a few team members down this year, definitely.
Yeah, we really want to make this a developer conference, I guess is what I'm trying to say here.
And we'll have lots of technical content and workshops.
But that's enough about that.
John, tell us a bit about yourself and how you got to be here.
Of course.
So yes, as you know, I'm John.
I'm the co-founder and CTO of KYVE.
A good place to start is probably, like, how I got into crypto.
So, you know, I come from, like, more of an academic background, doing a lot of research.
Actually, I was doing a lot of research in AI, which, like, now is a hot topic, but I'm quite deep into crypto now.
So, you know, it came from, like, the academic side of things.
Got into crypto in 2019, when I was doing an internship for a Munich-based startup.
Actually kind of touched on data there a little bit because what we were doing there,
and again, this is when enterprise blockchains were like the thing.
And so basically we were building an enterprise blockchain based off of Ethereum that was dealing with...
Sensor data.
At some point.
Exactly.
So, you know, it was like my first interaction with smart contracts with Ethereum,
with like data involved in the mix there.
It was loads of fun.
Kind of fell out of it, though, after that. You know, like, that was my first interaction with crypto.
But when I really got, like, heads-down into crypto was actually during the COVID lockdown.
It's, like, 2020, like, the start of it, summertime-ish.
And kind of by chance fell onto Arweave and just, like, discovered it.
And, you know, what fascinated me about Arweave actually is, like, right now, everyone kind of just assumes that data is permanent, right?
It's like when you have photos and whatnot on your phone, you just like assume that it's permanent, right?
But in reality, it's actually not.
You know, maybe you have, like, an iCloud backup or something like that.
Again, very Web2, but, like, in the end, data is not actually permanent.
And so it was kind of a cool realization for me.
I was like, actually this is a very niche problem that they're solving and like really fell in love with the concept of it.
Of course, you know, I'm a huge history nerd as well.
So that also kind of fascinated me about like permanently storing everything.
So yeah, so, like, joined the community.
I was actually one of, like, the first, you know, community
developers, really just, like, building lots of random cool stuff on Arweave.
Yeah, so fell into Arweave in 2020, played around with a lot of different things there.
So if you don't know, there's, like, an ERC20-like token standard on top of Arweave.
And so I created the first ever decentralized trading platform for that.
Again, loads of random cool stuff.
And then yeah, throughout all that process of working on projects, I met my co-founder,
who you guys have talked to before.
And then through that, you know, we worked on loads of different side projects,
hackathons and different things.
And then that's actually kind of how we came across the idea for Kive is because
Polkadot and Arweave at the start of 2021 put together this bounty.
And it was basically like, hey, listen, you know, right now we have a problem where there are not a lot of full nodes in our network.
And by the way, this isn't just a problem for Polkadot. It's a
problem for a lot of blockchains, because full nodes are actually not incentivized in proof-of-stake networks in general. And so Polkadot was like, hey, listen, like, it would be great if we could
kind of decentralize access to, like, a full node and, like, the data that the full node has. And so Polkadot,
you know, created a grant on Gitcoin with Arweave, and me and Fabian saw that and were like, hey, that's
really cool. You know, we can, like, archive all of Polkadot's data from a full node onto Arweave.
So, you know, one weekend and a simple script later, you know, we went to Sam, who's the CEO of Arweave,
like, hey, Sam, listen, we have this.
And he's like, wow, okay, that's really cool.
You know, we talked to Polkadot.
They were happy with it.
And then Sam was like, oh, by the way, you know, when Polkadot did this, they weren't actually
the only blockchain that wanted to do that.
And so through that, we got connected to, you know, everyone from Avalanche and Solana to NEAR and
even the Interchain Foundation with Cosmos.
We got connected, like, everywhere.
And then we kind of saw the need for a tool that permanently stores, like, full node information.
But then of course, you know, we saw the fundamental like trust problem there where basically it's like if it's really just like a script that we're providing to the foundations of these blockchains, the trust, like the fundamental trust restrictions are still there, basically.
And so we said, okay, we can take this one step further, right?
Instead of just permanently storing copies of these full node and blockchain ledger information.
How about we also then decentralized it and make it trustless?
And so that's kind of when the idea for KYVE was born.
And then, like, two and a half years later, we've just launched our Cosmos layer one, which is really cool.
Yeah, like I guess there is a lot to dive into there.
I think especially maybe we can already start there.
But I guess KYVE already started out initially on Arweave.
Then I think you moved to Moonbeam, until, like, finally launching your own Cosmos mainnet.
And again, congrats to the recent launch.
I think what would be very interesting for many people is probably, you know,
like you're one of the projects that has done this migration.
And it seems like a lot of people are doing this migration, either in that direction
or maybe like from App Chain to Smart Contract.
I guess I've seen that less, but probably more like going from Smart Contract to like an App Chain.
And I guess would be interesting to hear your reasons of why, how you chose to do that
and what it gets you as a type of protocol.
For sure.
Yeah, so I would actually be very, very curious to hear people's reasons for going from an app
chain to a smart contract.
That's got to be a very interesting reason.
Anyways, so, yeah, so like you said, we went from a smart contract on Arweave to Moonbeam,
which, for anyone that doesn't know, is basically Evmos on Polkadot.
It's EVM, but on Substrate.
And then, yeah, after that, we then migrated
over to our own, like, Cosmos app chain.
Yeah, basically fundamentally the main reasons for that is that smart contracts are great.
Right.
Like they they definitely have their purpose and like apps are amazing.
But what we realized is that we were just not scalable enough.
Like, they were just not scalable enough for us, because, like, when thinking about KYVE,
right, we're building a full decentralized network.
Right.
And you know, smart contracts can only go so far.
They're great for, you know, building AMMs and they're
great for doing NFTs and tokens and whatnot, right?
But when you're trying to build a full scalable network
with actually like a validator set, it gets a little bit complicated,
especially in the EVM space where you have to like constantly,
and I mean constantly optimize for gas.
It was like we were really, really like having a hard time
thinking about, OK, this is how we like store validator sets
and rewards and everything like that.
But then, OK, that's one way of doing it.
And then how do we optimize it so that it doesn't cost a fortune?
But then even with that, the problem is that you're sharing block space, because, like, fundamentally the way that Ethereum gas models work is basically, you know, the more contract calls and interactions on the network, not just with your contract but in general, the more expensive it gets.
And so, you know, we'd see, like, an NFT launch or something like that.
You know, the cost of running a node would go from a reasonable price, because we'd optimized it, to something crazy high and ridiculous.
And so it was like those main two reasons, which is just like scalability and then also just like cost efficiency and like sharing block space were like the two main reasons why we decided to kind of look into app chains.
So we were, of course, looking into a few other app chains, kind of app chain ecosystems, if you will.
So like you know, Avalanche has its subnets.
You know, a lot of other app chain ecosystems have now kind of spun up since we were looking into this.
But in the end, honestly, the smoothest process was actually just Cosmos.
And, you know, at the time, it was called Starport.
And now it's called the Ignite CLI.
But actually, like, the entire tech team went to ETHDenver 2022.
And, you know, we had done a lot in Cosmos before that.
So we were already very familiar with it.
And then we were like, hey, listen, what happens if we just use Ignite and just try it out, right?
And so, you know, we were, like, sitting at ETHDenver, like, in a WeWork sometimes as well.
And we were just like, okay, let's just do this.
It took us about, like, five days in total, like, of conversion and whatnot,
to get, like, an internal devnet up and running.
So, like, five days to convert from a Solidity smart contract,
which we had worked on for, like, a year,
like, completely over to a Cosmos app chain.
And it was really cool to see.
Of course, you know, then taking a devnet that you spun up in five days
to something a little bit more concrete with the validator sets
definitely took some time, with lots of learnings.
But no, it was, it was, like,
loads of fun, and that's kind of the reason why we switched.
What is the Ignite CLI? You know, as a sort of, like, non-developer, but, you know,
someone who's fairly technical, I guess, what is it about the Ignite CLI that just makes
that process of porting over an Ethereum smart contract to essentially an app chain easy?
Can you talk about, you know, your experience in doing that more specifically?
Of course.
Yeah, so with the latest release of the Cosmos SDK, and with the Cosmos SDK Eden release coming up in a year or so, I don't actually know the timeline there.
But they're definitely removing a lot of the need for the boilerplate code that you need when running a Cosmos app chain.
The problem is that when we did this a year ago, there was still a heavy need to have all that boilerplate code around.
And so, you know, developers just coming into the ecosystem, they don't fully know the ins and outs of all the different things that you need to fully run a Cosmos app chain, right?
And so what the Ignite CLI does is it basically takes all that boilerplate code that you may or may not know that you need to have, right?
As you're just like starting in Cosmos, it takes all that boilerplate code and generates it for you.
And so it's like makes it really easy to say, okay, listen, this is what I want my state of my blockchain to look like.
It's going to have these values.
And then it kind of just generates a boilerplate for you from there.
Long term, of course, if you want to actually run a chain in production,
definitely, you know, you need to remove a lot of that boilerplate code
and really understand the fundamentals of like what's going on,
which is what we did after the fact.
But like what was really like fantastic and what made the switch so easy is the fact that they just
handle all the kind of the cosmos nitty gritty details for us without us needing to like fully learn it.
And then that allowed us to really easily then say, okay, this is our core logic.
Let's just only focus on the core logic and ignite will handle all the cosmos logic.
And then after we focused on our core logic, we're like, okay, now let's dive deep into the cosmos land and see what's up.
So yeah.
Cool.
Yeah, I mean, that's something that I've also heard from other people, I guess, is like, okay, the Cosmos SDK
does have a lot of boilerplate code that is necessary to launch a chain,
but the Ignite CLI allows you to forego having to write a lot of that code.
I guess a lot of people now are also maybe going like an even simpler route,
which is building things directly in CosmWasm.
You're, like, you know, even foregoing the Cosmos SDK to some extent,
like, with initiatives like the CosmWasm SDK.
And I think it's only going to get easier to start
up a Cosmos app chain in the future as these tools become more and more, yeah, more mature
and easier to leverage.
For sure.
Yeah, I mean, like, using CosmWasm smart contracts for a lot of your logic is actually really
smart, I would say.
Like Mars is probably the latest project that has done this where basically they took all,
they basically just took a boilerplate Cosmos SDK app chain.
And then they said, okay, cool, this is like all the stuff that we need.
And then they just put it direct into a smart contract, which is really nice.
And, like, yeah, like you said, the CosmWasm SDK that Larry from the Mars team is working on, that's also some really cool stuff.
I think that there's actually like a lot of cool joint stuff that you can do there to kind of optimize everything.
Yeah.
So let's maybe just take a bit of a step back here, because we do want to talk about KYVE.
And before we do so, since we've never actually done an episode with KYVE on Epicenter, although I've interviewed Fabian on the Interop,
and Felix did an interview with you on the Chorus One podcast.
I do think that for our listeners here,
it bears reminding people who don't know about KYVE what it is
and what problem you're solving at a high level.
And then we can dive into some of this other stuff we wanted to talk about,
which is Arweave and Cosmos and some of the use cases.
I think for listeners who really want to dive deeper into KYVE,
probably that won't be the
sort of heart of this episode,
but you should definitely listen to
my episode with Fabian on the Interop
or Felix's episode with you
on the Chorus One podcast.
Those are both great,
great resources if you want to sort of, like, start at the ground
level. But yeah,
go ahead, John. Okay, perfect.
Yeah, so just to clarify, you want me to kind of, like, talk about,
like, what KYVE exactly does?
Yeah, yeah, just at a high level,
sort of explaining what it is, what problem you're solving and the different components.
And we can talk about some of the other aspects we wanted to discuss today.
So, yeah, you can really think of KYVE as building a massive decentralized data lake.
Now that's like a lot of buzzwords.
But basically what it means is we've connected to a lot of major L1s and L2s.
You know, kind of like I mentioned, we've been working since our very beginning with, you know,
like, Polkadot, Solana, NEAR, Cosmos,
Avalanche, like, you name it, a lot of major L1s and L2s.
And what we do is we've actually built this proof of stake network that can connect
to all these different blockchains and then parse information from them.
So this could be as simple as really just block and transaction information.
It could be ledger information like account balances and whatnot.
Or on the more complex side, this could be things like EVM traces, you know, logs from
smart contracts,
And what happens is that this validator set, then independently, each one of the validators goes and connects to the blockchain.
And then they fetch information from it, do some sort of transformation if needed on top of that data.
And then what they do is they use our consensus layer, which is Tendermint, CometBFT-based.
And then they basically say, okay, listen, like, I have this piece of data.
Did you guys also get that same piece of data?
If they did, then what happens is, of course, that data is then validated,
and then it's automatically pushed to a storage network like Arweave.
Basically, what that allows you to have is one single network,
which is KYVE, which has access to a lot of data from a lot of different networks,
that's automatically completely trustless because it's already been validated, and it's stored
forever due to Arweave.
That's really cool because then you can build like a lot of things cross-chain.
Also, it helps a lot, even, like, in kind of, like, the more traditional
aspects, where basically, I mean, Felix, we kind of talked briefly about, like, an
accounting tool, right? Or basically, you know, you can use this on-chain data that not a lot of
nodes or RPC endpoints might necessarily have to then really easily fetch, you know,
data that you need on the accounting side of things. There's a lot of different use cases,
which I think we'll get into later. But yeah, fundamentally, we're building a massive decentralized
data lake. Yeah, I think what's really interesting there, right, is
that you have these sort of two roles,
like the protocol layer and like the validator layer, I think.
And can you maybe, right, you mentioned the storage pools.
Can you sort of walk through a flow of someone,
maybe like actually the entire lifecycle,
actually someone uploading some data and then do they retrieve it
with KYVE again, or do I have to go directly to Arweave?
How would I use this?
Of course.
So we can kind of start at the
base layer. So we actually have two layers in our network. And this is, of course, until vote
extensions are a thing in Cosmos. But basically, for now, we have two layers. So we have kind of
the normal Tendermint Cosmos layer that everyone's kind of familiar with, where you have this
validator set that uses delegated proof of stake. Tendermint kind of manages that all for us,
reaches consensus on the state of our network. That's what we call our consensus layer. The name's
pretty obvious, I would say. And then what's on top of the consensus layer is actually
something we like to call the protocol layer. It's actually a separate validator set.
And this is actually just to kind of keep everything a little bit more efficient, instead of
building everything directly into the Tendermint validators themselves. But the protocol layer really
manages the data part of KYVE. So, like you mentioned, we have something that we like to call storage
or data pools. I always like to give the example that you can actually think of it like
liquidity pools on Osmosis, for example, where each liquidity pool manages a pair of tokens.
Very similarly, a storage pool or data pool on KYVE manages a specific data stream.
So that just keeps it a lot easier, because some data streams can be more computationally expensive or anything like that.
It's kind of like a separation of concerns there, where basically you can have a data pool for Solana data.
You could have a data pool for Uniswap smart contract events, anything like that.
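The pool-per-stream idea described here can be sketched in a few lines of Python. This is purely illustrative, not KYVE's actual data model (the class, field names, and stream identifiers are all hypothetical): the point is simply that each pool is bound to exactly one data stream, the way an Osmosis liquidity pool is bound to one token pair.

```python
from dataclasses import dataclass

# Hypothetical sketch of the pool-per-stream idea; names and fields
# are illustrative, not KYVE's actual on-chain data model.
@dataclass
class DataPool:
    pool_id: int
    stream: str       # e.g. "solana.blocks" or "uniswap.events"
    batch_size: int   # how many items one snapshot covers

# A registry keyed by pool id, analogous to one liquidity pool per pair.
pools = {
    0: DataPool(0, "solana.blocks", 1000),
    1: DataPool(1, "uniswap.events", 500),
}

def pool_for_stream(stream: str) -> DataPool:
    """Look up the single pool responsible for a given data stream."""
    for pool in pools.values():
        if pool.stream == stream:
            return pool
    raise KeyError(stream)
```

One pool per stream keeps expensive streams (say, EVM traces) isolated from cheap ones, which mirrors the "separation of concerns" point above.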
Now kind of zooming in directly into, like, a specific storage pool,
the way that it works is a round-robin approach, very similar to Tendermint, actually, where a validator gets randomly selected.
We call this validator actually the uploader.
So this is normally in other proof of stake networks called like a block proposer, but in this case, we're not actually proposing a block.
We're uploading data. So that's why we call it the uploader.
The uploader's job is basically to kind of propose the piece of data that they think is valid.
So they go off, talk to the blockchain.
Let's just say, for example, Solana, right?
So they go off to Solana and say, hey, listen, Solana, I want blocks 10 to 100, something like that, right?
It's a lot bigger snapshots than that, of course.
And so basically what happens is the uploader fetches a snapshot of blocks from Solana.
And then it proposes it to this validator set on the protocol layer.
And then basically they say, hey, listen, I have blocks, you know, one to 100 of Solana.
Are these blocks correct?
And so then each of the other validators that are not the uploader in this case, they then go off independently, using their own connections to Solana, and validate to make sure that, you know, blocks 1 to 100 are actually correct.
If they do, then, you know, there's a voting process, again, very similar to Tendermint and proof of stake in general.
There's a voting process.
And then if, you know, more than 67% of the validators voted yes, then the piece of data is actually correct.
Once the piece of data is correct, then of course it's uploaded and, like, pushed off to Arweave.
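The upload-and-vote round just described can be sketched as a toy model in Python. This is not KYVE's implementation (the real network runs on Tendermint and weights selection and votes by stake; everything here is an illustrative assumption): one validator is picked as uploader, proposes a snapshot, the others compare it against their own independent fetches, and it passes only with a greater-than-two-thirds supermajority.

```python
import hashlib

def digest(snapshot: bytes) -> str:
    """Hash a data snapshot so validators can compare fetches cheaply."""
    return hashlib.sha256(snapshot).hexdigest()

def select_uploader(validators: list[str], round_number: int) -> str:
    # Toy round-robin selection; the real network's selection is
    # stake-weighted, not a plain rotation.
    return validators[round_number % len(validators)]

def vote_on_snapshot(proposed: bytes,
                     fetched_by_validator: dict[str, bytes]) -> bool:
    """Each validator votes yes iff its own fetch matches the proposal.

    The snapshot is accepted only with a >2/3 supermajority, echoing
    Tendermint-style BFT voting described above.
    """
    proposed_hash = digest(proposed)
    yes = sum(1 for data in fetched_by_validator.values()
              if digest(data) == proposed_hash)
    return yes * 3 > len(fetched_by_validator) * 2
```

With four validators, three matching fetches (9 > 8) is just enough to accept; one honest match out of three is not.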
We're going to be onboarding a lot of other storage solutions soon as well.
So it's not just Arweave that we support.
We kind of have a cool little wrapper around any storage layer.
So you can really easily integrate IPFS or Filecoin or Storj or, you know, Greenfield,
which is, like, the new Binance one, for example.
Yeah, exactly.
That's kind of the life cycle of the data
being validated.
On the retrieval side, we kind of offer a few retrieval products.
Of course, you know, you independently can just go directly kind of talk to KYVE, and
KYVE will say, hey, listen, if you want this piece of data, it's in this
Arweave transaction ID, and then you can go and fetch it from Arweave and do all that
stuff yourself.
But we kind of offer two products right now.
One is kind of a more B2B product, a more enterprise product, where basically we automatically
kind of do all that internal fetching for you. Basically, all you need to do, and it's
powered by Airbyte, by the way, which is actually quite cool, it's a really easy way to kind of connect a
data stream to any Web2 data infrastructure. So this could be MongoDB, it could be something
more complex like a Kafka queue or Snowflake or anything like that. But basically it's a service that we
provide where you just say, hey, listen, this is how I want the data, and it's like an ETL pipeline,
and then we will automatically give the data back to you.
Of course, like a little bit more like on-chain B2C customer product
that we're going to be offering soon is actually our Oracle,
which will be really cool because it's powered by IBC.
Cool. Yeah.
So how do you leverage IBC here?
And, like, I guess, yeah, I'd like to maybe transition a little bit here
into, you know, how Arweave, or sorry, how KYVE connects Cosmos to Arweave.
So yeah, how do you leverage IBC and, you know, maybe describe how KYVE sort of sits in
between Cosmos and Arweave in some way? Yeah, for sure. So we kind of sit actually more like
on top of them, right? Because you have Arweave's mainnet, and then you have, of course,
our consensus layer, and then our protocol layer kind of sits on top of both of them
and then connects to both of them individually. Yeah. So, like, we didn't create
IBC clients between Arweave and our Cosmos chain.
That would actually be incredibly interesting, because Arweave is proof of work.
So it would be fundamentally incredibly interesting to create an IBC client between the two of them.
But anyways, that's, like, for maybe KYVE 2.0 or something crazy like that.
Anyways, so the way that it works right now is, like I kind of mentioned before,
the uploader has the job of, you know, not only kind of pushing the data to Arweave,
but also reaching consensus on that piece of data using our Tendermint
blockchain.
And so it's kind of like a connection between the two of them, where basically the
uploader first uploads that piece of data to Arweave, and then the validators
reach consensus on that piece of data.
And then if that data is correct, then basically the KYVE network will
store, hey, listen, this Arweave transaction hash, or, you know, this IPFS CID, is actually
validated and is correct, and this is the metadata for it.
And that's kind of how we connect to Arweave and do that.
On the IBC side of things, that's kind of more going into, like, our Oracle product,
which will leverage IBC to basically be able to,
you just send a token, like, for payment.
Like, you just send, like, KYVE tokens, or any governance-enabled fee payment method.
And then you basically just do an IBC transfer, and then, utilizing the memo field of that IBC transfer,
you can basically initiate a query
for any data that KYVE has archived.
So you could basically say, hey, listen, I'm going to pay, you know, one KYVE.
Again, not the real numbers here, but, like, I'm just going to pay one KYVE.
And then I would like to fetch, you know, Solana blocks one to a thousand, for example.
And then what would then happen is that would be sent by IBC.
And then, of course, KYVE has that piece of data.
So then, using Interchain Queries, it would just send the response back.
And then you could now have completely trustless Solana blocks on
your Cosmos chain, and then you can do whatever you want with that.
But yeah, so that's kind of how we leverage IBC, and then also that's how we kind of connect to
Arweave as well.
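A rough sketch in Python of what such an IBC-transfer-plus-memo query could look like. The memo schema and every field name here are hypothetical, not KYVE's actual Oracle format; the point is just that the fee payment and the data query ride in the same ICS-20 transfer packet, with the query encoded in the memo.

```python
import json

# Hypothetical memo format for an Oracle request, as described above.
# Field names ("kyve_query", "pool_id", "from", "to") are illustrative.
def build_query_memo(pool_id: int, from_height: int, to_height: int) -> str:
    return json.dumps({
        "kyve_query": {
            "pool_id": pool_id,    # which data pool to read from
            "from": from_height,   # e.g. Solana block 1
            "to": to_height,       # e.g. Solana block 1000
        }
    })

# The memo travels alongside the fee in the ICS-20 transfer packet.
# Denom and amount are illustrative, not real pricing.
packet = {
    "denom": "ukyve",
    "amount": "1000000",
    "memo": build_query_memo(0, 1, 1000),
}
```

On the receiving side, the chain would parse the memo, check the payment, and answer via Interchain Queries, as the episode describes.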
Can you maybe expand a bit in terms of you said basically the proposer uploads the data and then
the validators have to sort of validate if it's correct?
Does that mean like all the validators have to sort of download that state of that chain
or that program or whatever it is?
So yes, like at the end of the day, it's really up to the validator to decide how they actually connect to the network itself, right?
Because fundamentally, if they receive invalid data, then of course they're going to be proposing with like invalid data and then of course they're subject to a slashing and like it's a major slashing.
And so it's kind of we've left it up to the validators.
So basically, like, we expect a certain RPC endpoint for certain different networks.
But fundamentally, how the validators actually connect to that is up to them.
So, you know, a lot of the times, and this is what we encourage, of course, is, you know, running your own node in that network, right?
Just for, like, optimum security.
But of course, you know, if you want to go for, like, the more hosted Coinbase Cloud or Blockdaemon approach, like, that's also okay.
Like, we don't like put like an exact architecture requirement on it.
But at the end of the day, it's really up to the validator to decide, like, what risk they're willing to take here.
The important thing, though, is, like, we make sure to double-check where the data is from.
And so, like, if everyone's just using, like, Infura, for example, as an RPC endpoint, then of course that's going to be problematic.
And so we have checks in place to make sure that it is from many different data sources.
But yeah, of course, like how the validator actually connects to the network.
They can either run their own node, or they can use an RPC provider, something like that.
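The data-source diversity check mentioned above could look something like this in Python. The threshold and structure are illustrative assumptions, not KYVE's actual rule: the idea is simply that if too many validators report the same upstream provider, say everyone on Infura, one faulty provider could masquerade as consensus.

```python
from collections import Counter

# Illustrative sketch: no single upstream data source may serve more
# than max_share of the validator set. The 0.5 default is an assumption,
# not KYVE's real parameter.
def sources_diverse(provider_by_validator: dict[str, str],
                    max_share: float = 0.5) -> bool:
    counts = Counter(provider_by_validator.values())
    total = len(provider_by_validator)
    return all(count / total <= max_share for count in counts.values())
```

A set where two of four validators use Infura just passes a 50% cap; three of four on the same provider fails, flagging exactly the correlated-failure risk described in the episode.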
Could you leverage, and, you know, have you thought about
encouraging folks to leverage a decentralized RPC provider, like Lava, for instance?
I think there's some other ones, like Gateway.
Lava, Gateway, Pokt, for example.
Pocket.
I mean, oh, sorry, yeah, Pokt.
I just abbreviate it.
No, like, yeah, I think, of course, there's definitely some, like, cool collaborations to be done there.
Actually, like, Lava is definitely very interesting, because they are also based in Cosmos.
So I wonder if there could actually be, like, a little bit more synergy
there, because of course they're based in Cosmos, so IBC kind of connects us all, right?
So, like, we haven't directly thought about it, to be honest.
It was just like we just kind of settled on this architecture for now.
We wanted to get mainnet out as soon as possible.
But of course, going forward, this is definitely something that we want to look into, right?
Because it's like if we can even decentralize the RPC side of this, it would be a perfect world.
Right.
And so, like, yeah, we've done everything that we can to decentralize on our side.
But of course, further collaborations, definitely, we're open
to that. Okay, cool. I want to talk about Arweave a little bit. I have a close friend who you might know,
who's, they're building Akord. Yeah, Akord. And, you know, he shills Arweave to me all the
time, and I kind of get it, but I still, like, also kind of don't get it, or I don't really get,
you know, how, economically, how it works, like, how the economics work,
how the smart contracts work.
And so, yeah, I'd like to,
I'd like you to maybe settle once and for all for me
how Arweave's smart contracts work.
Because they're fundamentally different from, you know,
another form of smart contract.
And what I've understood is that Arweave smart contracts
are executed off-chain and settled on-chain.
And so I've got about that far.
But yeah, it just sort of breaks my brain.
Yeah, so Arweave smart contracts are definitely something interesting to wrap your head around.
And I, to be honest, I don't exactly call them smart contracts because, and we can get into
why later.
But let me first kind of touch briefly on, like, what Arweave is and how it works.
So it's a proof of work blockchain, been around since 2018.
Lots of, you know, projects use it.
It's really big in the NFT space, especially because, you know, if you don't pay for your
IPFS pinning, well, sorry, everything's gone, right?
So there was massive adoption.
You know, they partnered with Meta, for example.
They're also partnered, more on the on-chain side of things, with,
oh man, I'm blanking now, just as I was about to say it.
But the main Solana NFT marketplace, stuff like that.
Metaplex. There we go.
And so basically the way that it works is it's not actually a blockchain, it's a blockweave,
which is really cool. What happens is that for every single piece of data that is
mined into Arweave, there's basically a pointer back to a previous data
item that has previously been uploaded to Arweave. So basically, every single block points to a
previous block, which is why it's a blockweave, which is actually quite cool, because that
ensures that there's always some access to some historical piece of data on Arweave, right?
So if you are the current miner of a block in Arweave, and you don't have access to the randomly selected piece of data, then you can't produce the block, and it moves on to the next person, right?
So it basically encourages the entire network to have as much of the data as possible.
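The "recall" mechanic he describes can be sketched in a few lines. This is only a toy illustration of the incentive, not Arweave's actual mining algorithm (the real proof-of-access scheme is considerably more involved), and every name in it is made up for the example:

```python
import hashlib


def can_mine(stored_chunks: dict, recall_index: int) -> bool:
    """A miner may only produce a block if it actually holds the
    randomly selected ('recall') piece of historical data."""
    return recall_index in stored_chunks


def try_produce_block(stored_chunks: dict, total_chunks: int,
                      prev_block_hash: bytes):
    # The recall index is derived deterministically from prior chain
    # state, so miners cannot predict which piece of history they need;
    # storing more of the weave raises their chance of being eligible.
    seed = int.from_bytes(hashlib.sha256(prev_block_hash).digest(), "big")
    recall_index = seed % total_chunks
    if not can_mine(stored_chunks, recall_index):
        return None  # can't prove access; the next miner gets a chance
    # Fold the recall chunk into the block so the proof of access is checkable.
    body = prev_block_hash + stored_chunks[recall_index]
    return hashlib.sha256(body).hexdigest()
```

A miner holding the full history can always produce a block; one holding nothing never can, which is the storage incentive in miniature.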
The way that the incentive model actually works is quite interesting.
So, you know, it's proof of work, so there's just a fundamental block payout per block.
I don't exactly remember what that is off the top of my head.
I think that there's a halving mechanism in place,
etc. But what's really cool is the transaction fees are calculated with this really cool equation
which uses, and I put this in quotes because it's not really Moore's Law, but a similar concept
to Moore's Law, where basically it calculates how much it's going to cost to store this piece of
data. So let's say that you're archiving one gigabyte, right? The
network basically calculates how much it's going to cost for the next 200 years to
store one gigabyte.
And then basically what happens is that at that 200-year mark,
there's a slow decay curve,
which basically says that, over time,
SSDs and HDDs should be getting better and better and better,
and there should be a significant cost reduction, where basically after 200 years or so,
give or take, that data should be fundamentally free to store.
And so basically what happens is, on Arweave,
you pay for 200 years of storage up front, and that's your transaction fee, the
cost to store your piece of data. And then what happens is that actually doesn't get paid out to
the validators, or sorry, not validators, the miners in the network; it actually gets put into
a block reward pool, I forget the exact name for it. But what's cool about that is then basically
you have something almost like a community pool in Cosmos, right? You just have this treasury of an
insane amount of AR that's just sitting there, locked.
And what's cool is the network automatically calculates, given the current level of data on the
network, whether the normal proof-of-work block
payout is actually enough incentive for the miners to cover the cost
of storing all the data.
And if it's not covering all the costs, then it automatically does a distribution from that
massive treasury, which is
really cool, because then over time, you know, let's say for example one year storage costs
go up by a lot, then automatically that would incentivize the miners still
in the network, or if it gets cheaper, then vice versa, right? So that's fundamentally
how Arweave works in a nutshell, and I can clarify anything else if that was too complicated.
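The pricing intuition here, pay once up front for a long horizon of declining storage costs, can be illustrated with a toy geometric-decay calculation. The function, the decay rate, and the numbers are illustrative only, not Arweave's actual fee formula:

```python
def upfront_storage_fee(size_gb: float, cost_per_gb_year: float,
                        annual_decline: float = 0.005, years: int = 200) -> float:
    """Sum the cost of storing `size_gb` for `years` years, with the
    per-gigabyte yearly cost shrinking by `annual_decline` each year.

    Because the cost decays geometrically, the total stays bounded even
    over a very long horizon, which is the 'pay once, store roughly
    forever' intuition behind the endowment model.
    """
    total = 0.0
    yearly_cost = cost_per_gb_year
    for _ in range(years):
        total += size_gb * yearly_cost
        yearly_cost *= (1.0 - annual_decline)
    return total
```

With any positive decline rate the sum converges (a geometric series), so extending the horizon past 200 years adds almost nothing to the fee, which is the "fundamentally free after 200 years" point made above.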
Okay, smart contracts though, and I don't know what to call them, right? So SmartWeave is the
official name for it. Like you mentioned, there's
no execution on chain. So in, you know, the case of Solana, you know,
CosmWasm even, and the EVM, basically every single normal smart contract
platform, the way that it works is you deploy the code on chain and then you
interact with the code with a transaction, and that transaction has a result that changes the
state of that code, and all of that happens on the chain. The way that SmartWeave works is fundamentally
different to that. What happens is you deploy the code, and so you basically have this, you know,
contract ID, if you will. And then what you do is you interact with it by just sending transactions.
But those transactions are literally just normal Arweave transactions. There's nothing special
about them except for the tags in place, so you can then tell that it's a SmartWeave
transaction. But if you ever want to access the state of the smart contract, it's not stored on chain.
What needs to happen is, off chain, you then need to basically create this execution environment
where you fetch the code and then, from genesis, you need to replay all the transactions.
This is actually one of the reasons why we switched away from SmartWeave, because, of course, you know,
having to maintain almost like a caching layer off chain for your smart contract is actually quite intensive
and completely centralized.
There are definitely some things going on now in Arweave that are trying to solve this problem, making it a little bit more decentralized.
But fundamentally, if you have off-chain computation of state, there's not a whole lot that you can do there, which is kind of the main reason why we switched away from that and went to an EVM-based smart contract.
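The lazy-evaluation model he describes, fetch the contract code, then replay every interaction transaction in order from genesis to reconstruct state off chain, looks roughly like this. The counter contract and all names are a made-up toy, not SmartWeave's real SDK:

```python
def apply_interaction(state: dict, tx: dict) -> dict:
    """Toy contract logic: a counter that supports an 'increment' input.
    Real SmartWeave contracts ship this logic as code stored on Arweave."""
    if tx.get("input", {}).get("function") == "increment":
        state = {**state, "count": state.get("count", 0) + 1}
    return state  # unknown inputs leave state unchanged


def evaluate_state(initial_state: dict, interactions: list) -> dict:
    """SmartWeave-style lazy evaluation: the chain never stores state.
    A client fetches the contract's interaction transactions (identified
    by their tags) and replays them all, in order, from genesis."""
    state = dict(initial_state)
    for tx in interactions:
        state = apply_interaction(state, tx)
    return state
```

The cost he mentions is visible here: every reader either replays the full history or trusts someone's off-chain cache of the result.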
Again, that's not throwing any shade on SmartWeave. That's just explaining how it works, which is just off-chain
execution. Yeah, I think fundamentally Arweave is a really great layer zero, where basically you can
just throw any piece of data that you want onto Arweave, and it handles that really well. I think
smart contracts need a little bit more time to grow into a fully
developed ecosystem. Right. Makes sense. So you're basically saying, right, that Arweave is still
very useful at this data storage layer, but essentially with SmartWeave, it's sort of trying to
also compete, I guess, on a smart contract level, but that might, like...
Exactly. Yeah, like, for me,
Arweave might be trying to do a little bit too much with the smart contracts.
Right. They've done a really,
really, really good job with the data side of this, right?
And that's why I said they're a great layer zero.
You can really throw whatever data you want at Arweave, and it's like a perfect economic model.
That's great. But yeah, fundamentally,
the smart contracts on top of it,
treating it more like a layer one, layer two model, that's not quite there yet.
But yeah.
When someone described Arweave smart contracts to me, you know,
we were talking earlier about our previous enterprise blockchain building days,
it sounded very similar to something we were trying to build back then, which is, you know,
this sort of off-chain execution, on-chain settlement.
But there's not even really settlement
with SmartWeave. That's the thing.
Yeah, or, whatever you want to call it, on-chain notarization, I guess.
Yeah. And then the other aspect about Arweave that I still have a hard time wrapping my head around is the economics of, you know, you were talking about this on-chain storage.
And yeah, sure, I guess we can assume, and we should assume, that storage costs go down over time.
I think that's, you know, a safe assumption to make.
There is one thing about Arweave that I find a little bit weird, which is this inability to delete things.
Yeah, I can kind of step in there.
Fundamentally, that was the first thing that I thought about when I thought of Arweave.
Like, yeah, this is great if there are only good people in the world,
but there are also lots of bad people in the world too.
This is obviously something that could go terribly wrong.
So the thing is that, actually, built into each Arweave node
there's kind of a blacklist that you can put in, or a block list now, there has been a rename.
There's a block list you can put in where basically, for pieces of data or content that you know is, in fact,
vulnerable or malicious or, you know,
illegal, treading lightly here, what you can do inside your Arweave node is just choose not
to store that piece of data.
And what's actually really cool is that if more than 51% of the network has a specific
transaction blocked, then, well, actually, it's dropped from consensus.
Assuming it's not encrypted.
Exactly.
Assuming that it's not encrypted, right?
And so there is a way to delete data, right, where basically the majority of the network
does not store the piece of data, right?
But yeah, of course, that does imply that it can't be encrypted,
sorry, that it has to be decryptable, and a lot of things have to apply there.
I think they're actually trying to spin up a DAO to manage this a little bit.
Yeah, I was also wondering, is it actually clear who runs the Arweave network?
Can you trace back who runs how many nodes and sort of see
how much control there is from certain entities, or something like that?
Like, I guess, how you have voting power in proof of stake, sort of.
To be honest, I don't actually know.
So you can check who the peers are,
but that's more like the active peers in the network, right?
And maybe this is just not a problem,
but it's definitely a design limitation of proof of work:
there's basically no way of knowing who's actually in the validator set
right now.
So I actually don't know.
Of course, you can check who is the miner of every single block.
It would take a long time, but you can.
And then you can kind of go back and see what data has been produced where.
But yeah, in proof of work, there is really no way of checking who's in the current set, if you will, which actually does suck a little bit, because that would also make security a little bit better too.
Right. I think it's one of the core benefits of proof of stake, right, that you have slashing, that you can go back and destroy the stake of someone, versus proof of work, where you can't burn down the server farm.
I've heard that comparison about Bitcoin,
and I guess it also applies to Arweave to a certain degree here.
But yeah, that's a nice excursion.
I guess we can take it back a bit. I mean,
you already mentioned, right, that KYVE can use other data storage solutions.
So that's, I think, also very cool, right?
It's very abstracted there.
And I think that's very useful.
But maybe we can also talk about, you know,
I saw on the website something on the roadmap, KYVE version 2.
Can you maybe expand a bit?
What else is coming there?
Of course.
So I guess, yeah, the main thing involved there is actually the Oracle.
There's IBC-enabled data querying.
With that, quite a lot of stuff actually comes with it on the optimization of the protocol
layer.
So, you know, like I mentioned at the very beginning, we have two layers.
Once vote extensions are released, we're definitely going to look into combining those two layers, because right now there is a little bit of a fight between the two validator sets, right?
Because it's like, whichever validator set has the biggest APY, right?
Talking about delegators purely for the sake of earning rewards, right?
Of course, there's going to be a little bit of a stake fight going on there, if you will.
And so, of course, you know, we're looking into solutions like superfluid staking or whatnot.
But long term, of course, combining those two validator sets is definitely something on the
roadmap for sure. And that's kind of fundamentally what goes along with KYVE 2.0. Another thing as well
is that right now, the Oracle is only allowed to query what we store directly on our chain,
right, which is more of a summary of what's stored on Arweave. So let's say, for example, that
we've, you know, done a massive snapshot of 10,000 blocks of Cosmos, right? What happens
is that all of that data is stored on Arweave,
but we only store the headers or the
hashes of each block on our chain, right?
And that's strictly because we're not built
like Arweave; we can't handle a lot of data, right?
And so basically what the Oracle can currently query for right now,
and this is going to be released soon,
is just the headers and whatnot of the data.
But long term, we really want to make sure
that what will happen is you can just do an Oracle request,
and then our network handles all of the going off and talking to Arweave, indexing the data, and giving you a really nice response.
That's also kind of tied into KYVE 2.0 there.
Of course, there's a lot of other stuff that I probably can't talk about, but that's what I'm really excited about for sure.
And, yeah, it's actually mostly tied around bringing the querying process on chain with the Oracle.
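The header-anchoring pattern he outlines, full bundles on Arweave with only a hash on the KYVE chain, means a client can verify what it fetches. A minimal sketch, assuming a plain SHA-256 anchor (the hashing scheme KYVE actually uses may differ, and these function names are made up):

```python
import hashlib


def anchor_for(bundle: bytes) -> str:
    """What the validating chain would store on-chain: just a digest
    of the bundle, not the data itself."""
    return hashlib.sha256(bundle).hexdigest()


def verify_fetched_bundle(bundle: bytes, on_chain_anchor: str) -> bool:
    """Client-side check: the full data, fetched from the storage layer,
    must hash to the anchor that reached consensus on the chain."""
    return anchor_for(bundle) == on_chain_anchor
```

This split keeps the chain's own storage footprint small while still letting anyone detect a tampered or wrong bundle served by the storage layer.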
Right. Maybe, I guess we sort of went into it a bit, but I'm wondering,
what if there is a mismatch in the data provided, or there's no consensus,
and then there is a slashing? Is there some dispute mechanism over whether
it was actually the right data, or how do you settle that? Yeah, so the way that
it works is you have the uploader, who is, of course,
proposing a chunk of data.
Okay, now this is getting into how the proof of stake works, which is basically each validator will then say, okay, is that correct?
If more than 51% of the validators say yes, this is the correct piece of data, then that data goes through.
What happens, which is actually quite interesting, is all the validators that didn't say that data is correct, we basically
put them on a watch list, if you will.
Basically, we give them a point.
And after a certain number of points, they actually do get slashed.
So if those validators that are voting in the minority
keep voting in the minority for a while,
then there will be a slashing event there.
But there's not really a dispute mechanism after the fact,
just because we want instant finality to go along with Tendermint.
Basically, a piece of data, while it's reaching consensus,
is still undecided, but once consensus is reached, it's finalized, and that's when you know
the data is valid or invalid. So there's not a dispute mechanism in place, but we do
have mechanisms where basically the majority is what actually happens, and that's what
reaches consensus. And then we always keep our eye on the minority to make sure
that that doesn't get out of hand or out of control. Right. That's cool. So it's sort of like
a multi-round game that you're playing in seconds. Okay. Yeah.
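That multi-round game can be sketched as a simple tally. This is only an illustration of the point-then-slash idea described above; the actual threshold, whether points reset when a validator rejoins the majority, and all names here are assumptions, not KYVE's real parameters:

```python
def tally_round(votes: dict, points: dict, max_points: int = 5):
    """One consensus round over an uploaded data chunk.

    votes:  validator name -> True/False on whether the data is valid.
    points: running minority-vote counts (mutated in place).

    A simple majority (>50%) decides the round. Validators voting with
    the majority reset their points (an assumption of this sketch);
    persistent minority voters accumulate points and are slashed once
    they reach `max_points`.
    """
    yes = sum(1 for v in votes.values() if v)
    accepted = yes * 2 > len(votes)
    slashed = []
    for validator, vote in votes.items():
        if vote == accepted:
            points[validator] = 0  # back with the majority
        else:
            points[validator] = points.get(validator, 0) + 1
            if points[validator] >= max_points:
                slashed.append(validator)
    return accepted, slashed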
Cool.
So let's get into use cases here.
I mean, I think there's a lot of use cases,
and we already touched a little bit on some.
But yeah, maybe describe, you know,
which are the ones that you're most excited about.
And maybe also which are the least obvious ones
that people should be thinking about or, you know, yeah.
For sure.
Let's do it.
So yeah, so I mean, I've kind of already touched
on like our ETL pipeline, which is like kind of the more, you know, enterprise focus
solution that we provide that this is going to be more for like, you know, indexing partners
and, you know, accounting tools and whatnot, people that really want access to like large
amounts of historical information. Of course, you have the Oracle, which is like more on chain
and that, you know, can be, you know, I mean, on chain oracle information is like powerful
and like smart contracts down to like the core layer of blockchains too. Kind of the unexpected use
case, because that kind of ties into the Oracle a little bit, is we're actually partnered with
Say, so like Say Network and Cosmos. And it's actually quite interesting because they want to use
us to fetch like weather information and like sporting information. And okay, for people that
don't know what Say is, it's basically trying to create a market around anything. And so what
we've done at Kive is we've made ourselves so general that we don't actually need to store like
like we don't only need to store blockchain data we can also store web 2 data as long as there's a way of
validating it so basically meaning it has to be deterministic data right it can't be changing every
other minute um so for example like weather is a good example like okay like a weather not
predictable weather but like you know historical weather you know and stuff like that um that of course
you know, we can handle. And what say
wants to use us for is basically they want to
fetch information like that, kind of
a little bit information that you wouldn't
have actually expected. And then
they want to create a betting market around that,
which is, you know, cool, right?
Like definitely something I wouldn't have expected,
but, you know, it's cool nonetheless.
Of course, more of the
expected use cases that I'm quite excited
about is, you know, we're working with quite a few
validators, like Chorus One, for example.
And what we're doing right
now is we're coming up with a really clean way of syncing a Tendermint node, and not
just Tendermint nodes, but nodes from other networks as well, syncing a Tendermint
node using KYVE's data. Which is really cool, because right now, if you want to sync a node
in proof of stake, you have to either use a trusted snapshot, where you kind of have to just trust
that whoever gave that snapshot to you actually got it right, or you can sync from genesis, which is
a multi-day to multi-week process, depending on how old the network is.
And so, yeah, because we already have all this data stored historically,
it's really easy to just connect a Tendermint node directly to KYVE and then sync
directly from there.
And not just Tendermint, but also NEAR and other networks as well.
That's the use case I'm most excited about, because that's the initial use
case that I started KYVE with, basically this exact thing.
And so seeing it come to fruition now is really exciting.
Yeah.
So I wonder if this addresses another issue beyond the state sync use case.
When I talked to Babylon, they also kind of blew my mind and pointed to the fact that in a lot of Cosmos chains,
there has to be this kind of canonical, you know, checkpoint, where
validators have to agree on the state that everybody agrees is valid. And somewhere in some Cosmos documentation, I don't remember where, it says that people should agree on this state on off-chain channels like Twitter or WhatsApp.
Social consensus. Yeah, yeah. So this is social consensus. And with Babylon, what they
allow is basically to, you know, create a hash of the state at some point and then
notarize that on the Bitcoin blockchain, or sort of commit it to the Bitcoin blockchain.
And the way they describe it is that it would eliminate this social consensus
need, but also potentially eliminate the need for long unbonding periods for staking,
basically, you know, for delegators to unstake, because we
know that there's social consensus around the state being valid up until this point.
Does KYVE also address the same problem, and could it also reduce unbonding periods for staking in the same way that, say, Babylon is, you know, sort of trying to add as a feature?
Interesting, actually.
Yeah, because I've talked to the Babylon team many times, and exactly the same thing,
you know, social consensus. This is why you should never, ever, ever have a voting period
in Cosmos longer than the unbonding period, because then massive problems happen. Same thing
with why there should be a clear ratio between IBC trust periods and the unbonding period.
There are a lot of numbers that you should really not play around with that much in Cosmos.
And it's, yeah, it's hidden somewhere in the documentation that it's like, yeah, worst case,
just reach social consensus, right?
Which is like, cool guys, but no.
And so, yeah, I don't actually know.
I never really thought about it in that way.
It depends, because, right,
we are technically anchoring data of these chains on Arweave,
but it kind of boils down fundamentally to the market cap, or the security level, of the anchor point,
right? So Babylon's value add there is basically Bitcoin. Well, it's Bitcoin, right?
It's the biggest market cap in crypto, period, right? So although, yes, we are an anchor, I think fundamentally,
anchoring on Arweave, Arweave has a smaller market cap than ETH, or like Osmosis, for example, or something like
that, right? So yes, although we could definitely be an anchor, it's kind of missing that little
piece there, which is just, you know,
the actual market cap comparison is not quite there.
What could be interesting, though, is creating a storage backend for KYVE where basically we go off and talk to Bitcoin.
That could be something cool.
Right.
Basically, you know, again, we don't want to make competitors here, just to be clear.
But just thinking out loud, I think that there would actually be a way that we could, instead of going off and talking to Arweave, like I mentioned, we can talk to a lot of different layers,
go off and talk to Bitcoin.
And then we could be an anchor on Bitcoin, similar to Babylon.
Interesting use case.
Haven't thought about it yet, but very cool.
Right.
I guess the difference is also, with Babylon, you would probably only see if your state is correct, right?
You just check the header, whether it matches, but you can't actually get the data from the Bitcoin network, right?
Someone still has to send it to you.
What we would do is we would first validate
the entire state of the blockchain.
We could then push the headers to Bitcoin
and still store the rest of the blockchain,
the rest of the full data, on Arweave.
We could kind of do a mix.
That would actually be really cool then, right?
Because you have the anchoring and security side of things on Bitcoin,
but then you can still access the validated data on Arweave.
Very cool.
Right.
Use case.
Great.
Exactly.
Yeah.
We figured it
out here on Epicenter.
Perfect.
Any other use cases that you're particularly interested in or bullish on or would like to
see people build using Kive?
Not off the top of my head.
I'm sure that there are.
I'm sure that I'm missing some for sure.
I think that like the use cases as I mentioned for sure are definitely kind of the main
ones that we're really excited about.
Of course, you know, we're starting a grants program soon.
So hey, if you have a cool idea, send us a message.
Go build it.
Yeah.
Exactly.
All right.
Well, let's talk about the roadmap a little bit.
And, you know, that grants program is something maybe you can touch on a little bit.
But yeah, what are the plans for Kive 2.0?
And also, I don't think you guys have a listed token yet.
Exactly.
Right.
So like a more imminent roadmap is right now, of course, actually if you go right now and you look at our network,
we actually only have the consensus layer live.
And we just did this to basically reach enough security on the consensus layer,
making sure that it's actually very stable before then launching our protocol layer.
So right now we're not validating any data in Mainnet.
Of course, in both of our test nets, we're actively validating data.
But on the mainnet, we're not validating data just yet.
So, like, first thing on our roadmap is kind of launching the protocol layer with our first storage pool.
Can't say which one just yet, but we'll be very exciting.
That should be coming out definitely in the next month, for sure.
So that's like the first thing on the roadmap.
Second thing, of course, is also the listing on Dex's and Sexes.
So that's going to be very exciting.
Yeah, going more long term,
of course, definitely getting our Oracle product out.
That's going to be super exciting.
We're definitely hoping to launch an Oracle MVP as soon as possible.
Like you mentioned, then, KYVE 2.0, maybe a little bit of a revamp there,
like I talked about earlier, with combining the two layers together, stuff like that.
We just kind of have to see where we're at at that point.
That's the roadmap for now, at least.
And then, of course, alongside all that, the grants and bug bounty programs
are stuff that we're actively talking about now inside the foundation.
And so, yeah, very excited to see that all come to fruition as soon as possible,
because, yeah, giving back to the community, doing a lot of sub-DAO initiatives,
and then also seeing the community build lots of use cases that we
might have never seen before. Yeah, that's the roadmap.
Right. Thanks so much, John. I think, yeah, we learned a lot about KYVE, and especially also
Arweave, today. And I think, yeah, very exciting use cases you're tackling. You can see already
that many, many areas are actually touched by this, even just in the blockchain
data world, but I'm sure there's a lot more if you step beyond that, like the Sei use
case. I think there are going to be some really exciting things if this all works the way I think
it does. So yeah, super excited. And maybe before we wrap up, I guess a final question could be,
you know, where do people find out more about KYVE, and maybe, about the grants program, can you
share a little bit about how people can get in touch with you and get involved? Of course. So for now,
there's not a lot of stuff around the program just yet, because we're still working out all the
internal details and whatnot. But of course, you know, the easiest way is just, you know, our
email is on our website; the website is kyve.network. You can reach us there via email,
Telegram, Discord, pretty much everything, to be honest, we have there. We're launching the forum
soon, actually, and that could be another way that you can reach out to us. Yeah, sorry,
another thing on the roadmap, the forum,
which, of course, we need.
We're going to be doing a lot of stuff around
our governance process and stuff like that.
That's probably the best way to reach out to us:
just going to the website
and using your favorite social network
to get a hold of us.
Great. Well, thanks so much for coming on.
It's been a fascinating conversation.
Thank you so much.
Thank you for joining us on this week's episode.
We release new episodes every week.
You can find and subscribe to the show on iTunes,
Spotify, YouTube, SoundCloud,
or wherever you listen to podcasts.
And if you have a Google home or Alexa device, you can tell it to listen to the latest episode of the Epicenter podcast.
Go to epicenter.tv slash subscribe for a full list of places where you can watch and listen.
And while you're there, be sure to sign up for the newsletter, so you get new episodes in your inbox as they're released.
If you want to interact with us, guests or other podcast listeners, you can follow us on Twitter.
And please leave us a review on iTunes.
It helps people find the show, and we're always happy to read them.
So thanks so much, and we look forward to being back next week.
