Epicenter - Learn about Crypto, Blockchain, Ethereum, Bitcoin and Distributed Technologies - Gil Binder & Yair Cleper: Lava Network – Decentralising RPC and Node Providers
Episode Date: February 23, 2024

The monolithic blockchain era appears to be sunsetting. Among the first to contribute to this paradigm shift was Cosmos, which introduced the idea of specialized sovereign blockchains (appchains), made possible by the Cosmos SDK. Nowadays, the modular thesis employs external data availability and even execution solutions, which enables the creation of countless new chains. Each new blockchain comes with its own 'specs', and centralised RPC and node providers have to adapt to each chain's setting. However, the saying 'Jack of all trades, master of none' applies in this scenario too - we have often witnessed RPC failures during high demand periods. Lava Network aims to provide a competitive marketplace for RPC and node providers, which would compete based on their performance and user feedback. We were joined by Gil and Yair to discuss Lava Network's RPC decentralisation in the modular, multi-chain landscape.

Topics covered in this episode:
- High-level overview of Lava Network
- What problem Lava Network solves
- Decentralised marketplace for RPC providers
- Quality of centralised vs. decentralised RPC providers
- Gateways
- Lava Network participants
- Specs & APIs
- Quality of service & provider optimizer
- How services are priced
- Relayers
- Modularity & chain abstraction
- Magma & Lava mainnet

Episode links:
- Yair Cleper on LinkedIn
- Gil Binder on Twitter
- Lava Network on Twitter

Sponsors:
- Gnosis: Gnosis builds decentralized infrastructure for the Ethereum ecosystem, since 2015. This year marks the launch of Gnosis Pay - the world's first Decentralized Payment Network. Get started today at gnosis.io
- Chorus One: Chorus One is one of the largest node operators worldwide, supporting more than 100,000 delegators across 45 networks. The recently launched OPUS allows staking up to 8,000 ETH in a single transaction. Enjoy the highest yields and institutional grade security at chorus.one

This episode is hosted by Sebastien Couture & Brian Fabian Crain. Show notes and listening options: epicenter.tv/536
Transcript
This episode is brought to you by Gnosis.
Gnosis builds decentralized infrastructure for the Ethereum ecosystem.
With a rich history dating back to 2015 and products like Safe, CowSwap, or Gnosis Chain,
Gnosis combines needs-driven development with deep technical expertise.
This year marks the launch of Gnosis Pay, the world's first decentralized payment network.
With the Gnosis card, you can spend self-custody crypto at any
Visa-accepting merchant around the world.
If you're an individual looking to live more on-chain
or a business looking to white-label the stack,
visit gnosispay.com.
There are lots of ways you can join the Gnosis journey.
Drop into the Gnosis DAO governance forum,
become a Gnosis validator with a single GNO token and low-cost hardware,
or deploy your product on the EVM-compatible and highly decentralized Gnosis Chain.
Get started today at
Gnosis.io.
Chorus One is one of the biggest node operators globally and helps you stake your tokens on
45-plus networks like Ethereum, Cosmos, Celestia and dYdX.
More than 100,000 delegators stake with Chorus One, including institutions like BitGo and Ledger.
Staking with Chorus One not only gets you the highest yields, but also the most robust security
practices and infrastructure that are usually exclusive to institutions.
You can stake directly to Chorus One's public nodes from your wallet, set up a white-label node,
or use the recently launched product, OPUS, to stake up to 8,000 ETH in a single transaction.
You can even offer high-yield staking to your own customers using their API.
Your assets always remain in your custody, so you can have complete peace of mind.
Start staking today at chorus.one.
Welcome to Epicenter, the show which talks about the technologies and projects
driving decentralization and the blockchain revolution. I'm Sebastien Couture, and I'm here with
Brian Fabian Crain. Today we're speaking with Gil Binder and Yair Cleper, co-founders and respectively CTO
and CEO at Lava Network. Lava is an innovative project that aims to decentralize data
provisioning for blockchains. I like to call it Uber for blockchain data. And so we're going to be
diving deep into lava today and understanding how it works, the different participants of the
network, and what it means for decentralized application development. Hey, guys, thanks for joining.
Hey, how are you? Super excited to be here. I'm trying to think when was my last podcast with Gil.
I can't remember. Pleasure to be here, guys. Cool. Yeah, thanks so much for joining
us. Can you guys just walk us through it? Like, what is the high-level vision of Lava? What is the
problem that you're trying to solve?
Yeah, definitely.
I think, you know, every great solution that started in the world in general, and especially
in blockchain, had some thesis behind it.
And we jumped into Lava about two and a half years ago, almost three years.
Gil was developing MEV and NFT sniping bots, and, you know, he dragged me in, down
the rabbit hole,
and we explored the gap that there is around the RPC service,
which basically was rubbish and super unreliable.
So we thought that, you know,
most of Web3 runs on this core data,
and at the end of the day,
there are only a few infrastructure providers there.
So this is how Lava began.
It began exactly how Seb mentioned before,
as the Uber for nodes.
So building a resilient, scalable and decentralized RPC network
that every developer can plug into and get service.
Today, you can get this service from more than 300 providers,
and it's super reliable and super scalable.
But that was two years ago.
And there was maybe a handful of different chains back then.
And since then, Web3 has evolved.
We've seen many different shaping events,
like the Luna crash, and today we have Bitcoin ETFs.
We didn't think about that two years ago.
So it meant to us that the problems that developers had back then are much
bigger today.
It's not only the larger scale, but today they spread across hundreds of different
blockchains.
So obviously the key innovation here that enabled that is what we call the modular blockchain.
And modular blockchains were very interesting to me.
Back then, two years ago, I think Mustafa mentioned it in his white paper,
LazyLedger.
He called it pluggable, maybe a less sexy name.
But today everyone talks about modularity and chain abstraction.
And obviously, the main thesis for now is building hundreds and thousands of blockchains.
And this is what modularity will create over the very near future.
And we've seen that there are three main layers to a modular blockchain.
You have the execution layer, you have the data availability and consensus layer,
and you have the settlement layer.
But as we see more and more blockchains, while Celestia, for example, made it 100x
easier to launch a rollup, we think that there is a convergence in what every rollup,
every L1, every layer that's going to come up,
every blockchain that's going to come up, will need.
And this is the data access layer.
It started with RPC and indexing.
You know, for those here who don't know what RPC is,
it's basically a communication protocol in order to query data.
Every time you open the MetaMask wallet and make a transaction,
basically it's an RPC request.
If you check the account balance,
MetaMask is querying the blockchain for that.
And today, in order to scale
the apps on blockchains,
they need to get access to data, right?
But what's the point,
what's the point of storing the thing...
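For readers following along, here is a minimal sketch of what such an RPC request looks like under the hood. It uses the standard Ethereum JSON-RPC method eth_getBalance; the endpoint URL is a placeholder, not an actual Lava or provider endpoint.

```typescript
// Minimal sketch of the JSON-RPC call a wallet makes to check a balance.
// The endpoint URL below is a placeholder, not an actual Lava or provider endpoint.
const endpoint = "https://example-rpc-endpoint.example";

async function getBalance(address: string): Promise<bigint> {
  const response = await fetch(endpoint, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      jsonrpc: "2.0",
      id: 1,
      method: "eth_getBalance",     // standard Ethereum JSON-RPC method
      params: [address, "latest"],  // balance at the latest block
    }),
  });
  const { result } = await response.json(); // hex-encoded balance in wei
  return BigInt(result);
}

getBalance("0x0000000000000000000000000000000000000000").then(console.log);
```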
yeah just jumping in a little bit
I'm curious like so there's a lot of like
different things one could do in crypto right
a lot of different business models use cases
like what was it about this particular problem
that, you know, you found so exciting?
Like, why build this?
Why not something else?
It's really a great question.
You know, when I started in Web 3,
we were, like, doing MEV stuff, you know,
writing bots to snipe tokens and to snipe NFTs
and some other really cool stuff.
And it was a huge struggle to just have the nodes, right?
So if you were trying back then to get the data
to run all these operations,
you needed to either use, like, centralized providers, like Infura or Alchemy,
which sometimes were just very slow.
It's just not fast enough for this type of operation.
But you also weren't sure that the data is up to date, like that you're getting the latest block.
And this is, you know, what makes or breaks a trade.
It's super critical for the bot to have the latest data.
So we started with that.
So I was running some bots.
You know, I was running some nodes here next to me on some machines, running some nodes.
And then I wanted to do the same thing on Polygon,
right? So I was like, okay, now I need to run Heimdall, and, you know, I need to run Bor. I need to run
more and more nodes. And then I want to do it on Avalanche, you know. And it just became such a
struggle, and I realized, you know, this is not scalable. There's no way that I can keep, you know,
writing and running all these services myself in a way that's scalable and performant. And I think
this is, I think this is one of the core problems in Web3, if you take it
out into the bigger picture, which is just fragmentation.
Every single blockchain, every rollup, they have their own concepts, they have their own
data access, and this creates this huge user experience problem for everybody.
Let's say you want to trade on a wallet, you know, or you're getting an airdrop from
somebody.
You have to get another wallet, and another wallet.
You have to figure out what gas token is being used.
And I think this connects to the longer vision. You know, yes, you have Celestia, you have Dymension,
and then I think one of the key unlocks is having Lava on top of that,
and then making everything connect. All of those chains, they need access to the data.
They're useless without it, all these rollups on EigenLayer, on Ethereum.
And this is what we do, right?
We're building this
permissionless network
that lets anyone
join, give service
to any of these blockchains
in a very easy way.
So I think the long-term vision
is, like,
making all of Web3,
whether it's an L1
or an L2 or an L3,
like, super usable experiences.
With RPC,
I mean, today,
right, like you described
the problem well, right?
Like, okay,
you want to do something on one chain and then you need this node, and now on another chain you need another node. And of course, yeah, like, a lot of projects and applications are struggling with this. And so, like, pre-Lava, right, if we ignore Lava, how are people... or, I mean, my understanding is, right, they mostly use centralized providers, like, you know, Alchemy, QuickNode, maybe Chainstack, Infura, things
like that. How would you rate the quality of those products? Like, what's sort of the
weaknesses and problems with centralized RPC providers? So I think some of them provide
amazing service, you know, I think they optimized the service very well. For example,
Alchemy on Ethereum is really good. QuickNode is a really good service, and Infura is a pretty
decent service as well. So I think that there are tradeoffs to talk about. But if you
look at the number of supported chains, they're very small, because it's very hard to optimize
for many, many chains. And you might be a champion, or, like, provide the best infrastructure
for a single chain. And, like, write your own layer of caching and optimizations to make
it very cost-efficient and fast. But to scale it up to many chains, and this comes back to the
core issue, you need to be an expert in that specific chain. And this is where Lava is different.
First, it's not just one provider.
It can be Infura, QuickNode, and Alchemy,
all competing. It's like a death match.
Imagine, like, all the bots are fighting:
who's giving the best service?
And in the end, it's, like, the few champions who are getting the belts.
And this is what Lava is like.
It's a competition between the providers to give the best service.
So on every single chain, you have this competition.
And whoever comes up with the brightest, you know, most unique, most novel way to optimize
wins. And that translates to really good user experience for the user and the most rewards
for that provider. So it kind of makes sense. Okay, yeah, thanks so much, Gil. That was very helpful.
So basically, you feel like one of the main value propositions is that the centralized RPC providers,
they can only cover sort of a limited amount of networks and they're going to have a hard time
scaling that up, maybe especially if you see a big increase in number of chains.
And then with Lava, you can have a sort of uniform, like, developer experience, uniform way
to integrate RPCs across, you know, like all the different chains.
And because then you can have different providers basically sort of together providing
a service and some focusing on different chains.
So, of course, it would be way easier to scale to, like, you know, support a much
larger number of chains.
Exactly.
Lava is like a distribution channel for these providers.
Think about it that way.
They can write the best code,
run the best operation and infrastructure,
and then they compete on this battleground,
and in the end, they get all the users, the best ones.
So they're able to iterate,
they're able to innovate,
they're able to build the best data access,
you know, capabilities,
and then they can win and get more reward.
And this is real time.
So over time, they can get better or get worse versus their opponents.
I guess the big question that comes up for me here,
and I think that maybe ties sort of in the next topic,
if you actually want to go a bit deeper in here,
is, I guess, one of the, I mean, I understand the downside of the centralized RPC providers.
But of course, the upside is there's like a company,
and I can sign an SLA, and they're going to have a support thing.
and I get to call them up.
And, you know, I can probably expect a pretty consistent level of service.
Whereas here, if you have, like, all kinds of different operators for different chains,
different, you know, maybe big companies, small companies, how does that work in terms of
the quality of service being provided?
It's a really good question, right?
So I'll just say quality of
service is, like, at the core of what we've built at Lava. We spent so much time and energy and
effort into ensuring that the quality of service is top-notch. And I think it really is on
Lava and I think it's only going to get better. I think it's going to crush everybody else.
Just because we design it from the ground up in such a way that the users are actually the ones
rating the provider based on their experience. But I also want to touch on your question about
SLAs, you know, signing contracts, because I think these are really, really important.
And also support. Support is critical. You know, we spoke with, last year I think we spoke with
approximately a thousand projects, you know, from dApps to chains, to ecosystems,
and getting their feedback on, you know, what do they need? You know, what are
they not happy with right now in their infrastructure? And support is critical for them. So the way I see
it, you know, in this world of Web3, there has to be a way to build a business that also
supports the ecosystem that it serves.
And the way I see it, maybe it's very similar to the Red Hat model, for example, where
you have this open source project or like you have the Lava Network, and then you have this company
or multiple companies that basically provide support for customers via onboarding them through
gateways. So they can use the gateways, they can pay for these gateways, the clients also pay
on chain for the service, but they get the support package, they get the SLAs, and they get all
these enterprise features, maybe it's SOC certification, so that they can operate under the
standards that they need to. So I think these are two answers. And can you explain what
gateways are? Yeah, okay. I'm sorry, I didn't touch on that. But basically,
When you use Lava and we built it like this because we thought this is the best way to provide really good quality service to everybody.
When you use Lava, you don't have to go through any central server.
So think about centralized servers as basically like a middleman that sits between you and the data that you want.
And this middleman is basically going to it and it's going to get the data for you and then it brings it back to you.
In Lava, we built it the other way, such that you talk directly to the providers.
So there's no middleman.
You directly go to them, you get the data.
A gateway is a middleman like that, only it enables easier access to the data by providing
support and SLAs and everything that you mentioned.
It's funny, Brian, when you were asking about whether, you know, you could, well, when you
were comparing to centralized providers and you were
saying that with a centralized provider you have this SLA and you can call them up and whatever.
I just couldn't help but think, well, isn't, I mean, that's the same for your bank and you still
use blockchains in crypto, right? I mean, you know, this extends to the broader, you know,
the broader crypto space, I think. And, yeah, so it was just like a sort of funny remark there.
You're just using Web 2 tools in order to operate in Web 3. And, you know, for us it didn't make
sense in the beginning. And in order to scale, you know, to get to the billions of dApps and
people onboarding onto Web3, we need the scalability infra.
Yeah, no, I think that answer makes sense. I mean, I guess one thing, the other thing that
comes to my mind here is, of course, we've also seen some decentralized projects actually
doing, like, an okay job, my understanding, or maybe even a good job at this.
I think Osmosis is, like, an example, right?
Where they used a community pool to like fund teams to deal with support requests and generally
have heard like good feedback that seems to be working pretty well.
But yeah, the gateway thing also makes sense.
So basically then, I would, yeah, like, let's say as an operator, or, like, as someone needing RPC, I sign some contract,
and then I can always query; basically all the requests get routed
through that gateway
node, and then
they can, like, let's say,
track uptime, and if there are issues
they troubleshoot it, and
maybe I can pay them
with dollars
or credit card, and they
pay, because in the end the operators
would get paid on chain
using Lava
tokens? Or, like, how do the payments
work to the operators?
so
yeah so the way it would work and I think
you know, the gateway concept is an open gateway.
You know, anyone can talk to the foundation,
put up a governance proposal, and start operating a gateway.
But the important part, and this is also touching on Seb's points,
I think, which is interesting, is that it's done in a way that is also familiar to the customer.
I think this is in general a way that, you know, Web3 projects need to be able to give answers
to Web2 type customers.
I think it's crucial.
In the end, you want to operate with many, many different companies that want to use
a service, and they need some sort of a gateway, right, to get familiar with it.
Because they don't know how to buy tokens and go on chain and buy a subscription on lava.
So maybe it's easier for the first step for them is to pay this type of gateway operator
to buy for them and then give them the service.
And then the operator goes, buy tokens, goes on chain, and buys it for them with some
kind of referral code.
And this is how it works.
And this referral code basically gives that provider a portion, whatever he agreed with the governance, to now operate this contract.
So say you want to buy the service, you come to this operator, you pay, let's say, 100 lava.
He goes on chain, he buys it for you, and he takes a 20% cut.
And then with this 20%, he operates the servers, he operates the support center, he offers SLAs, he does audits, and gives you
the convenience that you need as a Web2 project.
But if you're a fully Web3 project,
and you don't need any of that crap,
you can just go on chain,
you can buy the subscription with tokens,
and you don't need to talk to anybody
or go for any governance or go for anything else.
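To make the numbers in that example concrete, here is a toy sketch of the split described above. The 20% cut and the 100 LAVA figure are just the example numbers from the conversation, and treating the cut as kept out of the payment is one possible reading, not a protocol rule.

```typescript
// Toy sketch of the gateway referral split described above.
// The 20% cut and the 100 LAVA figure are just the example numbers from the
// conversation; treating the cut as kept out of the payment is one possible
// reading, not a protocol rule.
function splitSubscriptionPayment(amountLava: number, gatewayCut = 0.2) {
  const gatewayShare = amountLava * gatewayCut;          // funds support, SLAs, audits
  const onChainSubscription = amountLava - gatewayShare; // spent on the subscription itself
  return { gatewayShare, onChainSubscription };
}

console.log(splitSubscriptionPayment(100));
// { gatewayShare: 20, onChainSubscription: 80 }
```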
Let's go a little bit under the hood here.
I'd love to get a better understanding
of what the network topology looks like.
So, you know, we talked about some of the,
like, these operators,
but what are all of the roles in the network
and how are they interacting with each other?
And I think importantly, how are they interacting with applications?
I think one of the core principles that guided us when thinking about Lava
(and by the way, we are Lava Protocol,
the core contributors to the Lava network) was to keep it simple, right?
We have three different actors, main actors on the chain.
We have the dApps, buying on-chain subscriptions.
We have the providers, staking Lava in order to participate.
And we have the validators that keep the service running.
And down from that, we have the node providers serving data modules that we call specs.
And those specs are being defined by the champions that Gil mentioned before. A
champion can be anyone from any chain, whether it's a small upcoming rollup or a
huge existing chain.
It doesn't matter.
But just by coming up with a governance proposal in order to define this spec, they are actually
creating this unique spec that the dApps can afterwards use.
So these dApps and consumers, they're using this data, and they're being paired, through an
off-chain,
peer-to-peer communication protocol,
directly with the providers themselves.
So imagine that from the browser,
you can get a list of top providers
and receive service.
When Gil mentioned before
that we were speaking with more than a thousand projects this year,
they constantly referred back and told us
about the problems that they're having.
They're using centralized providers,
and usually not just one,
but then they need to come
up with load balancing, with disaster recovery, with all of these different things that,
as a small dApp, they don't need to do.
So Lava takes this burden away from them, implements that, and gives them 99.999% availability
of the service.
After the service has been done, the dApp signs the transaction and sends back all the different
parameters about this service:
the availability, the reliability of the service, the accuracy.
And this is being used to score, again, the provider, for them
to get more and more service.
That's in general the different actors on the chain.
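As a rough illustration of the kind of parameters being reported back, a "report card" could be imagined like the sketch below. The field names and the scoring formula are invented for this example; they are not Lava's actual on-chain schema.

```typescript
// Illustrative shape of the "report card" a consumer might produce after a session.
// Field names and the scoring formula are invented for this example; they are
// not Lava's actual on-chain schema.
interface QosReport {
  provider: string;     // provider the session was paired with
  availability: number; // fraction of requests that were answered, 0..1
  latencyMs: number;    // average response time observed by the consumer
  freshness: number;    // how close the provider's latest block was to the chain head, 0..1
}

// Collapse the observed parameters into a single score in [0, 1].
function qosScore(report: QosReport): number {
  const latencyScore = 1 / (1 + report.latencyMs / 1000); // faster responses score higher
  return (report.availability + latencyScore + report.freshness) / 3;
}

console.log(qosScore({ provider: "example", availability: 0.999, latencyMs: 120, freshness: 0.98 }));
```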
So just one thing, you know, think about the blockchain like a restaurant and think about
specs as like the menu at the restaurant, right?
So the menu of the restaurant tells the people who come to eat: this is what you can have,
you can have this type of salad or this type of soup or entrée, and this is what specs are.
So for every chain you have these different menus, right?
And the way we see it is that any chain that is launching on the OP Stack, you know, like Blast or any other ones,
or any rollup that is launching on Dymension or Arbitrum, for example,
they can immediately, and this is the vision, as part of the launch,
propose this spec on chain, this menu,
and that immediately you have all these hundreds of providers,
this is their job, this is their bread and butter.
They come in and they're like, we're going to run nodes for this.
We think this is going to give us a lot of rewards.
They go ahead and run nodes.
Then you have this amazing network of node operators
giving all these customers the best service.
One thing that always happens,
and we see it every single time.
I think everybody's going to agree with me.
As soon as there's, like, an airdrop,
or there's, like, a testnet being launched
that people start farming or playing with,
the first thing that happens is the RPC crashes and collapses.
We see it every single time.
So this is where like Lava can completely shine
and take over all this usage and scale really easily.
So I want to talk a bit more about this spec concept.
So a spec is a document or specification that a chain or project will put forth, and then it specifies, as a specification does, what data to provide.
Does it also specify how? In the case of an indexer, for example, let's say we need to fetch some time-weighted average price,
or some complex data feed that requires some complex
calculations about pricing data that is found on chain. Does the spec also specify how to
arrive at that result so that it can be displayed in the application, in the front end?
And I think importantly, what is the method by which this data is verified and validated
in order to maintain
accuracy of the data that's provided.
That's a really good question
because, yes, so the spec,
it defines, as you've said,
what type of data you can get like a restaurant,
like a menu of the restaurant.
Okay, you can get the soup,
but you want the soup without,
you want the salad,
but you don't want the dressing,
you want on the side, right?
So the spec would specify,
oh, can you get the dressing on the side?
Can you remove the onions?
I'm going to have a steak.
You got a steak, if you don't mind.
So, you know, how would you like it cooked?
Medium well.
So this is what the spec specifies, right?
It does not specify...
it does not go into the kitchen and tell the chef,
oh, make it like this or make it like that, right?
This is something the spec does not do.
This is the burden of the developers that are building the chain.
So they're building the way to actually go and get the steak,
make the steak, where to get it from, which supplier.
What it does do, with the menu,
is it does tell you
what the steak is,
right? Is this a Wagyu A5,
or are you getting something,
you know, low-grade?
So this does define
the quality, right?
So these are...
I hope that explains it as well.
I think it's a really cool concept
so I'm curious
so I mean
in the case of RPC,
then maybe it's, like,
slightly less relevant, I imagine, no? Because, like, a chain has this, like,
RPC spec, and, like, you know, an RPC node for,
like, a particular chain is, I imagine, going to be pretty uniform.
But I guess, um, I wonder, is that where this sort of API concept comes in as well,
where someone can make, uh, you know, a spec for something, like, you know, that you cannot get
from a normal RPC node?
And yeah, maybe can you talk a little bit about sort of beyond RPCs, and especially, like, how
Lava can be used to provide APIs? Yeah, so first, it's not uniform at all, at all,
right? It's completely... every single Cosmos chain, for example, has, like, different
variations and add-ons, and every one of them has a different version of the Cosmos SDK, okay? So
every version has a different spec.
That's why it's tough to get right.
This is why I was built it in such a modular way.
The second thing that is important
before I get to API is that a spec
also allows you to specify validations,
which is like, how many blocks
are you expected to get?
Because a lot of our customers, they want to get,
they want to index the whole blockchain
from block zero.
For that, you need to store on many chains,
you know, 10 terabytes, 20 terabytes, maybe even more in some chains.
This is a huge burden.
Obviously, not every operator can do that, right?
So this is why we have these types of validations and extensions.
So an extension allows you to say, you know, I'm offering, you know, the latest data for
the last two weeks, whatever, however many blocks it is.
Or you could say, I'm an archive node.
You know, I have the storage and capabilities, but I expect to get paid five times more
for archive data, because I am taking on a bigger burden.
To touch on APIs, so yes, the way we've built specs is a way that allows anyone to write
the spec in a way that is supporting of APIs like indexing, like subgraphs, any types of
APIs.
So you can go and write a spec that gives you access.
And we're working on partnerships with different indexing projects, like Subsquid, for example,
so that you can basically copy-paste a subgraph,
run it on faster infrastructure,
faster indexing, using Lava and Subsquid,
and still get all the many providers,
quality of service, and the data reliability.
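As a loose illustration of the "menu" idea, a spec can be pictured as a structured object like the sketch below. All the field names and values here are made up for the example; they do not reflect Lava's actual on-chain spec format, only the kinds of things the conversation says a spec covers (supported APIs, validations such as how many blocks a provider must hold, and extensions such as archive data paying more).

```typescript
// Loose illustration of the "menu" idea: what a chain's spec might declare.
// Every field name and value here is invented for the example and does not
// reflect Lava's actual on-chain spec format.
const exampleSpec = {
  chainId: "EXAMPLECHAIN",
  apiInterfaces: ["jsonrpc", "rest", "grpc"],      // the kinds of APIs providers must serve
  apis: [
    { name: "eth_getBalance", computeUnits: 10 },  // relative cost of serving the call
    { name: "eth_getLogs", computeUnits: 60 },
  ],
  validations: {
    minBlocksRetained: 100_000,                    // how far back a default provider must serve
  },
  extensions: [
    { name: "archive", costMultiplier: 5 },        // archive data pays more, as described above
  ],
};

console.log(JSON.stringify(exampleSpec, null, 2));
```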
So we touch briefly on the quality of service aspect,
which is very important in the context of RPC
and data provisioning.
Can you go into detail about how the
quality of service algorithm works and how do users that are pinging RPCs know that they're going
to get a good quality RPC provider or indexer? Yeah, we went to great lengths to make sure that
the service is really top-notch. We really did. And it starts from the beginning. Basically,
how do you choose the provider? So you choose the provider based on how much money, how much stake
they have in the system. So the more stake they have, the more likely they will be in your pairing
list. The pairing list is the list of providers that you can use. Once you have this list, you can talk to
any of these providers. So we've written this very complex engineering feat we call the provider
optimizer. By the way, everything is seamless. For you, it's just like using a regular RPC.
The provider optimizer is basically like a friend that goes and checks every provider:
Give me the latest block.
Give me the latest block.
Whoever is the fastest and has the most relevant, fresh information, like the latest block.
So you're not looking at stale data.
It basically saves them at the top of the list.
And then when you get the data, you're talking to them.
So let's say you have all these providers.
You're in Europe, right?
So someone in
Central Europe
has a server
and he's really, really fast
you're going to be talking to that.
So this is what the provider optimizer does.
Now, as you talk to him,
you start writing down,
okay, this guy, okay, is giving me,
you know, it's fast, it's slow.
Oh, here he was, you know,
he was not as fast as I expected.
I moved to somebody else.
You write his report cards.
You save them,
and then you send them, and then they get on chain from these providers.
There's a process.
I'm going to get into it.
These report cards go on chain, and they affect the provider's payout, basically.
So the provider has an incentive to always provide really good service.
But it doesn't affect the whole payout.
On top of that, we have what we call excellence quality of service,
and this is the same report cards over time,
they accumulate on chain,
and they are saved in this provider's profile.
This is like a way to build reputation
for a provider on chain.
And then, the last step,
not to go into it too much,
the excellence quality of service score,
the reputation for this provider,
is also one of the factors
that affects its pairing list.
So go back to the first step:
it's how much money he's staking.
Because if he stakes a lot of money, he's
saying, I'm serious, I'm here, I can handle your requests.
And then over time, how good were you?
And we have all these customers that are paying
and they're using their payouts basically to score.
If you go back to the menu example,
it's like you have TripAdvisor connected directly
with the menu itself.
So it gives you constant feedback for the meal you just ate, but it's built in.
You don't need to go do something afterwards.
Everything is happening on chain.
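A rough sketch of the selection step described here: probe the providers in your pairing list and prefer the ones that are synced and fast. The types and ranking logic below are simplified illustrations, not Lava's actual provider optimizer implementation.

```typescript
// Rough sketch of the provider-optimizer idea described above: probe the providers
// in your pairing list and prefer the ones that are synced and fast. The types and
// ranking logic are simplified illustrations, not Lava's actual implementation.
interface ProbeResult {
  provider: string;
  latencyMs: number;   // how long the probe took
  latestBlock: number; // the block height the provider reported
}

function rankProviders(probes: ProbeResult[]): ProbeResult[] {
  const head = Math.max(...probes.map((p) => p.latestBlock)); // best known chain head
  return [...probes].sort((a, b) => {
    const lagA = head - a.latestBlock; // blocks behind the head
    const lagB = head - b.latestBlock;
    // Prefer providers that are synced; break ties by latency.
    return lagA - lagB || a.latencyMs - b.latencyMs;
  });
}

console.log(rankProviders([
  { provider: "A", latencyMs: 40, latestBlock: 1_000_000 },
  { provider: "B", latencyMs: 15, latestBlock: 999_998 },
  { provider: "C", latencyMs: 25, latestBlock: 1_000_000 },
]));
```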
Okay.
And so users are rating the service providers.
Is it the end user, like the guy who's doing the transaction in MetaMask, on Uniswap, or whatever application is using Lava, that is doing this rating, or is it the developers?
Yeah, it's, when you say doing, that's an interesting way, because everything is happening automatically, right?
So during the transaction, we are aggregating all the different parameters, about the scalability, the freshness of the data.
So those are being used in order to score the existing transaction, the existing session with the provider.
And during the lifespan, obviously, the scoring changes according to the service they give.
So how does that work?
If the actual service is provided, like, peer-to-peer, without some node in between,
like how do you even have that information?
That's a really good question.
I think it's a bit complex just because of how the protocol is working.
And I think, you know, this is where the gateway comes in and this is where the SDK comes in.
So as I've said in the beginning, you know, you can use lava without any central middleman.
To do that, you can use the SDK or you can run your own gateway, right?
and this is all open source.
The other way is if you use a gateway.
If you use a gateway, then the gateway basically scores for you.
Okay, so this is how it works.
Now, the spec that we talked about,
which is like one of the core components of Lava,
is basically defining the communication
between the consumer and the provider.
So actually, there's a protocol,
there's a network protocol
that wraps the actual RPC query
with everything we just spoke about.
This is like a channel between the consumer and the provider
with everything, the rating,
the quality of service, the conflict detection, which we didn't talk about, but it's a way to ensure
that the data is accurate as well.
Cool.
And how does the pricing work here?
How is it determined how much these services cost?
You know, at the beginning, when people would ask us,
how did you come up with this pricing, we were always giving this answer:
basically, we said we took all the top centralized providers' pricing,
put it into ChatGPT, and came up with a better price.
The thing behind Lava is that it's a real economy, right?
It's a real market that's going to balance between the supply and demand.
So if you start with a certain price, it can go up and down based on the demand.
So I can give you an example.
If there is, you know, a service, an NFT drop, that's happening in, like, a place where there is not much supply,
obviously the price is going to go up, and it's going to invite more and more providers.
Let's say, you know, suddenly there is, you know, an NFT drop and a need in Africa or South Africa or something,
and you have only two providers there.
So obviously the demand upon the drop is going to be higher and higher again.
And this is how we're balancing.
The protocol is automatically balancing the supply and demand mechanism.
I'll just add that even though we're, you know, we've been running in TestNet, we've already signed contracts with ecosystems.
And these ecosystems have distributed tens of thousands of dollars to providers.
This has already been done.
And there's hundreds of thousands more that are going to be signed and distributed in this year.
So I think, you know, Lava is already showing its ability to really support
different blockchains.
And to touch on the pricing, as they said, for ecosystems,
today, the way it works is they go to, let's say,
one of the big centralized providers and they sign a contract.
This contract can be for a whole year, can be for multiple years.
And this contract locks them in to a certain price.
And one of the complaints we've heard from them is that they don't want to be locked in
because sometimes the market is not active.
And sometimes the market is really active.
So they want to be able to balance the payouts to these node runners based on the demand.
And I think it makes complete sense, right?
It's like a pay as you go.
So it's the most efficient way to pay providers.
So this is what is built into Lava.
So these ecosystems, like Evmos, like Axelar, like NEAR, and many others that are signing and have signed already.
They are able to set their own budget,
saying: this month, or the next three months, I'm going to put, let's say, 10,000 Lava tokens
in the pool on chain, on the spec, on the menu.
And then all the providers know: this month, there's 10,000.
So they can decide, is this worth enough?
Okay, there's a thousand providers.
Does it make sense for me to join?
Maybe not.
Okay, there's 20 providers.
Does it make sense for me to join?
Yes, hell yes.
You know, if I'm good, I can get
a really good share of the pie. And this is why it's a dynamic market. Interesting. Yeah.
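To illustrate the dynamic described there, here is a toy model of a monthly pool being split among providers in proportion to the service they delivered. The weighting (relays served times quality score) and the numbers are illustrative only, not Lava's actual reward formula.

```typescript
// Toy model of the incentive pool described above: an ecosystem funds a monthly
// pool and providers split it in proportion to the service they delivered.
// The weighting (relays served times quality score) is illustrative only,
// not Lava's actual reward formula.
interface ProviderMonth {
  provider: string;
  relaysServed: number; // how many requests the provider answered
  qosScore: number;     // aggregated quality-of-service score, 0..1
}

function splitPool(poolLava: number, providers: ProviderMonth[]): Map<string, number> {
  const weights = providers.map((p) => p.relaysServed * p.qosScore);
  const total = weights.reduce((sum, w) => sum + w, 0);
  const payouts = new Map<string, number>();
  providers.forEach((p, i) => {
    payouts.set(p.provider, total > 0 ? (poolLava * weights[i]) / total : 0);
  });
  return payouts;
}

// Example: a 10,000 LAVA monthly pool split between two providers.
console.log(splitPool(10_000, [
  { provider: "A", relaysServed: 1_000_000, qosScore: 0.95 },
  { provider: "B", relaysServed: 400_000, qosScore: 0.99 },
]));
```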
So there is this kind of like market equilibrium that you hope to achieve here depending on like
demand and, you know, how well networks are being served. Is this something you've already demonstrated
with the networks you've been working with, like that data providers sort of rearrange
and reposition themselves on particular networks when demand is higher and retract when
demand is lower?
We've been able to demonstrate that using Lava, you're able to give an excellent top-notch
service of RPC and attract many providers.
But the payout has been constant.
In the near future, as we launch Mainnet, I believe this is going to change and it's going
to be much more dynamic.
But it's a process that will happen over time.
We are seeing that, and this is already live on TestNet, that these rewards, they make sense for providers to run and give really good service.
We've received a lot of praise for the different services we're offering to these ecosystems, and we're seeing inbound from other chains that have recently launched test nets and have been struggling.
They've reached out to us asking, can we use the same incentivization model to bring in more providers for our network?
So we're definitely seeing it taking off.
Yeah, and I think it bears reminding people that Lava has been in testnet for a while,
but that it's actually, like, working with live chains.
So even though the network is in testnet, all of the data providers are actually providing, like, real data.
And there's something like 30 or 35 networks currently live on Lava, correct?
Yeah, it's been a production-grade
system for one year now.
And I just want to touch upon the previous point, you know, what brought us to ecosystems.
Because Lava started as a decentralized service.
And today everyone is talking about decentralized RPC service.
But then we got approached by ecosystems to solve and take away the burden of public RPC.
Because public RPC is the first stop of every dev into the ecosystem.
So if the ecosystem is able to provide a scalable RPC service, and then APIs, and solve these headaches of how different apps access data, it goes without saying.
So ecosystems came back to us and told us, can Lava actually provide this public RPC?
Can you give us insight into the ecosystem?
And obviously we jumped into that, and we presented, in October,
for the first time, the incentivized public RPC, where the ecosystem puts in incentives
for different providers to join permissionlessly and get rewarded for the service they give.
And we started doing that on Evmos and afterwards on Axelar.
We just announced NEAR.
And all of this distribution of the different rewards has been done only in the last few months.
So we're working on three different chains.
So in the pipeline, we now have Filecoin and Starknet, Koya, Gorick, and more are coming to Lava.
Okay, very cool.
One thing I wanted to inquire about as well is relayers.
So one of the issues, you know, particularly in Cosmos chains, is that, like, relayers are not incentivized for their work.
Does Lava facilitate the matching of, you know, chains and relayers
as well, or is that completely outside of the scope?
Yeah, I think that, you know, we're seeing a lot of discussions around IBC and its fundamental
design, and I think, you know, IBC has been an incredible protocol that really puts, you know,
Cosmos light years ahead of any other ecosystem.
But at its core, there's a core issue with payments for relayers.
and I think that we will definitely be able to see Lava help with that.
Right now our focus is launching mainnet, you know, bringing the power of, you know, the modular unlock that we can do with Lava to chains and rollups, you know, that are built on EigenLayer, on Ethereum, on Dymension.
But for sure, you know, once we're done with that, we'll definitely be able to help
with relayers and try to solve core issues like that
in the Cosmos ecosystem as well.
So you mentioned modular again.
Yeah, you mentioned it earlier.
So, of course, I understand that, you know,
for also modular chains, roll-ups,
well, you still need RPC, you still need this thing.
So I understand how, like, you know,
the, you know, Lava stack is still relevant
in that kind of modular world.
But I'm curious, is there
more to this, or what do you see as sort of the connection between Lava and modular chains?
If you look at Web 3, right, if you look at what's happening today, I think that everyone's
coming to the same conclusion, right? There's going to be an explosion of chains. But how many
chains are going to be, like, full purpose versus specific purpose? Like, if you're on a full-purpose one,
you know, they can run anything you want, smart contracts,
or you have application-specific ones on Cosmos or rollups.
So I think it's pretty obvious that in the future,
we're going to have all these chains, they're going to explode.
They're going to be super successful,
but how do you weave one path through all of them?
It's super difficult.
And it sucks.
I mean, it sucks today, trying to use all of these chains.
It sucks.
It's really difficult.
even for, you know, I'm a tech user and consider myself a power user.
It's difficult for me to keep track of all the chains and all of my assets across all of them
and to interact with them and bridge tokens and make sure I'm using the right bridge and don't get hacked.
Until we solve these core issues, we're not going to have a good system.
And one of the things that we're seeing prop up, and I've been thinking about it a lot, is how do you, like,
imagine the future of Web3?
You have one wallet, and it has access to all of the chains, and it shows you the assets
on all of your chains.
And you don't care where your assets are.
You don't care if they're on, you know, the Cosmos Hub.
You don't care if the assets are on Ethereum or Arbitrum.
You don't care where they are.
They're just there.
And you want to move them somewhere?
No problem.
You don't need to think about, wait, do I have OSMO on Osmosis to perform the trade, to move the tokens?
Or do I have this token or that token?
Or start thinking about the gas.
You know, so I think the future of Web3 is some kind of system that allows,
you know, for full chain abstraction,
so you don't even think...
you don't know what a bridge is, you don't know what gas is.
This is the only way to reach mass adoption,
which I think is the goal of what we're doing
And when we were thinking about Lava,
we think about modularity.
We're thinking, how can Lava be
one of these core,
key unlocks in this stack?
And we see that, you know, okay,
there's RPC. We can talk about RPC all day,
but it's really, really boring,
but you have to do it really well,
but then you have to be able to launch it really fast for all these chains.
What if you could, on top of the RPC and on top of the APIs, use this new network
that has basically stakes from every single ecosystem, and all these providers are now
staked to verify that the data is correct on Lava?
What if you're able to use that to build some chain abstraction that enables you to, you
know, more freely move assets around, you know, access all the different blockchains without
even knowing, and still trusting that the data is correct, still having everything decentralized?
So this is how we're trying to think of the grander vision of chain abstraction, and
how Lava fits in and unlocks this.
If I add on top of that, you know, what you said before, when describing, like: it doesn't
matter if you take the monolithic path, the modular blockchain thesis,
or if you go roll-up centric, all of them need the data access layer.
If you give them the right tools to build the infrastructure, if you take the burden
away from them, from whatever they're building, this will scale the industry.
And, Seb, you mentioned the banks before, right?
So if there are two providers that you go to for the data, and both of
them are down, how are you going to get access through your MetaMask? How can you promise your users,
as a wallet, that they can make transactions? All of that, we believe, with this modular layer,
is something that Lava is able to solve. And we call it 'build whatever, wherever'.
'Build whatever' is: you build whatever you want, whatever dApps, not focusing on the
infra. And 'wherever' is the multi-chain. So can you guys talk about
Magma.
Yes, of course.
Magma is a super exciting
confidential project
that's...
Not anymore.
I'm going to laugh.
Ah, sorry, but it's not live, right?
So we're going to
yeah, we're going to announce it
on the 15th
of February.
And this basically
brings in non-technical
users, all of us that
have a wallet, to share the Lava vision, to share the Web3 values.
Magma is the points system that Lava is presenting.
It allows any user to use the Lava endpoints,
change that at the backend of the wallet in order to get access
that is super available, it's decentralized,
and obviously it's scalable.
So basically, jumping quickly into Magma:
Magma is the phase that always comes before the lava.
And we already announced that the mainnet is going to come up soon.
The Magma points system is a program where every user that has a wallet can sign up
and start receiving points for the usage you do with the wallet.
For transactions, you get points.
Just watching the wallet also creates points,
because all of them, at the core,
are making RPC requests,
and every RPC request is being scored
for you as a user,
and you can watch it on the leaderboard.
Cool. And so,
which brings me to my next question:
when mainnet?
Mainnet...
you know, our engineers are working around the clock
in order to get to the mainnet.
We are trying to push for end of Q1.
And yeah, this is super exciting times.
So, you know, stay tuned for that.
Very cool.
And so where should people go if,
well, I guess there's a couple things, right,
if you're building an application and want to integrate Lava
to provide RPC to your users,
that's, like, one category of people that should check out Lava. But also, if you're an infrastructure provider and want to provide infrastructure for the Lava network, where should these categories of listeners go?
So I think the team did a great job writing the documentation. We have an amazing Discord channel with thousands of members, very vibrant,
always asking questions and getting them answered by the community themselves, for the new users.
So start with our website, lavanet.xyz.
You can see there's a lot of documentation there, and then jump into our Discord.
We also call on the non-crypto, non-tech users to amplify and bring out the values of Lava.
In the upcoming months, the foundation will reach out and, you know, present different programs.
And it all starts tomorrow, on the 15th of February, with the Magma program.
Cool.
Well, thank you so much for coming on.
Actually, we just wanted to mention one thing, which I think we should have mentioned at the start of the episode but forgot.
So, you know, as most of you know, my main thing is running Chorus One,
and at Chorus One we did invest in Lava,
and we are running a validator.
And I think Seb, with his fund,
also invested in Lava,
so we just wanted to mention that.
Yes, Interop Ventures are also investors in Lava,
so full disclosure,
and running validators.
cool well thanks so much guys for coming on
that was super interesting
I think it's definitely something
that I could see getting a lot of traction
right, and it will be really great
to see how these sorts of things evolve
once mainnet is live,
and, you know,
over the next year. So thanks so much for coming on, guys.
Thank you so much.
Pleasure being here.
Cheers.
Thank you for joining us on this week's episode.
We release new episodes every week.
You can find and subscribe to the show on iTunes, Spotify, YouTube, SoundCloud,
or wherever you listen to podcasts.
And if you have a Google Home or Alexa device,
you can tell it to listen to the latest episode of the Epicenter podcast.
Go to epicenter.tv slash subscribe for a full list of places where you can watch and listen.
And while you're there, be sure to sign up for the newsletter, so you get new episodes in your inbox as they're released.
If you want to interact with us, guests, or other podcast listeners, you can follow us on Twitter.
And please leave us a review on iTunes. It helps people find the show, and we're always happy to read them.
So thanks so much, and we look forward to being back next week.
