Epicenter - Learn about Crypto, Blockchain, Ethereum, Bitcoin and Distributed Technologies - Jasper De Goojier: SEDA – Intent-Based Modular Data Layer
Episode Date: March 9, 2024

As technology progresses, infrastructure should be commoditised, especially in Web3, in order to avoid the creation of bottlenecks and gatekeepers. Blockchains are naturally oblivious to off-chain data, so they need oracles to fetch it. However, given their past technical limitations, oracles have failed to provide a decentralised and permissionless framework for data queries. SEDA seeks to change this by creating an intent-based modular data layer, which brings off-chain data on-chain so that it is available to any party, regardless of who requested it first. We were joined by Jasper De Goojier, co-founder of SEDA Protocol, to discuss the oracle landscape and how SEDA aims to decentralise it and make data access permissionless.

Topics covered in this episode:
- Jasper's background
- High-level overview of SEDA
- Oracle use cases
- How SEDA Protocol functions
- Intent-based data availability
- Verifying subjective data & LLM integration
- ZKPs & FHE for data privacy
- Interoperability & bridging
- Roadmap & optimistic oracles
- SEDA token migration
- Chain abstraction

Episode links:
- Jasper De Goojier on Twitter
- SEDA Protocol on Twitter

Sponsors:
- Gnosis: Gnosis builds decentralized infrastructure for the Ethereum ecosystem, since 2015. This year marks the launch of Gnosis Pay, the world's first decentralized payment network. Get started today at gnosis.io
- Chorus One: Chorus One is one of the largest node operators worldwide, supporting more than 100,000 delegators across 45 networks. The recently launched OPUS allows staking up to 8,000 ETH in a single transaction. Enjoy the highest yields and institutional-grade security at chorus.one

This episode is hosted by Brian Fabian Crain. Show notes and listening options: epicenter.tv/538
Transcript
Discussion (0)
This is Epicenter, Episode 538 with guest Jasper De Goojier.
Welcome to Epicenter, the show which talks about the technologies, projects,
and people driving decentralization and the blockchain revolution.
I'm Brian Crain, and today I'm speaking with Jasper De Goojier.
He's the CTO and co-founder of SEDA Protocol,
and SEDA is a sort of universal oracle protocol,
so looking forward to getting into that.
Before we start talking with Jasper,
I just would like to briefly tell you about our sponsors this week.
This episode is brought to you by Gnosis.
Gnosis builds decentralized infrastructure for the Ethereum ecosystem.
With a rich history dating back to 2015 and products like Safe, CowSwap, or Gnosis Chain,
Gnosis combines needs-driven development with deep technical expertise.
This year marks the launch of Gnosis Pay, the world's first decentralized payment network.
With the Gnosis Card, you can spend self-custody crypto at any Visa-accepting merchant around the world.
If you're an individual looking to live more on-chain or a business looking to white-label the stack, visit gnosispay.com.
There are lots of ways you can join the Gnosis journey.
Drop in the GnosisDAO governance forum, become a Gnosis validator with a single GNO token and low-cost hardware, or deploy your product on the EVM-compatible and highly decentralized Gnosis Chain.
Get started today at gnosis.io.
Chorus One is one of the biggest node operators globally
and helps you stake your tokens on 45-plus networks
like Ethereum, Cosmos, Celestia, and dYdX.
More than 100,000 delegators stake with Chorus One,
including institutions like BitGo and Ledger.
Staking with Chorus One not only gets you the highest yields,
but also the most robust security practices and infrastructure
that are usually exclusive to institutions.
You can stake directly to Chorus One's public node from your wallet,
set up a white-label node, or use the recently launched product, OPUS,
to stake up to 8,000 ETH in a single transaction.
You can even offer high-yield staking to your own customers using their API.
Your assets always remain in your custody, so you can have complete peace of mind.
Start staking today at chorus.one.
All right, cool.
Thanks so much for coming on, Jasper.
It's great to have you here.
Thank you for having me, Brian.
Super excited to be here.
Yeah, tell us maybe, first of all,
how did you get into crypto,
and how did that road lead you to SEDA?
It's a fun story.
So I think I bought my first crypto purely as a speculator,
in my first or second year of college, I think, in 2016.
And I ended up dropping out of college to start a company that was doing data analytics on mostly Facebook data.
Me and two friends did it.
We ended up getting acqui-hired, but we got increasingly frustrated with the way Facebook managed API access.
So instead of being able to innovate on the product that we were building,
we were constantly running around with metaphorical duct tape, fixing problems because of them essentially rug-pulling API access to certain endpoints, et cetera.
So it was super hard to build a sustainable business that was able to innovate; instead you're just fixing the product that you had.
That really stifles innovation from third-party developers,
and that's what got me more interested in the crypto space
because I had sort of bought in,
and I kept an eye on the market.
And the moment where it clicked was seeing smart contracts
as immutable API interfaces.
So the idea that you always know that this data will be accessible
and you can sort of like as a third-party developer
have this permissionless composability
to build third-party applications on existing infrastructure.
The way that empowers developers is what really was the aha moment for me.
So I dove into the space. I had a programming background, taught myself Solidity, did some consulting,
and ended up doing independent research funded by the Ethereum Foundation with a few other people.
We were researching Plasma, which turned into rollups.
So we were trying to build ETH L2s.
That was really cool, learned a lot, surrounded myself with smart people.
and that was the thing that kept me in, in the beginning.
Because I joined full time back in late 2017, early 2018,
right before a super brutal bear market.
But the level of intellect and the people that were in the space
were so magnetic to me.
It was such a pleasure to work in such a young industry where everybody was hungry
and had a mission.
That's what really kept me going, next to the initial
interest in smart contracts.
Ended up meeting my co-founder at a hackathon in 2018.
We started building products in the space.
And yeah, that's how I rolled into it.
Cool. Thanks so much.
And then, so with SEDA Protocol, what is the vision for SEDA?
So how I describe SEDA is as a layer for data
that should be accessible to any developer, right?
So if you look at the infrastructure cycle across crypto, starting from the beginning,
you have Bitcoin as this application-specific L1, right?
It just does accounting, value transfer.
That's essentially what it does.
And then Ethereum popped up and allowed developers to build arbitrary business logic
in the form of smart contracts that then have this composability, and people built a suite of
different products on it. And when Ethereum was launching, nobody knew what the use cases would
be that actually would get traction. That's the beauty of it. The beauty was that
anybody could come in, deploy a smart contract, and then any other person could interact with it,
right? And I think that that permissionless creation
and permissionless access
is what created this giant network effect
of what became crypto.
And we're not seeing that with a lot of the infrastructure today.
A lot of the infrastructure providers
that are very necessary,
such as data infrastructure, like oracles, for example,
or bridges,
still act as kingmakers and gatekeepers, right?
You have these startups or larger companies,
and essentially you need to prove to them
that you're worthy of them deploying
to your chain, or allowing access to a new feed, or allowing you to access their data
from a new environment. And it all has to do with technological trade-offs that were made,
and that were necessary at the time, because a lot of these projects were built when there was only
one chain. With SEDA, the goal is truly to build this entry point to real-world data,
or data outside of a blockchain's own execution environment, that can be accessed from any L1 or L2,
and where you can essentially spin up your own data feeds, right?
So you can deploy a program on the SEDA network
that essentially dictates what data should be queried where
and how it should be computed,
that then gets stored on the SEDA network,
and from there it's accessible or verifiable
from any smart contract on any L1, L2,
or crypto network.
Okay, okay, cool.
So you said, first of all,
you want to have off-chain data,
or data that's at least outside of this particular chain
or this particular context, you want to have that accessible.
And of course, even today a lot of things rely on, let's say, Chainlink
for writing price feeds onto the chain, right?
Like, okay, what is the USD to Ether price, or something like that.
And of course there are a lot of other applications too. What are some of the
applications that you're most excited about, that you think would get enabled by having that capability?
Yeah.
I think that there's a few things to touch on here.
I think the first thing, and you sort of touched on this, is that it's not just price feeds.
When people hear the term oracle, they immediately think of price feeds
as the use case, but I would actually argue that the category of oracle is extremely
broad. I like to say that almost everybody is essentially trying to solve the oracle problem in this
space, right? Bridging is essentially an oracle problem solution, right? It's like an application-specific
oracle. Price feeds are an oracle. Real-world assets are an oracle problem solution. I think
you could even argue that Uniswap, for example, is essentially an oracle, where there's an economic
incentive to arb the price to centralized exchange prices, right? And that way it reflects
off-chain data. The use cases that I get most excited about are still DeFi, so price feeds,
but also interoperability, real-world assets, the things that I sort of mentioned. And the thing that
excites me the most about what we're building, it's sort of like the permissionless aspects of
deploying new things. So just like Ethereum launched and had no idea what would really stick,
we have a better idea of the things that work, the low-hanging fruit. But the new things
we enable are where, for me, the real mystery still lies, right? As soon as there's
fully permissionless data access for anyone, I have no idea what people and developers will
come up with. So this idea of empowering developers is what's really super
interesting to me. And then the second thing is, as
we are seeing the app chain thesis, or modularity thesis, or whatever you want to call it,
play out over the last year and a half, and I feel like it's going to go exponential
in the next year and a half, there is core infrastructure that's necessary to enable a ton of
use cases on these new rollups and app chains, right? And I think probably the most core is
having access to real-world data. So the idea of being able to
launch an app chain: let's say we launch SEDA today, tomorrow I could launch a rollup
and immediately have access to all of SEDA's data, which means that I have base interoperability
and access to price feeds, and developers can just start building. Or we as an application-specific
network have access to the data that we need. Those things are what get me the most excited
about the product that we're building. Okay, cool.
Well, let's go a little bit into the details of how this works.
So you mentioned the SEDA network.
My understanding is that the SEDA network is a Cosmos SDK chain you're building, right?
And is it basically, the way I understood it,
that a developer would go on the SEDA network and then would almost create a job there
and say, hey, I want X data on Y chain, and maybe set a few
parameters or something like that?
That is essentially how it works, big picture.
If we dive a bit deeper: we have SEDA chain, which is used for settlement
and checkpointing, right?
So it's where slashing happens and staking happens, and where data references and, like you
said, jobs, which we call programs, are stored.
If we go through the flow: I can deploy a program on SEDA chain
that essentially is a set of instructions on how data should be queried, from where, and how a final
outcome should be computed. So for a price feed, it could be: query ETH to USD from six
exchanges and then give me the median value, right? Those could be some of the instructions that you
give this program. And then the program is deployed almost like a smart contract,
so it's stored on the chain. And then if you want
that data to be queried, if you want to get that data answered, what you do is ping
the chain, referencing the contract address or ID of that program. And then the chain
picks a verified secret random committee from a second layer, which we call the overlay layer,
a network of MPC nodes that get randomly selected to actually perform that computation.
So from a technical perspective, the program is stored as a WASM binary.
This WASM binary
gets executed by the overlay nodes
in a WASM VM,
so they all get essentially
some uniform outcome
that should be close to the same,
and then they commit it back to SEDA chain.
On SEDA chain, what we then do
is batch all of the feeds,
or all of the jobs, together,
and we Merkleize them
and have them signed at the end of the block.
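The per-request flow Jasper describes, a program that queries several sources and reduces them to a single value, can be sketched in Python. This is purely an editorial illustration: SEDA programs are actually WASM binaries, and the exchange names and prices below are made up.

```python
from statistics import median

# Hypothetical source responses; a real SEDA program would fetch these
# over HTTP inside the WASM VM on each selected overlay node.
EXCHANGE_QUOTES = {
    "exchange_a": 3512.10,
    "exchange_b": 3511.85,
    "exchange_c": None,  # a failed or unreachable source
    "exchange_d": 3512.40,
    "exchange_e": 3511.90,
    "exchange_f": 3512.05,
}

def execute_program(quotes: dict) -> float:
    """The program's instructions: drop failed sources, return the median."""
    prices = [p for p in quotes.values() if p is not None]
    if not prices:
        raise ValueError("no data sources responded")
    return median(prices)

# Every selected overlay node runs the same binary, so each arrives at
# (close to) the same outcome, which it then commits back to SEDA chain.
outcome = execute_program(EXCHANGE_QUOTES)  # 3512.05 for the quotes above
```

Because all nodes execute identical instructions over the same sources, honest nodes should commit matching outcomes, which is what makes the results comparable across the committee in the commit stage.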
Maybe we can walk through this
a bit more slowly.
So let's take this example, right?
Someone launches some chain, a rollup,
or let's just say it's some Cosmos SDK chain or something.
And they want what you mentioned:
they want to have the price feed,
the median of the six largest
centralized crypto exchanges, right?
Just a simple price feed.
So you deploy this program on SEDA chain,
and then the chain chooses a particular set of nodes
to then perform this job?
Exactly.
So we do, yes.
The chain uses VRFs to pick from a larger pool of validators,
because Tendermint validator sets
have some scaling issues.
Yeah.
Okay, but who are these, sort of, the nodes who write this data afterwards?
Are these the validators, or are these other nodes?
Yeah, so these are not the SEDA chain validators.
There can be overlap, but this is a second set of nodes, yes.
So from that second set of nodes,
some are basically chosen
randomly?
Yep, yep.
And it's like a secret random committee,
so they don't know which other nodes
have been selected either.
It's like to prevent collusion.
They don't know beforehand, though?
They don't even know at the time.
So they don't know beforehand,
and they only know after the commit stage, right?
So after they commit the outcomes,
they know which other nodes were selected, of course,
because the chain
is public.
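The secret committee and commit stage can be illustrated with a toy sketch. A plain hash stands in for the VRF here; in the real design, each node evaluates the check with its secret VRF key (and gets a proof of correct selection), so others cannot compute it. Node IDs, the seed, and the selection rule are all made up.

```python
import hashlib

NODES = [f"node-{i}" for i in range(10)]
SEED = b"per-request-randomness"  # hypothetical beacon value
REPLICATION_FACTOR = 3            # target committee size

def is_selected(node_id: str) -> bool:
    """Each node privately checks whether it was drawn for this request."""
    digest = hashlib.sha256(SEED + node_id.encode()).digest()
    # Map the hash to [0, 1) and compare to the per-node selection probability.
    return int.from_bytes(digest[:8], "big") / 2**64 < REPLICATION_FACTOR / len(NODES)

committee = [n for n in NODES if is_selected(n)]

def commit(node_id: str, result: float) -> str:
    """Commit stage: publish only a hash, hiding the result until reveal."""
    return hashlib.sha256(f"{node_id}:{result}".encode()).hexdigest()

# Only once these commitments land on-chain does the committee become public,
# which is the anti-collusion property described above.
commits = {n: commit(n, 3512.05) for n in committee}
```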
Is it a fixed number of nodes that are chosen,
or does it depend on the security requirements
of a particular program?
Yes.
So the program can set
the replication factor:
how many of the overlay nodes
they want to have run this feed.
And it depends on a bunch of things, right?
That way you can cater
data security to the use case you use it for.
If it's something basic or non-DeFi-related,
it's okay if it's a small set of overlay nodes;
if it's something that holds a lot of value,
you probably want to increase the replication factor.
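Why a higher replication factor buys security can be made concrete with a back-of-the-envelope calculation: the chance that a randomly drawn committee contains a malicious majority, treating each draw as independent with a fixed fraction of bad nodes. This is an approximation for a large overlay pool, and the numbers are illustrative, not SEDA parameters.

```python
from math import comb

def p_committee_corrupt(replication: int, bad_fraction: float) -> float:
    """Probability that a strict majority of the committee is malicious,
    modeling selection as independent Bernoulli draws."""
    threshold = replication // 2 + 1
    return sum(
        comb(replication, k) * bad_fraction**k * (1 - bad_fraction)**(replication - k)
        for k in range(threshold, replication + 1)
    )

# With 20% bad nodes, going from 5 to 25 selected nodes drives the
# failure probability from roughly 5.8% down to below 0.1%.
small = p_committee_corrupt(5, 0.2)
large = p_committee_corrupt(25, 0.2)
```

This is the trade-off Jasper describes: a low-value feed can accept the small-committee risk, while anything holding real value pays for a larger committee.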
And then as somebody who creates this program,
because I want this job, I then, for example,
basically fund it, I put some money into this program that then pays
the node operators who...
Yeah, yeah. So when you
ping the chain and reference the program in order to have it
executed, you provide some gas fees, essentially, for the
computation. The computation gets done.
When you say ping the chain,
like who pings the chain and how?
Yeah, so anybody can ping the chain to say,
hey, query this price feed, right? So if you have a
price feed that you want updated at a fixed
interval, for example, you can ping the chain at that interval. And we're working on something
we call continuous data requests as well, which essentially says: hey, for the next X amount of time,
ping at every interval, or set a condition for when you want it to be pinged.
That's completely permissionless, right? So anybody can do that.
You can write a program that pings the chain directly. You could fire an event on your destination
chain. So I launch a rollup, I fire an event there that's then picked up by, I don't know,
some solver or relayer that sees, hey, they want this data, so we're going to do it for them
and bridge it back. There's a bunch of ways this can be done.
Right. Because the ping happens on SEDA chain. Yes.
Right, right. And then the result is written to some destination chain.
The result is written to SEDA chain first, where it's batched with any other requests that come in in the same block.
At that point, you collect the batch, which we do in a Merkle tree.
And then, like I mentioned, we sign the root.
So from there, as soon as the root is signed with a batch of data, you can verify the signature
from our validator set,
essentially from any smart contract,
and verify that the chain is in a certain state.
So what's cool there also
is that if one person or a group of people
fund an ETH/USD price feed
that is updated every minute,
any other chain now also has access
to that data,
because all of the data is stored
essentially in this manner.
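The batching step can be sketched with a minimal Merkle tree: every result answered in a block becomes a leaf, the validators sign only the root, and a destination chain can check any single result from the signed root plus a short proof. The leaf encoding and the duplicate-last-node rule are illustrative choices, not SEDA's actual format.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def _next_level(level: list[bytes]) -> list[bytes]:
    if len(level) % 2:  # duplicate the last node on odd-sized levels
        level = level + [level[-1]]
    return [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]

def merkle_root(leaves: list[bytes]) -> bytes:
    level = [h(l) for l in leaves]
    while len(level) > 1:
        level = _next_level(level)
    return level[0]

def merkle_proof(leaves: list[bytes], index: int) -> list[tuple[bytes, bool]]:
    """Sibling hashes (and our side) needed to recompute the root."""
    level, proof, i = [h(l) for l in leaves], [], index
    while len(level) > 1:
        if len(level) % 2:
            level = level + [level[-1]]
        sibling = i + 1 if i % 2 == 0 else i - 1
        proof.append((level[sibling], i % 2 == 0))
        level = _next_level(level)
        i //= 2
    return proof

def verify(leaf: bytes, proof: list[tuple[bytes, bool]], root: bytes) -> bool:
    node = h(leaf)
    for sibling, leaf_is_left in proof:
        node = h(node + sibling) if leaf_is_left else h(sibling + node)
    return node == root

# All requests answered in one block share a single signed root.
batch = [b"ETH/USD:3512.05", b"BTC/USD:67201.50", b"ATOM/USD:11.42"]
root = merkle_root(batch)
```

On the destination chain, checking the validator signatures over `root` and then `verify(leaf, proof, root)` is enough to trust one result without seeing the rest of the batch.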
Okay.
And then you need, on the destination chain, you need to be able to verify that signature, right? So you need to keep track of the validator set, or?
Yeah, yep. And that's the great thing about Tendermint: the validator set does not change that much, right? And we have a pretty vanilla chain, I'd say. But there's one extra condition, which is that if you leave the active set,
you're still required to at least sign the batch roots for another epoch, which
we're still deciding on; it's like the next 12 hours or something.
Oh, so the epochs are relatively short?
Yeah, yeah. It's like 12 hours to two days, we're thinking. And that's the most
often that the validator set could change. So you
would not have to update it on each destination chain
too frequently.
And what's cool is that in the batches, in the Merkle tree, we always reference
the next validator set. So as long as some data is pushed within that time frame,
somebody can say, hey, update the set, and done. Okay. One thing I'm curious about.
I mean, I know there's been some discussion in Cosmos on using something like BLS signatures, where you can basically aggregate a lot of different signatures into a single signature.
Is that something that would be very useful here because you could reduce the size of this thing?
Or does that matter for you guys?
Yeah, for sure.
We built a BLS signature implementation.
We built a few, actually,
because not every smart contract chain
has access to the same cryptographic hashes,
or sorry, the same cryptographic curves.
So we built a few signature schemes
that can be applied to the batches
so that they can at least be verified
with subsidized verification functions
on most VMs.
Yeah, what else is important, what are we missing
from this process? Like, you ping the chain, there's this program,
the nodes are being chosen, they write the data on there.
Yeah, I think the one thing we can touch on is the way that data is then transported
from SEDA chain to the destination chain.
Because essentially what you have now is that you're collecting data on our network,
right?
And yes, it's verifiable, but how do you incentivize
people to then bridge that data back to the chain where the data is actually supposed to be consumed?
For this, it's almost like an intent-based network, where there could be multiple reasons
why somebody would go and bridge the data back to the destination chain.
And the first one would be if there's MEV exposed.
So we can essentially run a lending network on an L1
or an app chain that we spin up ourselves,
use SEDA as a data source,
and as soon as there are liquidations that are supposed to happen,
then solvers can come in
and choose to bridge the data over
and perform the liquidations.
Sorry, I didn't totally get this.
So, okay, I understand the setup:
the program is running
and it has some price feed,
and now that's written onto SEDA chain.
And now the question is, I guess, how does it get to the destination chain?
Yeah.
How do you incentivize somebody to bridge it over, right?
So it can either be: if we run the lending protocol, we can run our own solver to bridge this data over and perform the liquidations.
Or you can rely on third-party solvers or searchers.
So what is a solver? Because it sounds kind of like a relayer
to me. I guess it would be a solver in the case where it's integrated into some
kind of exchange-type thing that relies on solvers.
Exactly. Exactly. So because it's
permissionlessly queryable, you can just add the ability for solvers to bridge the data,
or bridge the proof, along with any action they would normally perform, I don't know,
liquidations on perp DEXes or lending markets.
So basically there are different ways this can happen.
And I mean, I guess one example would be: I create some application on chain X
and I want the price feed, and I as the application
run some kind of relayer, pick it up, and put it
over there, because I have some external incentive, I want this application to run.
I guess that would be one model.
For sure.
Is it also part of the program that I could just say, hey,
anyone can do this relaying, and if they do it they get paid some fee?
Yep, yep. So that's the
bounty idea, right? You essentially place a bounty on the destination chain that covers gas fees
and then some for bridging the data over, and that way you outsource it.
And then the final one is, like I said, I just build it as a source for liquidations,
for example.
And I assume that at some point when a liquidation occurs, that exposes MEV, so there's an
incentive for third parties to come in and bridge the data over, right?
So it's essentially the same thing as a bounty, except you don't place the bounty.
It's just part of your network.
When you say liquidations, can you walk us through an example of this?
Yeah, of course.
So let's say we have a lending protocol, and I hold a position where I borrow USDC against ETH or something, at some liquidation ratio.
As soon as the price of ETH drops below the liquidation threshold, there is a fee to be made for liquidating that position.
So if you can bridge the proof from SEDA to that chain and prove that ETH was below the
liquidation threshold, you now earn that fee, without the protocol saying, hey, it's been X amount of time,
so you can claim a bounty. So you have other types of incentives to bridge the
data over as well, ones that are natural for certain DeFi applications.
Right, right. Okay, so in this example, of course, this would only work, I guess, if the data
is only needed for the liquidation, right?
Yeah, yeah.
Right.
So only if there's a liquidation, only if there's that incentive,
can someone else make money.
So that's when they're going to do it.
And would they then also potentially pay to run the program
on SEDA chain, to get the data, and then take that and move it over?
If the feed has not been updated to reflect the current price being below the liquidation threshold,
then yes, they would have the incentive to also run the feed, run the program.
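The liquidation incentive can be sketched as a solver's profitability check. Everything here is made up for illustration: the 1.5 collateral ratio, the 5% liquidation bonus, the position, and the function names; the point is only that an attested price below the threshold makes bridging the proof pay for itself.

```python
# Illustrative solver logic only: not SEDA's or any lending protocol's API.

LIQUIDATION_BONUS = 0.05  # fraction of seized collateral paid to the liquidator

def is_liquidatable(collateral_eth: float, debt_usdc: float,
                    eth_price: float, liq_ratio: float = 1.5) -> bool:
    """A position is unsafe once collateral value < debt * liq_ratio."""
    return collateral_eth * eth_price < debt_usdc * liq_ratio

def solver_profit(collateral_eth: float, eth_price: float,
                  bridge_gas_cost: float) -> float:
    """Payoff for bridging the price proof and performing the liquidation."""
    return collateral_eth * eth_price * LIQUIDATION_BONUS - bridge_gas_cost

# A solver watching the SEDA batch sees an attested ETH/USD price and acts
# only when bridging the proof is profitable, so no explicit bounty is needed.
position = {"collateral_eth": 10.0, "debt_usdc": 25_000.0}
seda_price = 3_400.0  # hypothetical attested price

if is_liquidatable(position["collateral_eth"], position["debt_usdc"], seda_price):
    profit = solver_profit(position["collateral_eth"], seda_price, bridge_gas_cost=50.0)
```

For the position above, a price of 3,400 makes the position unsafe (34,000 of collateral against a 37,500 requirement), so the solver's bonus comfortably covers the bridging gas: the bounty you never have to place.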
Okay, cool. Yeah, this feels like a very powerful primitive.
One thing I'm curious about:
we've talked about price feeds
because they're kind of a known use case,
and I imagine they're still going to be the largest one,
because in the end, crypto is basically
people building financial applications, right?
They want to trade, and price feeds are kind of essential for that.
But if you look at other types of data,
maybe also data where there's more of a subjective component?
maybe also data where there's maybe more of a subjective component.
Yeah, I actually say that price is one of the more subjective things
that's being pushed on chain today even actually.
Because like there's no truth on like what the price of eth is right now, right?
There's like a ton of algorithms that you could use to get as close to.
the truth as possible, but it's really hard to figure out what is truly the price of
now. So you just sort of like take a bunch and then you use a bunch of like algorithms essentially
to try to get to something that is probably close to the truth, as close as possible.
On more subjective data: that's probably good to
touch on as well. We previously actually built an optimistic oracle, which is more similar to
a UMA or an Augur, and essentially allows you to ask practically any question of the oracle,
where humans, or some machine programs, or really anybody, can
coordinate and come to a conclusion about what the answer to that question is. I think
SEDA as a primitive, as you call it, which I like a lot, is not really fit for that type of data.
It's more fit for API data. But yeah, maybe you could give an example of the
more subjective data sources?
I mean, I guess you could ask, I don't know,
who's the best band in France, or what's the best band in France, or something like that.
Yeah, no, I don't think so. Unless there's an API for that, you'd probably choose something else.
Yeah, right. But what you could do is ask an LLM through SEDA.
Okay.
To answer that question, right?
So that's another use case I forgot to touch on earlier:
because anybody can plug data into the system, you can query anything from a smart
contract.
So LLMs, for example, also become accessible to smart contracts through SEDA.
Because the LLMs are not really deterministic, right?
So you ask chat GPD something and I asked it something and then we get different answers.
And then so yeah, how would that work?
That's a great point.
There's two ways.
Some of them have deterministic endpoints that have some sort of a seed that you can prove like, hey, this was actually generated by this LLM.
The second one, which might be a little bit less sexy, but still works.
is that when the data provider plugs into the network,
you have them sign the prompts, right?
So let's say chatGBT is a data provider to SETA.
They sign the prompts,
and you can verify that the prompt at least comes,
or like the response at least comes from.
Open AI provides that, for example.
They provide like signed output.
Well, not yet.
Not yet.
Okay.
But if they would plug into SETA,
we would ask them to do that, yeah.
Yeah. Or if I launch a ChatGPT wrapper on top of SEDA, I would sign it, right?
Then you don't have proof that it was actually run by ChatGPT,
but at least you have somebody to point to as an attestation, essentially.
You have proof it was run by Jasper, who said he used ChatGPT.
Exactly. So it's not really bulletproof.
It depends on who hosts the wrapper, I guess.
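The attestation idea, having whoever hosts the model sign each prompt and response, can be sketched with an HMAC standing in for a real signature. A production version would use a public-key scheme (for example Ed25519) so anyone can verify without holding the secret; the key and payload format here are assumptions.

```python
import hashlib
import hmac

PROVIDER_KEY = b"provider-secret"  # hypothetical key held by the wrapper host

def sign_response(prompt: str, response: str) -> str:
    """The provider tags each (prompt, response) pair it served."""
    payload = f"{prompt}\n{response}".encode()
    return hmac.new(PROVIDER_KEY, payload, hashlib.sha256).hexdigest()

def verify_response(prompt: str, response: str, tag: str) -> bool:
    """Anyone holding the key can check the pair was not tampered with."""
    return hmac.compare_digest(sign_response(prompt, response), tag)

# The tag travels with the data through the oracle; it proves who served
# the answer, not that the underlying model actually produced it.
tag = sign_response("What's the best band in France?", "Daft Punk")
```

As the conversation notes, this only pins the answer to an identity (Jasper's wrapper, or OpenAI), which is an attestation rather than bulletproof provenance.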
Yeah, and I mean, I'm sure people are going to solve this in some way, right?
Where you're going to have some kind of deterministic, verifiable LLM outputs.
Yeah, Midjourney, I believe, has deterministic prompt-to-image.
I forget exactly how it works;
one of our engineers built a prototype using, I believe, Midjourney.
So we're talking a little bit about
determinism and verifiability; I'm curious,
how do zero-knowledge proofs intersect with SEDA?
I think fully homomorphic encryption could be very interesting for keeping the data
private before it lands on-chain. So essentially what you could do is have
these overlay nodes that do the query and computation perform it
through a fully homomorphic encrypted service or data providers. As for the ZK thing,
I'm not 100% sure how we could use it, except for privacy, for a ZK version of what we're building.
It's not on the roadmap yet, but it's obviously something we think about.
I don't see much more than that, except for the ability to query ZK light clients and prove chain state in a more elegant way.
That's something that makes a lot of sense.
And then SEDA can be used as more of a data transport layer, which is essentially what it's designed to do.
So, yeah, that's the other topic I wanted to come back to because I think you mentioned earlier.
bridging, right, as a problem.
And of course, bridging is something where
interoperability is something where we've seen, you know,
a massive amount of activities and investment.
I mean, you have like protocol like IBC that has a lot of usage, right?
That basically relies on like, you know, light lines in each chain
that, you know, there's a lot of usage in cosmos,
but including also a whole bunch of teams trying to bring that, you know,
like everywhere, like things like.
like polymer union and other ones.
And then of course you have other protocols like wormhole,
things like Axler or Layer Zero.
There's like a huge amount of activity there.
How does SETA do you think that SEDA would be like a viable alternative
or competitive to these existing bridging solutions?
Yes. I also think that we can be very complementary to a lot of them. So we're talking to some of the teams you mentioned about SEDA being part of their stack. Some of these solutions still require oracles, or essentially somebody to attest data, right? A lot of them are in the end still some sort of multisig, but you try to improve the multisigs. For example, LayerZero still requires an oracle and a relayer, as they call them, which is essentially a 2-of-2 multisig in which an oracle and a relayer have to agree on the state of something. And they still use a lot of RPC data, for example.
So, how it works with SEDA is that everybody can plug into the network and provide data, and anybody can verify SEDA's oracle state, essentially through something we built that's pretty similar to IBC, just a one-way, light version of IBC.
So RPC providers can provide data to SEDA; we're talking to a bunch of them. Essentially what they do is open up their API to SEDA. And then people can write programs that query a bunch of RPC providers to verify chain state, querying multiple for the sake of redundancy, right? So you can write a program that queries, I don't know, Infura, QuickNode and Alchemy to verify the state of ETH, to verify block hashes or contract state. And then you have that run through SEDA and stored there. And then that can be queried from any L1, which then essentially has access to that piece of state from Ethereum.
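The redundancy pattern Jasper describes, querying several RPC providers and only accepting a result that enough of them agree on, can be sketched roughly like this. The provider names and fetcher functions are hypothetical stand-ins, not SEDA's actual program API:

```python
from collections import Counter

def fetch_block_hash_majority(fetchers, block_number, quorum=2):
    """Query several (hypothetical) RPC fetchers for the same block hash
    and accept a result only if at least `quorum` providers agree."""
    results = []
    for name, fetch in fetchers.items():
        try:
            results.append(fetch(block_number))
        except Exception:
            pass  # a single flaky provider shouldn't kill the query
    if not results:
        raise RuntimeError("no provider responded")
    value, count = Counter(results).most_common(1)[0]
    if count < quorum:
        raise RuntimeError("providers disagree; no quorum reached")
    return value

# Stand-in fetchers simulating Infura/QuickNode/Alchemy responses.
fetchers = {
    "infura": lambda n: "0xabc123",
    "quicknode": lambda n: "0xabc123",
    "alchemy": lambda n: "0xdeadbeef",  # an outlier gets outvoted
}
print(fetch_block_hash_majority(fetchers, 19_000_000))  # -> 0xabc123
```

The point is simply that no single provider is trusted: a wrong or missing answer from one endpoint is outvoted by the rest.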
Right, so that could be used by these interoperability protocols as part of their consensus. That's something we've talked about with a bunch of them, and it seems to make sense. And I think what makes the most sense is the fact that as soon as data enters our chain, it's verifiable from any other chain, so you get this super broad distribution mechanism simply by having the data come into our network. And it's very interesting for RPC providers too, because they get to monetize that data through on-chain transactions as well. Right now, essentially, RPC providers just sell their data to people that want to query chain state from, I don't know, a server or something. But there's a huge demand on-chain to query chain state as well. So we're allowing them to tap into that too.
Yeah, I mean to open sort of an additional market there.
Yeah, exactly.
Is RPC also something that... so, we just did a podcast with Lava Network the other week, right? I was talking quite a bit with them. And I mean, the thing that occurs to me is that there's actually a lot of similarity here, right? They also basically have some kind of on-chain contract that then requires a bunch of people, so that they can run RPC nodes. Although I guess in their case the results wouldn't be written on-chain, right? It stays off-chain. So, yeah.
But all they need to do is plug one of their RPC gateways, or whatever they call them, into the SEDA network, and now they also get to monetize on-chain. And I think that's actually really interesting, especially for the more distributed RPC networks, because they have this sort of permissionless aspect to them as well, right? That's one of the core goals of the Lava Networks or the Pocket Networks of this world: to allow permissionless access to RPCs and have super high-quality service through load distribution and network distribution. So I think that plugging that into a network like SEDA makes a lot of sense. And we are in conversation with some of these providers as well.
I mean, I guess the flip side is that, in the end, SEDA provides this verifiability, right? Because if I ask, okay, what's the balance of this account on Ethereum or something like that, then maybe I want it verified, because it's going to trigger something, or maybe I just want to show something on a website. And I guess SEDA would be really good if you want it verified, and something like Lava would be good if you don't need it verified, or don't need it verified on-chain.
SEDA couldn't work without somebody like Lava, or an Infura, providing that data to SEDA, right? And then the more the merrier, because you can create redundancy, which increases security even more. So I think that, yeah, that will be the main point, actually.
Yeah, yeah. Basically, the same operators that would serve something like Lava could also serve something like SEDA.
Right.
Yeah, I think the idea is basically that Lava does the exact opposite of what SEDA does, right? The exact other direction. They try to make on-chain state accessible off-chain through RPC; we are trying to make all of the RPC data available on-chain. And we don't want to build our own RPC network. That's a whole different beast to slay. So we would rather have somebody like Lava plug into our network and provide data and be paid, essentially, to do what they do already, but for smart contracts.
So, I mean, I get that what SEDA seems very useful for is some degree of interoperability, in the sense that I want state from something that happens on chain A to trigger something on chain B. That seems like getting that data over. But then of course one of the main use cases for interoperability is basically moving tokens from one chain to another, meaning you lock them up on one chain and create some sort of asset that can be redeemed for the asset on the original chain. I guess that is something one could theoretically build on top of SEDA, but it's not something SEDA will natively be able to do, right?
Yeah, so you'd need to build it into SEDA in the form of a program. But what you could also do is plug SEDA into Hyperlane or LayerZero, use it as the oracle, and now you have a super decentralized bridge, right? So if you use SEDA with a third-party bridge application that's configurable, all of a sudden you create this sort of hyper-interoperable scaling, due to the permissionless aspect. So you could launch a roll-up tomorrow, plug your RPC into SEDA, use Hyperlane, configure the oracle to be SEDA, and now you're interoperable with essentially any of the other chains that also have RPC data available on SEDA.
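For context, the lock-and-mint bridging pattern the host raised, lock an asset on the origin chain and mint a redeemable representation on the destination, can be sketched in a simplified single-process form. This is the generic pattern, not SEDA's or any particular bridge's real API; all names here are illustrative:

```python
class LockAndMintBridge:
    """Toy single-process model of lock-and-mint bridging:
    tokens locked on chain A back minted representations on chain B."""

    def __init__(self):
        self.locked_on_a = 0   # escrow balance on the origin chain
        self.minted_on_b = {}  # wrapped balances on the destination chain

    def lock_and_mint(self, user, amount):
        # In a real bridge, an oracle would attest the lock before minting.
        self.locked_on_a += amount
        self.minted_on_b[user] = self.minted_on_b.get(user, 0) + amount

    def burn_and_redeem(self, user, amount):
        if self.minted_on_b.get(user, 0) < amount:
            raise ValueError("insufficient wrapped balance")
        self.minted_on_b[user] -= amount
        self.locked_on_a -= amount  # released from escrow on chain A
        return amount

bridge = LockAndMintBridge()
bridge.lock_and_mint("alice", 100)
bridge.burn_and_redeem("alice", 40)
print(bridge.locked_on_a, bridge.minted_on_b["alice"])  # -> 60 60
```

The invariant that locked escrow always equals the total minted supply is exactly what the oracle in such a design has to attest.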
And I think that is super powerful. Like you said, it's a primitive. And I think that's where we need to go in the space, right? Infrastructure needs to be commoditized and accessible to all, instead of being gatekept and acting as kingmakers.
Yeah. Cool. So I know you guys have been working on this for a few years. Where are you right now?
Yeah. So we've been working in the oracle space for a few years. Like I mentioned, we built an optimistic oracle before, and then we also launched a first-party oracle.
Optimistic? What's an optimistic oracle?
Yeah, an optimistic oracle is essentially an oracle where you ask a question, right? Like, what's the price of ETH? And somebody, or a group of people, will answer it. And then you have this dispute resolution mechanism that dictates, in the end, what the outcome was.
So the benefit is it's very accessible; you can ask subjective questions. The annoying part is that it can be incredibly slow if it's a controversial question.
Because you need some challenge period or something?
Yeah, there are challenge periods, and if somebody challenges within the challenge period, it extends, right? Up to a certain point, and it can take super, super long.
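The challenge-period mechanics being described can be sketched as a toy state machine. The timings and the bonded-dispute details of a real optimistic oracle like UMA's are simplified away here; this only illustrates why disputes stretch out resolution time:

```python
class OptimisticOracle:
    """Toy optimistic oracle: an answer is proposed, then finalizes only
    after a challenge window passes without dispute. Each dispute restarts
    the window, which is why controversial questions resolve slowly."""

    def __init__(self, challenge_period=10, max_rounds=3):
        self.challenge_period = challenge_period
        self.max_rounds = max_rounds  # cap before final arbitration

    def resolve(self, proposed_answer, disputes):
        """`disputes` is a list of (round, alternative_answer) events."""
        answer, elapsed = proposed_answer, self.challenge_period
        for rnd, alternative in disputes:
            if rnd >= self.max_rounds:
                break  # escalate to arbitration instead of extending again
            answer = alternative              # dispute proposes a new outcome
            elapsed += self.challenge_period  # challenge window restarts
        return answer, elapsed

oracle = OptimisticOracle(challenge_period=10)
# Uncontested question: fast path, one window.
print(oracle.resolve("ETH=$3000", []))  # -> ('ETH=$3000', 10)
# Two disputes: resolution takes three full windows.
print(oracle.resolve("ETH=$3000", [(0, "ETH=$2990"), (1, "ETH=$3000")]))
# -> ('ETH=$3000', 30)
```

The uncontested path is quick; every dispute adds another full window, which is the slowness being described.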
It's cool, but the use cases are not that obvious, right? You see it with UMA, who have built this really cool oracle, but it's hard to find third-party developers that get it enough to build on it. So they end up building a lot of the projects on top of their oracle in-house, like oSnap or Across, for example.
So we were building that. And in parallel, we were building sort of the opposite, which was a first-party oracle, which is purely API data. We essentially just give software to data providers, who then push data on-chain themselves directly. And that worked really, really well. We had a few data providers pushing price data on-chain, and the cool thing was that we grew from zero total value secured to over three billion in total value secured over the span of, I believe, two months.
So, value secured: what kind of value was secured with that?
It was mostly lending DeFi. Essentially, total value secured, as I put it, is: if our oracle got hacked or exploited, how much money could you steal?
Okay.
So these were some DeFi protocols that basically depended on the oracle you guys had.
Yes, exactly. The cumulative total value secured of DeFi protocols using Flux's first-party oracle. So, yeah, we grew to be the second largest oracle in the last bull run.
Do you know how big that number was for Chainlink? Like $60 billion?
60 billion, yeah. Maybe 80. Somewhere between 60 and 80.
Yes, so we grew extremely quickly, which was cool. But we also ran into horizontal scaling issues, right? We were the gatekeepers at that point: we were picking where we deployed to, monitoring all of these chains, making sure there were balances on all of these chains, making sure the validators and data providers were all happy and pushing, and that everything was going well. It just didn't scale well, and that's why you see these huge wait times for integrations with the more centrally run oracle networks. And that's really what we were trying to solve, and that's when we started designing what became SEDA.
Next up, though, at the end of this quarter, we're launching our token migration.
So end of this quarter means, like, end of March or something?
Yeah, mid to late March. We just kicked off audits.
Because the SEDA token exists today, and it's on Ethereum, or...?
Yes, exactly. With these previous oracles, we launched a token, and that's going to be the same token securing the SEDA chain. So we need some time to migrate a large part of the tokens over before we go to mainnet with the oracle modules. We're launching sort of a vanilla Cosmos chain with token migration enabled, and are looking to launch the first oracle modules to mainnet hopefully in Q2. So by Q2, there should be the first mainnet data flowing to chains through SEDA.
Cool, that's exciting. Do you already know some of the first applications or use cases people are building on top of SEDA?
Yep, yep. The thing I'm most passionate about, I think for this and the next cycle, is the idea of chain abstraction.
Can you explain chain abstraction?
Yes. So right now you have all these roll-ups popping up, right? And they're all completely siloed off. As a user, if I want to interact with one, I have to find a way to bridge tokens to it, to get tokens to that instance of a chain that I want to interact with. I have to go to my wallet, integrate the RPC, swap to the correct chain. It's as if, to switch from Facebook to Instagram, all of a sudden I had to change my IP settings, or bridge my account over, my photos or statuses or something. It's really terrible UX. So the idea of chain abstraction, and I think there are multiple implementations of it, but to me it's this: you create a single layer that people interact with. And then in the back end there are actors, solvers, that actually perform the actions in your name on the destination chains. So you never have to touch those chains, but you still own representations of those actions, so that you're the rightful owner of the positions on these other chains. And in order to do that, you need primitives that allow you to verify state for all of these chains, right? And you cannot rely on some third-party actor to deploy their interoperability primitive to all of these chains separately. So I think that's where SEDA has an extreme advantage: the fact that you can permissionlessly deploy SEDA on all of these chains, and you will always be able to rely on the same standard to query data, to verify whether something happened or not on these destination chains.
So that is something I personally focus on, making sure we're working with multiple teams that are trying to solve for this. And the other ones are, I mean, new roll-ups launching that need price data, and existing roll-ups that are sick of waiting for another oracle to launch on their network and are like, can you guys please go live so that we can have access to price feeds and basic interoperability and such. I think that's the lowest-hanging fruit for now. And then we have some more interesting niche use cases that we're working on that involve some of the LLM things, which are fun, but it remains to be seen whether they're going to be core to our business. But yeah, those are the sort of categories that we're focusing on internally.
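The intent-and-solver flow just described, where the user touches only a single abstraction layer while a solver acts on the destination chain, can be sketched minimally like this. Every class and field name here is a hypothetical illustration, not any project's real interface:

```python
from dataclasses import dataclass

@dataclass
class Intent:
    user: str
    action: str      # e.g. "swap", "deposit"
    dest_chain: str  # chain the user never touches directly

class Solver:
    """Executes intents on destination chains on the user's behalf."""
    def execute(self, intent: Intent) -> dict:
        # In practice this would submit a real transaction; here we just
        # record a receipt that an oracle layer could later verify.
        return {"chain": intent.dest_chain,
                "action": intent.action,
                "owner": intent.user,
                "status": "executed"}

class AbstractionLayer:
    """Single entry point: users submit intents, solvers do the rest,
    and the user keeps ownership of the resulting positions."""
    def __init__(self, solver: Solver):
        self.solver = solver
        self.positions = []  # receipts proving the user's ownership

    def submit(self, intent: Intent) -> dict:
        receipt = self.solver.execute(intent)
        self.positions.append(receipt)
        return receipt

layer = AbstractionLayer(Solver())
r = layer.submit(Intent(user="alice", action="deposit", dest_chain="rollup-X"))
print(r["owner"], r["chain"])  # -> alice rollup-X
```

The key property is the one Jasper emphasizes: the user never interacts with "rollup-X" directly, yet the receipt still records them as the owner of the resulting position, which is what an oracle primitive would have to verify.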
Cool. And then, are these Ethereum oracles, these Flux oracles, still running, or have they been discontinued?
No, we shut them down. We sunset them early last year, or somewhere around September last year, I think.
So the token has sort of been dormant, kind of waiting for the rebirth...
Yeah, exactly.
Exactly. We've also been really stealth, because we had this thesis that this is where the space was going. But until we saw it play out, we weren't really comfortable being loud yet, not until we were closer to launching and confident that we had come up with the correct solution. But now that we are, we're a lot more comfortable going on podcasts and talking to a ton of projects about what we're doing.
I mean, to me it sounds like a really logical way of approaching this problem, and something that, yeah, I can see a massive market for. So it sounds very cool.
Yeah, we're super excited. We can't wait to get this thing live. We're ready.
Cool. Well, thanks so much for coming on, Jasper. So if people want to check it out, the website is seda.xyz. Anything else you want to shill?
Yeah. Follow us on Twitter at SEDA Protocol. I'm Jasper Flux on Twitter. Check out our Discord, come hang out, ask questions. If you're a builder in the space launching your own network, and the idea of finding oracle providers seems daunting, reach out. We're happy to spar with you on how to get data to your chain, talk about what data you need, and make sure we're able to support you as soon as we launch.
Cool. Well, thanks again, Jasper. It was great to have you. And thanks to the listeners for tuning in. We'll look forward to being back next week.
Thank you for joining us on this week's episode.
We release new episodes every week.
You can find and subscribe to the show on iTunes, Spotify, YouTube, SoundCloud, or wherever
you listen to podcasts.
And if you have a Google Home or Alexa device, you can tell it to listen to the latest
episode of the Epicenter podcast.
Go to epicenter.tv slash subscribe for a full list of places where you can watch and listen.
And while you're there, be sure to sign up for the newsletter, so you get new episodes in your inbox as they're released.
If you want to interact with us, guests, or other podcast listeners, you can follow us on Twitter.
And please leave us a review on iTunes.
It helps people find the show, and we're always happy to read them.
So thanks so much, and we look forward to being back next week.
