Epicenter - Learn about Crypto, Blockchain, Ethereum, Bitcoin and Distributed Technologies - RedStone: The Oracle Pioneering the Future of DeFi - Marcin Kaźmierczak
Episode Date: January 8, 2025

The DeFi landscape has evolved significantly since 2018, when Chainlink launched. Recent developments such as L2 rollups, liquid staking, restaking and the rise of BTC DeFi have created huge demand for more customizable, modular oracles able to provide accurate data for countless use cases, cross-chain. RedStone set out to do exactly that and is now securing over $6.6bn worth of assets (1,000+ assets) across more than 60 chains, without a single mispricing event.

Topics covered in this episode:
- Marcin's background
- Early oracle landscape
- RedStone's technical architecture
- Network incentives
- Data aggregation module
- Node operator module
- Push vs. pull oracles
- RedStone's business model
- The role of the RedStone token
- Pyth vs. RedStone
- Restaking
- Oracle extractable value
- Synergies between oracles and institutional investors

Episode links:
- Marcin Kazmierczak on X
- RedStone on X

Sponsors:
- Gnosis: Gnosis has built decentralized infrastructure for the Ethereum ecosystem since 2015. This year marks the launch of Gnosis Pay, the world's first decentralized payment network. Get started today at gnosis.io
- Chorus One: Chorus One is one of the largest node operators worldwide, supporting more than 100,000 delegators across 45 networks. The recently launched OPUS allows staking up to 8,000 ETH in a single transaction. Enjoy the highest yields and institutional-grade security at chorus.one

This episode is hosted by Brian Fabian Crain.
Transcript
Chainlink, the biggest player on the oracle market, is naturally positive about the growth of the ecosystem, but it's definitely not supporting all the use cases we wanted to create. Delivering data to blockchain ecosystems and dapps is a very, very broad term. The most important thing is to create a system that is scalable and also sustainable long-term. The biggest problem of oracles back then was the gas cost on Ethereum and also the cost of adding a new feed. So we tried to create a model where both of those challenges are solved. That's the reason we started Redstone out of the Arweave blockchain, the storage chain, incubation program, with the promise of creating a more modular approach that can be tweaked and upgraded as progress in technology is achieved.
Welcome to Epicenter, the show which talks about the technologies, projects and people driving decentralization and the blockchain revolution. I'm Brian Crain, and today I'm speaking with Marcin, who is the co-founder of Redstone. Redstone is an oracle project. They've been getting a lot of traction recently, so I'm excited to dive in with Marcin on Redstone and oracles in general. And just before we dive in with Marcin, we'd like to share a few words about our sponsors this week.
If you're looking to stake your crypto with confidence, look no further than Chorus One. More than 150,000 delegators, including institutions like BitGo, Pantera Capital and Ledger, trust Chorus One with their assets. They support over 50 blockchains and are leaders in governance on networks like Cosmos, ensuring your stake is responsibly managed. Thanks to their advanced MEV research, you can also enjoy the highest staking rewards. You can stake directly from your preferred wallet, set up a white-label node, restake your assets on EigenLayer or Symbiotic, or use their SDK for multi-chain staking in your app. Learn more at chorus.one and start staking today.
This episode is proudly brought to you by Gnosis, a collective dedicated to advancing a decentralized future. Gnosis leads innovation with Circles, Gnosis Pay, and Metri, reshaping open banking and money. With Hashi and Gnosis VPN, they're building a more resilient, privacy-focused internet. If you're looking for an L1 to launch your project, Gnosis Chain offers the same development environment as Ethereum with lower transaction fees. It's supported by over 200,000 validators, making Gnosis Chain a reliable and credibly neutral foundation for your applications. GnosisDAO drives Gnosis governance, where every voice matters. Join the Gnosis community in the GnosisDAO forum today. Deploy on the EVM-compatible Gnosis Chain or secure the network with just one GNO and affordable hardware. Start your decentralization journey today at gnosis.io.
Thanks so much for coming on, Marcin. It's really great to have you on.
Hey, everyone. Thank you for having me. I'm excited. Epicenter is an OG show and I'm super happy to be over here.
Yeah, absolutely. So let's start at the beginning. How did you get into crypto?
Oh, it's an interesting one. So 2017, I was writing my bachelor thesis about blockchain and Bitcoin.
And then I got all into the weeds of, you know, early ICO white papers and how the ecosystem was structured by then.
I joined one of the startups called Polish Accelerator of Blockchain Technology because I live in Warsaw, Poland.
I was working on a project to create something that nowadays resembles Chainalysis or Elliptic. So we were working with the Polish police, the national unit, to track ransomware attacks that asked the victims to send Bitcoin to a specific wallet. And the Polish police at that time, in 2017-2018, had zero tooling to track such criminals. So basically I was coordinating such a product back then, but the startup went bust. I didn't get three months of salary, so that was a hard start. Then I went to work in a couple of startups and at Google Cloud, and in 2020, together with Jakub Wojciechowski, the technical founder of Redstone, we kicked off Redstone. So that's how I got here.
So how did you decide to focus on oracles? What was the reason to focus on this particular area?
Well, oracles as a concept are super interesting, in the sense that delivering data to blockchain ecosystems and dapps is a very, very broad term. The most important thing is to create a system that is scalable and also sustainable long-term, especially as crypto technology is evolving all the time. The innovation cycle is probably, I don't know, three, maybe six months, so you have to keep upgrading and be on the lookout for the most effective technologies to implement. And in late 2020, early '21, we acknowledged that Chainlink, the biggest player on the oracle market, is naturally positive about the growth of the ecosystem, but it's definitely not supporting all the use cases we wanted to create.
For example, when we asked them about some price feeds around interest rates at the time, or something more sophisticated than the blue chips, the queue was super long. Like, we were waiting a couple of months to get something simple. That made us realize that the biggest problem of oracles at that time was the gas cost on Ethereum and also the cost of adding a new feed. So we tried to create a model where both of those challenges are solved. That's the reason we started Redstone out of the Arweave blockchain, the storage chain, incubation program, with the promise of creating a more modular approach that can be tweaked and upgraded as progress in technology is achieved. So whenever there is something new that can improve the technology, we don't have to recreate the whole end-to-end pipeline like Chainlink has nowadays. And so far it's been going pretty well.
Right.
So basically with Chainlink, you saw the biggest issue as being the time and flexibility of adding different feeds, supporting different types of data, maybe different chains, and the gas cost of oracles. I mean, we've been running Chainlink at Chorus for many years, and I think it's gotten a lot better now, but at the time, I think the amount of gas it was consuming was absolutely bananas. So I definitely can relate to this challenge back then. Aside from Chainlink, what did the oracle landscape look like back then?
That's an interesting question.
So 2017, 2018 was actually the period of ICOs. So some people gathered quite an interesting amount of money to create projects and protocols that ended up not working extremely well. Some oracles I remember from back then are Tellor or Band. And I think there was also Oraclize or something like this. The majority of those didn't get traction.
And there are a couple of reasons for that. The major one is that oracles have a kind of economy of scale. The more protocols are using you, the easier it is to get new ones to join the network and share the cost. So when you have such a dominant player as Chainlink back then, and the market in 2020 and '21 was also a bit smaller, it was expanding, but it was still fairly niche, then there was probably not enough space for many oracles that were fairly similar to each other, I would say. Many of them were just trying to copycat Chainlink and then do it cheaper, right? So not much innovation. So in 2021, Chainlink was the only dominant player.
Right now, in 2024 and 2025, I would say it's not a monopoly anymore. On the market, the top three players right now are Chainlink; Pyth, which started on Solana and then went cross-chain with the Wormhole bridge and is trying to innovate as well; and Redstone, which is in the top three oracles when we talk about the number of clients. So right now we have over 130 clients. And when it comes to the number of chains supported, right now we support over 70 chains across both the push and pull models. And with push specifically, the Chainlink-style model, we support right now over 30 chains, already more than Chainlink. So the ecosystem is evolving. And I would say, as crypto right now is truly getting proper acceleration, this nimbleness of an oracle is going to be super crucial.
So let's talk a little bit about, you know, how does Redstone work?
Like, what is the technical architecture and what does the solution look like?
Okay, that's a good question.
We created four modules in the oracle, and three out of the four can be used on all the networks we expand to. So we don't have to redeploy the whole node architecture on every single chain we try to launch on, as Chainlink does. That's the reason that, for them, the cost of going to a new ecosystem is very high, and they need to create enormous contracts with some of those chains. Like, we've heard stories of chains paying not even millions, but dozens of millions of dollars to get that. With Redstone, the modules are: the data sourcing module, where we pull data right now from over 180 sources; the node operators module, where data providers run nodes sourcing data from those sources and then aggregating it; and the data aggregation module, which we call the data distribution layer, an off-chain component that you can think of as a decentralized cache. So anyone can join it. There is no token-based consensus, but it's an open architecture that everyone can join with a gateway node and pass signed data packages from those data providers. It's available for anyone.
And early in the day, the biggest challenge was to create a solution over here that is protected against DDoS. So we did extensive work to make sure it holds up under stress tests and no one can actually spam the network. So those signed data packages go to the third module, which is this data distribution layer. And from there, it's transparent, it's public, anyone can pick up a signed data package, and it's delivered to the on-chain environment with the fourth module, the data delivery layer.
And for EVM networks there is a standard package, which we call the EVM connector. So in the pull model that we have, every single EVM chain can use Redstone almost right away, as long as it's EVM compatible in fact, because some of the networks claim they are, and then in the nitty-gritty it turns out that's not necessarily the case. And with the push model, we create a data delivery service, a pushing infrastructure, for every single ecosystem that would also like to have the Chainlink-compatible interface. And here, for example, we can give kudos to Chainlink for standardizing the interface with the Chainlink Aggregator V3 that people just like and use, and we also adopted it over here in the push model itself.
When we go to non-EVM ecosystems, all three components I started with, so the sourcing, node operators, and data aggregation, can be reused. All of that is kept off-chain, so it can be utilized on any blockchain that we go to. And the data delivery layer, this EVM connector, can be adjusted into a TON connector (for example, we were the first oracle to launch on the TON network), a Starknet connector, a Fuel connector, a Solana connector, and so forth. So the piece that we have to adjust for a new ecosystem is way smaller in comparison to Chainlink redeploying its whole architecture.
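To make the flow above concrete, here is a minimal TypeScript sketch of what a node operator signing a data package might look like. The field names, encoding, and example key are illustrative assumptions, not RedStone's actual wire format.

```typescript
// Hypothetical shape of a signed data package; not RedStone's actual format.
import { Wallet, solidityPackedKeccak256, getBytes } from "ethers";

interface DataPackage {
  feedId: string;    // e.g. "ETH"
  value: bigint;     // price scaled to 8 decimals
  timestamp: number; // unix time (ms) when the value was observed
}

// A node operator hashes the package deterministically and signs the digest,
// so a verifier contract on any chain can recompute and check it.
async function signDataPackage(pkg: DataPackage, signer: Wallet): Promise<string> {
  const digest = solidityPackedKeccak256(
    ["string", "uint256", "uint256"],
    [pkg.feedId, pkg.value, pkg.timestamp]
  );
  return signer.signMessage(getBytes(digest));
}

// Usage with a well-known test key (never use a real key like this).
const operator = new Wallet(
  "0xac0974bec39a17e36ba4a6b4d238ff944bacb478cbed5efcae784d7bf4f2ff80"
);
signDataPackage(
  { feedId: "ETH", value: 3_500_00000000n, timestamp: Date.now() },
  operator
).then((sig) => console.log("signed package:", sig));
```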
Okay, so let's go into a little bit of detail. So you said, first of all, the data sourcing module.
How does that work?
So each data provider operating a Redstone node can choose one of two paths. Either adopt a library that we prepared and utilize all the data sources that we identified, such as on-chain sources, for example Curve, Uniswap, Balancer, and so on, so many of the decentralized exchanges. And this is super important for yield-bearing assets, for example LSTs, LRTs, Ethena, and other yield-bearing collateral, because those assets are not traded on centralized exchanges. They are usually traded on decentralized exchanges. So that sourcing is really important. Historically, both Chainlink and Pyth struggled a lot to call the on-chain sources directly to create a price feed. The second type of price source are centralized exchanges, so regular Binance, Coinbase, Bybit, OKX and so on. And the third type are the aggregators, such as Kaiko, CoinGecko, CoinMarketCap, and so on, that aggregate data from many of the other sources.
The second option for the data provider node, so the node operator, is to implement their own sourcing module. So, for example, one of the operators of a data node is Kaiko, or Auros, or fairly soon Alchemy as well. Those can implement their own pricing methodology if they have one. And the reason we allow that is that we get not only redundancy at the level of the nodes, in that we have many nodes, but also redundancy in the methodology. So if the methodology that we share with the data providers has a glitch for any reason, all of them will follow that glitch, but the operators that come with their own sourcing methodology, there is a high chance they are not going to replicate it. And this is one of the known problems with the operators of Chainlink, because all of them are following literally the same methodology. So if there is a problem with the methodology, even though there is redundancy in the number of nodes, the outcome is going to be skewed, because there's no redundancy in that form.
Okay.
And then how do you, I mean, I imagine there can also be some issues with people just implementing their own methodology, because, I don't know, maybe it's flawed or maybe there's some sort of conflict of interest. Is there some way you manage that? And then also, what's the incentive for someone to do it? Because I imagine it's quite a bit of work to develop your own methodology there.
Well, the entities that come with their own methodology usually don't do it specifically for Redstone. Rather, they have the methodology either way. Kaiko, for example, is a company that creates a methodology for pricing assets anyway and delivers that to many of the traditional markets. We also work with market makers that have all the data about people trading on various venues. So it's usually more like: if a data provider comes in and they already have the methodology, they are comfortable utilizing it to add redundancy to the flow. As for incentivization, right now with early data providers we give out grants in future Redstone tokens so that they operate the node itself in a, let's say, redundant way. Since the beginning of the network, we have had zero downtime issues or even price-skewing issues, because we run an autonomous checker that verifies whether the data delivered is far outside of the boundary set by other providers. So if there is a skewness, for example in the BTC price, the checker is going to penalize that data provider.
Once the Redstone token is launched, and that should happen soon, we are preparing for that, preparing the new version of the whitepaper and all the necessary steps to further decentralize the network, there is going to be a staking contract where all the data providers will need to stake Redstone tokens. The moment they have either downtime or huge negligence in terms of the accuracy of the data, they will be automatically slashed. And also there's going to be a module for people to vote on whether a price feed was outside of the reasonable boundary but still not caught by the checker. So imagine there's a Bitcoin/USD price feed and it trades at $100,000. Is a report of $98,000 a wrong price? Hard to tell. It depends on the asset and on the markets that are utilizing that price. So for each single asset, starting from the blue-chip ones, we have a specific boundary within which a price is still deemed acceptable. But if anyone thinks that, within this boundary, an accepted price was actually incorrect, they can raise a dispute against a specific data provider: hey, this was actually a fraudulent price, it shouldn't be accepted as the correct one. And then token holders can vote on whether that was true or not.
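As a rough illustration of the boundary check described here, consider the sketch below; the boundary values and the flagging rule are assumptions for illustration, not RedStone's actual parameters.

```typescript
// Sketch of a per-asset deviation boundary check. Providers outside the
// boundary get penalized automatically; prices inside it can still be
// disputed by token-holder vote, as described above.
interface Report { provider: string; price: number }

// Assumed acceptable relative deviation from the median, per asset.
const BOUNDARIES: Record<string, number> = { BTC: 0.02, ETH: 0.02 };

function flagOutOfBounds(asset: string, reports: Report[]): string[] {
  const sorted = reports.map((r) => r.price).sort((a, b) => a - b);
  const median = sorted[Math.floor(sorted.length / 2)];
  const bound = BOUNDARIES[asset] ?? 0.01;
  return reports
    .filter((r) => Math.abs(r.price - median) / median > bound)
    .map((r) => r.provider);
}

// BTC around $100,000: a $98,000 report sits inside a 2% boundary and would
// need a dispute vote, while $90,000 is flagged automatically.
console.log(flagOutOfBounds("BTC", [
  { provider: "A", price: 100_000 },
  { provider: "B", price: 99_900 },
  { provider: "C", price: 98_000 },
  { provider: "D", price: 90_000 },
])); // -> ["D"]
```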
Okay, okay.
So that's the data sourcing module.
And then the second module, you said there was something about the node.
Node operator.
So the things I explained here were both of those modules. For data sourcing, you can either use Redstone's or your own, as you wish. And the data providers module is where they aggregate the data from those sources and then deliver it to the data aggregation module, which is the data distribution layer. And if the package they deliver is skewed, as mentioned, they are going to get slashed the moment they have Redstone tokens staked as collateral.
Okay, so the data aggregation module then basically says, okay, we have some kind of algorithm, we are going to take the median or some kind of statistical measure and remove outliers. And is this happening on-chain or off-chain?
Off-chain.
The part that I explained over here is happening off-chain, but we are also introducing right now an alternative module, the restaking one, on top of restaking protocols like EigenLayer and Symbiotic. We have that running in test on EigenLayer right now, and we are also exploring Symbiotic for it, to allow this aggregation to also happen on the restaking nodes. So the attestors attest that the data was delivered and signed, and that it's within this boundary. So the flow that I explained right now can also happen on the restaking flow itself.
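A trimmed median is one common way to aggregate values while removing outliers; here is a minimal sketch, with the trimming ratio as an assumed parameter rather than RedStone's actual aggregation logic.

```typescript
// Off-chain aggregation sketch: sort provider values, trim an equal share
// of the lowest and highest observations, and return the median of the rest.
function aggregate(values: number[], trimRatio = 0.2): number {
  const sorted = [...values].sort((a, b) => a - b);
  const k = Math.floor(sorted.length * trimRatio); // trimmed per side
  const trimmed = sorted.slice(k, sorted.length - k);
  const mid = Math.floor(trimmed.length / 2);
  return trimmed.length % 2 !== 0
    ? trimmed[mid]
    : (trimmed[mid - 1] + trimmed[mid]) / 2;
}

// The 3600.0 outlier is trimmed away before the median is taken.
console.log(aggregate([3500.1, 3499.8, 3500.3, 3497.0, 3600.0])); // 3500.1
```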
Okay, so then it will basically be that with this AVS, you'd have some kind of checks happening there to verify the data that's then distributed on any of the chains Redstone supports, or is this specifically for data that's related to restaking?
It will be for the most used assets that people care about, because adding restaking just for restaking's sake, let's call it, doesn't bring much value. Restaking costs. This is something that people usually don't mention, but running a restaking flow also requires giving out incentives and making sure the infrastructure is running. So there is a cost associated, and there should also be a gain in terms of business and go-to-market. And we are not going to run this module for the coins that people don't care about, the long tail. Most likely it's going to be for blue chips such as ETH or Bitcoin, on the networks where people have a lot of TVL at stake, so that they are sure that the restaking can also scale up with the amount of TVL that we are protecting, for example on Base, on Arbitrum, or even on Ethereum mainnet.
Okay, okay.
So we talked about, first of all, the data sourcing, right? There are a lot of different places from which people pull the data; they can either use your libraries or they can have some sort of methodology of their own. And then the node operator module. I mean, in the Chainlink case, right, they basically choose operators for the different feeds, and then you have a specific number of operators that will report the data for a particular feed on some particular network. Do you have a similar model there? So do you guys choose basically a bunch of operators to do this? Or is it a more open system where anyone can come in?
It's not open in the sense that everyone can come in, because creating a truly open system for data sourcing in oracles is, in my opinion, impossible. The reason is that you could then create a sophisticated attack on an oracle. So there should be a sort of authority or reputation system. In our case, right now we are whitelisting the data providers. So you have to apply as a data provider with Redstone. Then we review the application. We put you in staging for at least two months, and we analyze the quality of the data that you deliver. So becoming a data provider is not something that happens in a moment. It takes time. But we are onboarding new data providers regularly to make sure that the system keeps growing. And one thing that you said that is super important over here is that there, all the node operators have to choose which network they support. So again, you can see the matrix of the complexity of Chainlink is growing: not only do you have many node operators, not only do you have many assets they have to subscribe to, but there are also many networks they have to support. And we abstracted those away, especially when it comes to the networks they support. So our operators just have to declare which assets they support and run the node. The networks part is fully abstracted away because of this data distribution layer, because they send the data not on-chain, but to this cache environment that is off-chain. So they don't have to subscribe to a specific chain ecosystem.
Okay, so basically the node operators, they provide this data to this off-chain module,
and then the data is basically delivered to different places.
Exactly.
So the same data, the same signed packages, are transparent, are visible, anyone can decode them, it's open source, so everyone can check it out. And then those signed data packages are delivered to a destination blockchain. Imagine the same package is delivered to, let's say, Ethereum and Avalanche. On both of those networks, the smart contract that is receiving the package has a number of validated signatures that are acceptable in the flow itself. So if anyone tampered with the price package before it got to the smart contract, the signature is not going to match and the price package is going to be reverted. As long as the signature matches on the destination chain, it can be utilized in the context of a smart contract or a specific dapp. So thanks to that, we can literally tap into the security of the destination chain with this verification of the signature.
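Roughly, the verification the destination contract performs can be sketched as follows, expressed in TypeScript for readability and matching the illustrative signing scheme from the earlier sketch; the whitelist and threshold are assumed values.

```typescript
// Recover each signer from the signed digest and require that enough
// whitelisted operators signed. Tampered data recovers a different,
// non-whitelisted address, so the package is rejected.
import { verifyMessage, getBytes, solidityPackedKeccak256 } from "ethers";

const WHITELIST = new Set([
  "0xf39Fd6e51aad88F6F4ce6aB8827279cffFb92266", // example operator addresses
  "0x70997970C51812dc3A010C7d01b50e0d17dc79C8",
]);
const THRESHOLD = 2; // minimum matching signatures

function verifyPackage(
  feedId: string,
  value: bigint,
  timestamp: number,
  signatures: string[]
): boolean {
  const digest = solidityPackedKeccak256(
    ["string", "uint256", "uint256"],
    [feedId, value, timestamp]
  );
  const validSigners = new Set<string>();
  for (const sig of signatures) {
    const signer = verifyMessage(getBytes(digest), sig);
    if (WHITELIST.has(signer)) validSigners.add(signer);
  }
  return validSigners.size >= THRESHOLD;
}
```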
But it means that, for example, the smart contract on Avalanche or on Ethereum has to basically be updated to say, okay, this is the signature I accept. So then you're basically going to update this for each of the node operators, to say, okay, this node operator is a part of it. And now the packets, or the signed data, can be distributed to all these different chains and verified natively there.
Exactly.
And on the destination chain, because we support both models, the pull oracle model and the push oracle model, those are the two most popular ones in blockchain ecosystems right now. In the pull model, where it's more dapp-specific, for example Gearbox is using it right now, DeltaPrime, Cian, Curvance, they can even specify which data providers they want to accept and which ones they want to exclude. So they can specify which of the whitelisted signatures they still want to utilize. So imagine they don't like one of the providers, or they just deem this provider could have a shakeup. Imagine back in the day FTX and the whole, let's say, crash. When someone already smelled that, hey, there is a chance it can collapse, I mean, they put a lot of leverage on it, and I just don't want to play that game because it doesn't add much to my ecosystem; I can just use all the other ones and not lose much. They would just exclude FTX as the data provider and utilize all the other ones. So we want to also give the builders a lot of flexibility, and in general, our motto with Redstone is 'by builders, for builders.' So we always try to focus on the engineers, because they are the ones that are integrating the oracle at the end.
Can you explain this pull model and push model a bit more?
Like, how do those two work?
Of course.
So the push model is the one that was most popularized by Chainlink. You take the packet and then you throw it on-chain, into on-chain storage. And all the protocols that are on a given network, let's say Avalanche, can just read that data. And the updates happen based on the deviation threshold and the heartbeat.
So for example, an ETH/USD price feed on Avalanche or Ethereum mainnet can have a 0.5% deviation threshold and a 24-hour heartbeat. So by design, the users of that data accept that the data on-chain is going to diverge from the real price on centralized exchanges and all the other sources by, in theory, a maximum of 0.5%. But we also checked the historical performance of Chainlink, and many times, due to the consensus mechanism, it's even more than half a percent. So by design, with the push model, you accept it's going to be inaccurate, and then it's just a question of how inaccurate you're going to go.
And the reason for that is the gas cost, as you mentioned. Each update with the push model requires the oracle to spend gas on the update itself, and on Ethereum mainnet that can go as high as even thousands of dollars for one update. When the network gets crazy, with, for example, CryptoKitties back in the day or even the EigenLayer airdrop campaign, the gas was going very high. Then all of those updates are crazy expensive.
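The update rule for a push feed reduces to a few lines; this sketch mirrors the 0.5% deviation threshold and 24-hour heartbeat from the example above (the parameter values are just that example, not universal defaults).

```typescript
// Push on-chain only when the price moved past the deviation threshold
// or the heartbeat interval elapsed; otherwise skip and save the gas.
interface FeedState { lastPrice: number; lastUpdateMs: number }

const DEVIATION = 0.005;                  // 0.5% threshold
const HEARTBEAT_MS = 24 * 60 * 60 * 1000; // 24-hour heartbeat

function shouldPush(state: FeedState, newPrice: number, nowMs: number): boolean {
  const moved =
    Math.abs(newPrice - state.lastPrice) / state.lastPrice >= DEVIATION;
  const stale = nowMs - state.lastUpdateMs >= HEARTBEAT_MS;
  return moved || stale;
}
```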
In the pull model, as a contrast, it's more dapp-first. The push model is chain-first: it delivers data to the chain and every dapp on the chain can use it. The pull model is more dapp-specific, for a specific project. Take Gearbox as an example. Gearbox is using the Redstone pull model right now in such a way that they fetch a packet from this data distribution layer, the off-chain environment, a signed data packet, whenever there is a transaction from a user using Gearbox. So imagine, Brian, you're using Gearbox and you're signing a transaction in MetaMask or Ledger or whatever. Gearbox, quickly, with their front end, would fetch a signed data packet from Redstone, take the calldata of your transaction, attach the signed data packet, and then your transaction delivers it on-chain. On-chain, in a smart contract, the signatures are verified to check whether they come from an accepted data provider. If they do, the transaction is executed. So essentially what is happening over here is that you as a user covered the gas. There is a small margin we are adding on top of the gas that you would pay. So instead of paying, for example, $3 for a transaction, you're paying $3.02. We optimize the gas a lot, so it's pretty marginal for the user. And also, you're updating the price feeds with every user interaction, because what matters for the dapps is not the price on-chain; what matters for the dapps is the price at the moment the user is taking an action or a liquidation is happening. So one user interaction can be you creating a new position, as I explained, but another user can just run a liquidation bot, and the moment it sees a position that is underwater, it can fetch this signed data package from the Redstone data distribution layer and liquidate the position within a single transaction.
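A minimal sketch of the front-end side of that pull flow might look like this; the gateway URL, query path, and payload field are hypothetical, not RedStone's actual API.

```typescript
// Fetch a signed price packet from a gateway and append it to the user's
// transaction calldata; the receiving contract parses the trailing bytes,
// verifies the signatures, and then executes the original call.
import { concat } from "ethers";

async function withPricePayload(
  txData: string,    // 0x-prefixed calldata of the original call
  gatewayUrl: string // assumed gateway endpoint
): Promise<string> {
  const res = await fetch(`${gatewayUrl}/data-packages/latest?feed=ETH`);
  const packet: { payloadHex: string } = await res.json(); // 0x-prefixed hex
  return concat([txData, packet.payloadHex]); // calldata + signed price payload
}
```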
So for this Redstone data distribution layer, where you go and fetch that update, is that kind of centralized? Does one basically call the Redstone API server, or how does that work?
So those are gateways. We as Redstone are running, I believe, five gateways right now that you can choose from. You can run your own gateway. We have people from our community also running gateways. And for redundancy, we are also utilizing some anti-DDoS solutions such as the Streamr network to ensure those packets are available for anyone to pick up. So essentially you are just utilizing a specific gateway to fetch that packet, whichever you like.
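For redundancy on the consumer side, a simple fallback loop over several gateways is enough; the endpoints below are placeholders.

```typescript
// Try each gateway in order and return the first signed packet that arrives;
// a slow or dead gateway is skipped after a short timeout.
async function fetchFromAnyGateway(gatewayUrls: string[]): Promise<unknown> {
  let lastError: unknown;
  for (const url of gatewayUrls) {
    try {
      const res = await fetch(url, { signal: AbortSignal.timeout(2_000) });
      if (res.ok) return await res.json();
    } catch (err) {
      lastError = err; // fall through to the next gateway
    }
  }
  throw new Error(`all gateways failed: ${String(lastError)}`);
}

// Usage with hypothetical endpoints, including a self-hosted gateway.
fetchFromAnyGateway([
  "https://gateway-1.example.com/latest",
  "https://gateway-2.example.com/latest",
  "http://localhost:3000/latest",
]).then(console.log);
```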
Okay, okay.
And so basically the gateway, if someone were to run this gateway, then basically by default, if you have a node operator and you sign some package, you just sort of send it to all the gateways, and they have some kind of peer-to-peer network exchange to keep them updated.
Exactly.
And imagine you're a protocol and you're afraid that, you know, the gateways might blow up at one point or whatever. You can run your own gateway, or you can run your own three gateways. The cost of running one is fairly small, because we also optimized to make sure the gateways are lightweight. So you can add as many layers of redundancy as you like.
Yeah. Okay. No, that seems like an elegant solution, where basically, whenever someone interacts with a protocol, they just supply the current state along with it and update it. And then, of course, you have the latest data whenever that happens.
Exactly. So in this data distribution layer, right now we update the
price feeds every three seconds. So instead of getting 24-hour heartbeat and half percent deviation
threshold, you get three-second latency. And we are about to upgrade those gateways to also offer
a fast gateway option, so even faster.
So for people that do not care that much about decentralization, and usually those are perpetual dexes and highly optimized solutions, there is always this tradeoff, and you, Brian, probably know it more than anyone else. If you go faster and more accurate, then the decentralization usually has to be traded off. The more decentralized you are, the more latency you have to add. So we are also right now publishing new gateways that will allow for sub-second price delivery, so that for perp dexes the data is not too old.
I see there are obviously advantages to this pull model. What are the biggest trade-offs between the push and pull models? What are some of the advantages of the push model, for example?
I would say the biggest advantage of the push model is that it's already widely known. Chainlink did a good job, as mentioned, with the Chainlink Aggregator V3 interface, so people are just familiar with it. With the pull model, you have to do some code updates to make sure your smart contracts are capable of receiving this signed packet, extracting the data, and checking the signatures.
So there is some need to update the smart contract. What we see is a very common path, I would say: the older protocols, so forks of Aave V3, Compound V2, or any protocol that was designed in 2020-22, usually prefer the push model, because they're already audited, they don't want to change the code and, you know, want to stay very much aligned with the older infrastructure. But all the new protocols try to optimize with the pull model, because they still have some plasticity in the design they are creating. And there is one big beauty that people don't recognize in the pull model: once you integrate the pull model from Redstone, you can utilize it on any EVM network. So, for example, if you're a protocol that cares a lot about cross-chain availability and you want to deploy to 10 or 50 networks quickly, once you integrate it on one EVM chain, you can go to almost every EVM chain, as long as the EVM compatibility holds end-to-end.
Yeah, yeah. No, I can see that. So there's basically a little bit more development overhead in the pull model, but then you have the advantage of faster updates. It's cheaper, and the cost is borne by the users who are actually calling these contracts. And it scales easily: wherever you deploy the smart contract, any EVM chain will support it out of the box.
Exactly.
And maybe one important aspect: all the price feeds that we have configured, in the sense of pairs, are available to anyone in the pull model. With the push model, on many of the networks only five or ten price feeds are supported. You can go to app.redstone.finance and check out the push model portal. You can see that on some networks we push 20 pairs, but on some we push only three pairs, right? Because it's very much dependent on who is going to pay for the gas of those updates. In the pull model, the gas is not a problem anymore, because the user, or a liquidator, is incentivized to pay for this additional payload to execute the transaction. Thanks to that, you can utilize right now over 200 price feeds that are in production mode. And we have over 1,200 assets in demo mode. Whenever you need to bring one from demo to production, you can just ask our team to bring it into the pull model. It's a fairly simple process.
Okay, okay.
I'm curious, what is the business model for Redstone?
What is the business model for Chainlink? That is the question. They are the leader.
No, I mean, this is a good question for oracles as a category, right? Because the moment you push data on-chain, it's available for anyone. So you have no way to, let's say, monetize that, because anyone can just query it; it's in on-chain storage, in a public ledger, on Ethereum, Avalanche, Arbitrum, and so on. So how do we create a business model?
In the push model, we usually ask the ecosystem to cover the development cost and the gas of pushing data on-chain. And new ecosystems are very willing to cooperate with us, because we do it, I would say, at a reasonable scale, in contrast to the leader on the market. Whenever you're a protocol that issues a new coin, imagine Ethena, EtherFi, Renzo, and you would like your coin to be integrated into DeFi, you need an oracle for that. And the process with Chainlink usually takes weeks, if not months, and is also fairly expensive. That's the reason we were the first oracle to create LRT price feeds, for example for EtherFi, Renzo, Puffer, Kelp. Then we were the first oracle to create a price feed for Ethena's USDe and sUSDe. We were the first oracle to create a price feed for pzETH from Renzo, the Symbiotic vault token, as well. And now we are the first, and still the only, oracle to create a price feed for LBTC from Lombard, a Bitcoin liquid staking protocol. Right now only Redstone offers that feed.
So for that development work, we are also discussing contracts with those protocols. And last but not least, for the protocols that are utilizing the pull model, we usually add a small margin when they are calling our feeds. Right now it's marginal; for some it's even turned off. The reason is we want to scale; we are still in the growth phase. But in the future you can also add a small margin on top of the gas fee that the user has to pay.
Can you explain the Redstone token? Would those extra fees then end up in some sort of treasury on each chain that's controlled by the Redstone token, or how will that fee be managed?
That's a really good question, especially as we are getting closer and closer to decentralizing the Redstone ecosystem further. The node operators will be getting a portion of the fees collected by the pull model. So essentially it's going to go to a smart contract that later distributes those fees to the node operators. And very interestingly, when we were designing this architecture, restaking was not present; we were designing it in 2022, early '23. Then EigenLayer came into place, and Symbiotic as well. So right now we are also considering utilizing this restaking flow, so that people can restake RED, the token itself, but also other assets like ETH and the major LSTs, on the restaking infrastructure. But essentially, all the fees collected should be distributed to the operators themselves.
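Since the distribution contract is still being designed, here is only a rough pro-rata sketch of the idea: fees accrue and are split among operators in proportion to staked RED.

```typescript
// Assumed pro-rata split: each operator's payout is proportional to its
// share of the total RED staked. The real contract may weigh other factors.
function distributeFees(
  totalFees: number,
  stakes: Record<string, number>
): Record<string, number> {
  const totalStake = Object.values(stakes).reduce((a, b) => a + b, 0);
  const payouts: Record<string, number> = {};
  for (const [operator, stake] of Object.entries(stakes)) {
    payouts[operator] = (totalFees * stake) / totalStake;
  }
  return payouts;
}

// 10,000 in fees: opA staked half of all RED, so it receives half.
console.log(distributeFees(10_000, { opA: 500_000, opB: 250_000, opC: 250_000 }));
```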
Okay, so the fees get distributed to the operators. And then the operators have to stake RED.
Correct. They have to stake RED to make sure they are eligible for delivering the price feeds. And if they have downtime or misreport data, they get slashed.
Okay, okay. And so
I suppose one mechanism here is that, well, if there are a lot of revenues that node operators can generate, then it sort of makes them more willing to buy. So they would have to acquire RED tokens then, to stake them.
Exactly.
So in the early days, naturally, we'll be giving out some incentivization programs to make sure the early adopters are there, you know, the node operators. But in the long term, they will also be able to acquire the token on the market to get into the system and increase the portion of the stake, and therefore of the fees, that they're going to get.
Okay.
Is there also going to be, let's say someone has RED tokens, and, let's say, Chorus One runs as a node operator. Okay, one thing is that Chorus One itself could stake some RED tokens. Do you also have some system where people can delegate and stake with different operators?
Yes. This is right now being finalized: whether we are going to fully utilize the restaking contracts for that, so the restaking flow where people can easily, you know, restake to specific operators, or whether we are going to create our own infrastructure to do so. But there will be a capability for regular users to also delegate to specific operators.
Would they then earn some portion of the fees that the operator gets?
Exactly. Okay. What else is there? Are there any other functions for the token?
Well, as we progress with Redstone, naturally new functions can be added, but the major one is going to be making sure that the data providers are reporting correct data, because this is the essence of an oracle, and also this delegation. So for now, those are the major utility aspects planned for the token itself.
You mentioned the biggest players today being, you know, Chainlink, Pyth and you guys. For Chainlink, we talked a little bit about what their approach is and how it differs from Redstone. What about Pyth? How do you see Pyth, and how do they compare with Redstone?
This is an interesting one. So Pyth started in the Solana ecosystem. Essentially what they did was fork Solana and call it Pythnet. And then they asked the data providers, which they call publishers, to publish the data feeds to Pythnet. And from Pythnet, Wormhole is utilized as the bridge to deliver the data cross-chain. And while this approach can work pretty nicely on Solana itself, we believe there are a couple of drawbacks in the cross-chain expansion.
One important one being the gas costs. With every packet that they deliver cross-chain, they also have to verify the signatures of Wormhole, which is fairly expensive. I don't know the exact numbers, but it's orders of magnitude higher than Redstone. So for each network where the gas is not so optimized, so Ethereum, Bitcoin L2s, many of which have non-optimized gas, or other networks with high demand for the block space and the possibility for gas to go up, this is a big problem. Solana itself is fairly cheap in terms of gas, so it's not such an issue there, but for many networks it is.
The second thing is they cannot source data from on-chain sources. So for example, when there is a new yield-bearing asset, usually the majority of liquidity is on Curve, Balancer, Velodrome, Uniswap, and so on. And we have all of those sources plugged into the data sourcing module already. So if we want to create a price feed for such an asset, we can do it within hours, very quickly. But Pyth needs the publishers to deliver that data to Pythnet, and those publishers are usually either centralized exchanges or market makers or bigger players that care about centralized exchange trading volume, not the liquidity on-chain. So they struggle to create such price feeds in a timely manner.
The third aspect where we differ is that Pyth doesn't support the push model. We as Redstone started with the pull model, but we quickly realized that for the OG protocols the push model matters a lot, because they don't want to interfere with smart contracts that are already well audited. That's the reason we made a strategic decision to offer both the push and the pull model. So whenever there is a new network, imagine for example Unichain, where we are a launch partner, Ink, where we are a launch partner, and many others that are coming, like Berachain, Monad and so on, they think about their ecosystem: okay, I want to have both the push and pull models for the protocols building on top of me. I can go to Chainlink for the push model, I can go to Pyth for the pull model, or I can go to Redstone and get both. So I would say the value proposition for all the new ecosystems is fairly visible when it comes to choosing Redstone.
And last but not least, I would say the mechanics of Pyth is that they are traders. They are ex-quants from Jump Crypto, one of the big market makers in the world. The origin of Pyth as a company is that they started as an entity incubated from Jump. And that's the reason the major characteristic of people in the organization is traders, not developers. Whereas with Redstone, right now 75% of our team members are engineers. Jakub, who used to be a smart contract auditor with OpenZeppelin in the past and has been in the Ethereum ecosystem since 2016, makes sure that all the engineers coming to Redstone are also very seasoned ones. The average number of years of experience for an engineer at Redstone right now is, I think, 13. So we make sure that the pipes, let's say the infrastructure, are very, very solid.
Okay, okay.
But then the Pyth model is also this pull model, where you basically say, hey, I'm going to call some contract on some chain, and then I basically receive the feed via Wormhole. And does that also work with the same kind of latency as the Redstone model?
I'm not sure what the latency is now. I believe it depends on the network with Pyth. With some networks they have seconds; with some they can already go sub-second, as far as I know. So it's similar to the Redstone approach, in the sense that we give the decision to the integrating protocol: whether they want to have more distributed gateways but higher latency, or more centralized gateways with lower latency. With Pyth, the issue is they all have to pull from Pythnet, and Pythnet is governed by Pyth. There's no token that is, you know, governing Pythnet. The Pyth token itself is used for, I think, staking or some other capabilities, but not for Pythnet itself. Pythnet is a centralized blockchain that they're running for the operators themselves. So there is no, let's say, choosing. You always have to pull from that one source.
Okay, one question. You guys decided to build an AVS, I think an EigenLayer-secured AVS. Why did you decide to go down that route? And what were maybe some of the alternative designs you considered?
Well, we started designing Redstone in '21, and then the final mainnet was launched in January 2023; up until then, restaking wasn't a big thing. It wasn't a hot topic, right? I would say the end of '23 and the beginning of '24 was when restaking started to boom. But as mentioned, we made sure that the architecture we have is modular and that we can implement new solutions as they appear on the market. I met Sreeram for the first time, I believe, at the end of '23 at Devconnect in Istanbul, if I remember correctly. And back then, I already knew that the restaking game was going to be important for the systems that care about decentralization and cryptoeconomic security at large. So we decided to create an alternative module to the whole flow that I presented, where restaking is utilized to secure the most important price feeds. We already created the testnet with the help of Othentic, which is an infrastructure provider on top of EigenLayer, to make sure all the modules that we create in the restaking flow are easier to implement. And we are also experimenting with Symbiotic to check out how their infrastructure operates.
So it's still not finally decided which path we're going to go down, but the reason we do this is that we believe, at large, especially with those blue-chip assets for big protocols, they are going to care about the cryptoeconomic security: how much value someone would have to hold to skew the price feed and create an attack that ends up being profitable. So we were very much aligned with the restaking flow, and that's the reason we started to build in that direction as well.
So one topic that I think has gotten some attention in the last few years, or maybe more recently, is the topic of oracle extractable value. Can you explain what that is and what its significance is?
So oracle extractable value is value that appears in the blockchain ecosystem when an oracle update is delivered and causes a liquidation. It can be on a lending market, a CDP stablecoin, a perps protocol and others. The way it works is that whenever the update is delivered on-chain in the push model, and it's available for anyone, the liquidators bribe the MEV searchers to make sure their liquidation is included in the next block, and that they are the party that's going to liquidate a specific position. Let's take, for example, Venus Protocol, the biggest lending protocol on BNB Chain, which we work very closely with. They decided to work with Redstone to implement a solution where we create a very fast auction before the price feed is delivered. So let me explain that.
Imagine there is a BNB/USD price feed, which is the most important one on the BNB network, and the next price update on-chain in the push model is going to cause a liquidation. We partnered with FastLane, who created a protocol called Atlas, for a very quick auction. When the data is being delivered to the next block, before it actually lands on the block itself, there is a 500-millisecond auction created for liquidators to bid for the price feed, to get privileged access to it and liquidate the position. So essentially what is happening is: instead of paying, for example, a 5% liquidation bonus to the liquidators that ends up being eaten by MEV bots, the liquidator is paying, for example, 2% of the value of the position to get the priority pass for liquidating this asset in the very same block that the price feed is updated, and then this 2% is redistributed back to the protocol itself.
So less value is leaking outside of the system. And as for the beauty of the Redstone implementation, there are three major points. The first one: it doesn't add additional delay. The auction lasts for half a second, 500 milliseconds, which is negligible even for fast networks. Maybe on MegaETH it's going to be a bit different, because MegaETH tries to have something like 10-millisecond block times, so we'll see how it plays out there. But for the majority of networks, it's negligible. The second thing is that the protocol doesn't have to make any code changes. We learned very much from our own experience that people don't like to change smart contracts once they are delivered and audited, which we also appreciate, because changes tamper with the potential security of the ecosystem. So we created a flow that is interchangeable with the Chainlink interface; they can just use Redstone and tap into it all right away. And the third benefit of this solution is that if, for whatever reason, the auction within this Atlas protocol fails, so there's a glitch, or the auction didn't go through, or there were no bidders, then the price feed is updated as usual and the regular liquidation flow can happen. So the worst-case scenario is just the regular outcome of the system as it runs today. This is already running in production on Venus Protocol and a couple of other protocols that we cannot announce yet, but they're implementing it. And the results are very satisfactory, with about 90% of OEV opportunities being captured.
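The economics of that auction can be sketched numerically; the bonus and rebate percentages below are the illustrative figures from the discussion, and the settlement rule itself is an assumption.

```typescript
// If bidders show up, the highest rebate wins and flows back to the protocol;
// if nobody bids, the regular liquidation flow with the full bonus applies.
interface Bid { liquidator: string; rebatePct: number }

function settleOevAuction(
  positionValue: number,
  bids: Bid[],
  liquidationBonusPct = 0.05 // the 5% bonus from the example above
) {
  if (bids.length === 0) {
    return {
      winner: null,
      protocolRebate: 0,
      liquidatorProfit: positionValue * liquidationBonusPct,
    };
  }
  const winner = bids.reduce((a, b) => (b.rebatePct > a.rebatePct ? b : a));
  const rebate = positionValue * winner.rebatePct;
  return {
    winner: winner.liquidator,
    protocolRebate: rebate, // redistributed back to the protocol
    liquidatorProfit: positionValue * liquidationBonusPct - rebate,
  };
}

// A $1M position: a winning 2% rebate returns $20,000 to the protocol,
// leaving the liquidator $30,000 instead of the full $50,000 bonus.
console.log(settleOevAuction(1_000_000, [
  { liquidator: "botA", rebatePct: 0.015 },
  { liquidator: "botB", rebatePct: 0.02 },
]));
```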
Okay, okay, okay.
So does oracle extractable value specifically apply to liquidations, or are there other cases where this also comes into play?
There will be some other moments where OEV can matter, but right now the biggest opportunity is in liquidations themselves. Whenever there is a liquidation, there's always this liquidation bonus, to make sure the liquidators are incentivized to actually take the position, sell it on the market, and repay the debt in the lending or CDP protocol. And right now the biggest opportunity is there. We ran an analysis with Venus, and if they had implemented Redstone OEV at the launch of Venus, they would have captured approximately $100 million by today, which is a pretty sizable amount of money, to put it mildly.
That's a lot, yeah, yeah, yeah, for sure.
I'm curious, what kind of synergies or effects do you see between oracles and more institutional use cases of crypto?
It's a brilliant question for 2025, especially as Trump is going to be, you know, officially inaugurated on the 20th of January, as far as I know. It's going to spark a lot of excitement within the institutions that want to try out blockchain ecosystems.
So the inflow of new players to the market that are not so sophisticated with blockchain technology is going to be pretty large. And we at Redstone are strategically going to create a division next year to onboard those institutions. A lot of educational work is going to be needed to explain what makes sense, what doesn't, and where blockchain technology can actually bring value. What I envision happening is three things.
One, we are going to see way more traditional companies launching their own L2 or network, similar to Sony launching Soneium this year; with the OP Stack being so standard right now in the ecosystem, more and more companies are going to launch their own L2s. I really would like to see behemoths like Uber or Airbnb launching their own chain and then settling transactions on their own ecosystem, and also utilizing a token to incentivize users to keep using their platform instead of competitors'. That can be huge at large. And we as an oracle want to support those kinds of use cases.
The second area is tokenization of assets. Right now we are cooperating with one of the largest players in tokenization of assets in the U.S. We are supporting them in the cross-chain expansion and the utilization of a tokenized fund in DeFi protocols themselves. And especially with BlackRock releasing the short video clip promoting Bitcoin (I'm not sure if you've seen it, because it was released yesterday; it's a three-minute video in which BlackRock literally promotes Bitcoin as an asset), I believe it's going to be a signal for a lot of institutions interested in tokenization to actually explore, either pilot or go full force. And we as an oracle, as Redstone, are ready to support them, because we already learned a lot with this first client that we onboarded.
And the third thing that is going to happen is in Europe, where we are going to have new players following MiCA. I wouldn't say MiCA is ultimately good for the whole ecosystem, because there are some rules that are not necessarily so clear or beneficial for the crypto builders. But the very important aspect is that it gives clarity. The biggest problem so far for institutions and corporates in Europe engaging with crypto was that there was no clarity. There were no rules. So they really weren't aware of what's possible, what's according to the book and what's against the book. And right now with MiCA, especially after the first quarter, when it will already have been tested, I believe there are going to be more and more players engaging with it.
Just looking at what's ahead for 2025, 2026, how do you see the whole oracle landscape evolving?
I believe it will be opening up further. So one thing I really like about Chainlink is they try to expand the pie in terms of new players and institutions. They try to educate the banks and many of the big players globally that crypto in general is a very interesting technology and that they should engage with it. So I believe this is positive for the whole ecosystem. What I'm not a big fan of with Chainlink specifically are some of the tactics, the monopolistic approaches and some of the less elegant plays, let's say, on the market. And I get information from many of the protocols that are cooperating with them that they are more and more tired of them trying to make sure that they are the only dominant player. So in 2025, I expect the market to open up even further.
There are two reasons for that. One, for the new players that are going to come to the market, Chainlink will have no capacity to support them, because of the design I mentioned. If there's a new chain, it takes them a very long time and a lot of cost to deploy there, and we are here to support them; with new assets it's the same. We are always the first oracle to support all of those new assets that people are excited about in DeFi. And also there is a pretty large legacy in terms of Chainlink's technical implementation that they will have to repay over time, and it's going to impact their pipeline of new integrations.
With Redstone, our bet is that there are going to be way more specialized app chains, either EVM or non-EVM networks. We want to expand further to potential non-EVM ecosystems. We are launching on Sui, and soon on Solana, Aptos and other Move-language ecosystems. Movement L2 as well; it's already confirmed.
And as we progress through the next year, I expect Chainlink, Pyth and Redstone to become even more dominant, because, as mentioned, there are economies of scale. A lot of new entrants succeed in securing, for example, one or two or three clients, but then they realize it's very hard to get over a barrier of, let's say, 10 clients. So the cost of choosing an early oracle provider is pretty high, because you don't know whether they're going to sustain it or not.
Redstone, since January 2023 when we launched mainnet, has had zero, I repeat, zero price manipulations or downtimes, which is not the case with both Pyth and Chainlink. Both of those players have had smaller price misreportings.
For example, a pretty big one with Chainlink was wrapped staked ETH in December 2023, when Chainlink reported a price skewed by 25%, which is a lot for such a big asset. Or when Pyth had a problem, when Wormhole was struggling to reach consensus, I believe at the beginning of 2024: they stopped delivering prices cross-chain, right? Because they rely 100% on the bridge itself. So if the bridge is down, they stop delivering cross-chain.
And one narrative we are going to nail as Redstone next year, and we are very close with this ecosystem, is BTC-Fi, so programmability and applications on top of Bitcoin. Bitcoin as an asset has no ceiling. Right now it's over $100K, and we believe it's going to keep growing and expanding. And we are super close with Babylon, Lombard, PumpBTC, Lorenzo, Solv, and the whole ecosystem of BTC-Fi, and we'll be supporting their expansion with proof of reserves, as we already delivered for Lombard, but also other use cases that will allow them to go to new networks and create more sophisticated use cases based on Bitcoin itself.
Cool. Fantastic. Anything else you want to touch on?
Well, I am super positive about 2025. To be honest, when I was entering '24 I was fairly optimistic, but for '25 I truly believe it's going to be the year of Redstone, given the number of clients and big partnerships we have lined up. We are also running Redstone Expedition, which is our program for engaging the community, where you can earn Redstone gems, which are points within the ecosystem. As for the team itself, we just had our Christmas event and everyone got pretty cool new Redstone backpacks, and they are all pretty excited about that.
And one thing I want to finish with: the crypto scene is getting more and more polarized. I have a feeling that after the elections in the US, people just want to align themselves somewhere, to be in favor of one option and against the other. And you can see it between EVM and non-EVM networks, Solana versus Sui. Recently I also saw on Twitter a big fight between Aave and Morpho, in which Polygon is involved. So people will try to polarize more and more.
And as Redstone, what we really want to focus on is growing the pie and making sure especially the new use cases are addressed. Because we believe, okay, if someone is attacking you, you naturally have to respond and, like, defend yourself, but we ourselves will not go in for very strongly polarized takes, even though we get advice from many of the marketing agencies: it doesn't matter what you say, it just has to be polarizing so that you get people's attention. We don't want to play that game. We are builders, and long-term we want to support builders. And that's the reason our core focus is just to expand the universe and bring new value to the ecosystem itself. And myself, I'm also very proud of the crypto folks for having kept pushing the frontier. And right now, crypto is in full force going into the bull market.
Absolutely. Well, thanks so much, Marcin, for coming on. It's really great to learn about Redstone. It does feel like a very elegant and scalable design, and I think it's abundantly clear to everyone, you know, just how crucial oracles are to support a wide range of use cases. I'm excited to see how Redstone is going to develop in the next year, and excited to see also how the token launch will go soon.
Thank you, Brian. It was an absolute pleasure to join you over here. The token itself is coming soon, naturally. But we are pretty positive, and I'm extremely excited and dedicated to keep growing Redstone. I'm working 14 hours a day with a smile on my face, and I'm satisfied with the traction that we deliver. So thanks a lot for inviting me, and I will keep following Epicenter for sure.
Thanks so much, Marcin.
