Epicenter - Learn about Crypto, Blockchain, Ethereum, Bitcoin and Distributed Technologies - Collin Myers & Oisín Kyne: Obol Network - Distributed Validator Technology (DVT)
Episode Date: May 27, 2023

Ethereum's merge and transition to a proof-of-stake consensus implied switching from proof-of-work mining to transactions being validated by entities that have an economic stake in the network. Proof-of-stake distributed networks are subject to complex game theory models (e.g. slashing). Distributed validator technology (DVT) enables a more modular validator stack at every level: key pairs, hardware and entities. This reduces trust dependencies and increases security for validator entities.

We were joined by Collin Myers & Oisín Kyne, founders of Obol Network, to discuss Ethereum's 2.0 PoS consensus, the current validator landscape and how distributed validator technology (DVT) allows for increased security and an overall smoother UX for validator entities.

Topics covered in this episode:
Collin's & Oisín's backgrounds
Distributed Validator Technology (DVT) explained
How DVT improves user experience for different staking models
Reducing validator costs and increasing fault tolerance through DVT
Obol middleware approach
Obol v2
Interacting with other middleware from the Ethereum stack
Expanding DVT to other ecosystems
Future validator landscape
How post-Danksharding data availability will be handled by an Obol cluster
L2 Ethereum equivalence

Episode links:
Collin Myers on Twitter
Oisín Kyne on Twitter
Obol Network on Twitter

This episode is hosted by Friederike Ernst. Show notes and listening options: epicenter.tv/497
Transcript
Welcome to Epicenter, the show which talks about the technologies, projects, and people driving decentralization and the blockchain revolution.
I'm Felix and I'm here with Meher Roy.
Today we're speaking with Collin Myers and Oisín Kyne, who are the co-founders of Obol Labs.
Obol is building software to enable distributed operation of validators starting on Ethereum.
Hi, Oisín and hi Collin, and welcome to Epicenter.
Hey, guys. Thanks for having us.
Pleasure to be here.
Yeah, so both of you, we've actually been in touch for, like, a long time. You've both been in the space, and in staking especially, for a while. So, you know, as is customary on Epicenter, we'd love to first get an introduction to how you got into crypto and how you got to where you are today. Maybe Collin, you can start.
Yeah, cool. Thanks for having us again.
I have been watching and observing Epicenter from afar
and always wondered when we'd get our shot.
So yeah, happy it finally came.
Definitely on the longer end of the spectrum based on my expectation set.
But yeah, happy you finally made it happen.
Yeah, so quick background on me.
Actually, non-developer by background, used to work on Wall Street,
worked at a variety of different banks, mostly in like credit and debt,
and got involved with the Ethereum community in 2017.
I was living in New York and, you know, lucky for me, there was ConsenSys and there was
Joe Lubin and there were meetups and there was different stuff happening. And at this point in
my life, like 2017, I was trying to spend a lot of time, you know, trying to work at a hedge fund
or a private equity firm, or actually, honestly, I was just totally lost as to what I was
supposed to do. And there were two things going on at that point in time. One of them was
WeWork and the other one was crypto in New York. And those were like the two main things that
everyone was buzzing about, and I was trying to get a job there, but I also spent most of
my time on Ethereum.
In 2017, I met Joe Lubin at a meetup.
I was inspired to quit my job, so I did.
Then I joined ConsenSys in 2018, and I worked there for about three and a half years.
At ConsenSys, I was able to work on a lot of different technologies, even like space and blockchain
and some crazy stuff like that, but primarily focused on staking.
I started a project called Token Foundry,
which was like an early mover in the space to helping networks launch
and trying to do compliant token launches,
turned that project into a project called Activate,
which was recently sunset by Codefi,
something we actually partnered with Chorus One on in the early days.
And throughout those time periods,
I spent a lot of independent time contributing to ETH2,
was involved in early working groups after DevCon 4,
and I primarily at the beginning focused on the economics of Eth2.
They were not very well understood.
And my job was to help enable the understanding of the economics for your at-home validators,
for your larger-scale validators, your exchanges, people like Coinbase, ConsenSys, and these
different types of actors.
Those initial work streams led to a lot of different enablement projects around the Genesis event.
So the majority of my time was spent around the Genesis event.
and enabling that to take place.
While at ConsenSys, we built the Eth2 Launchpad,
which is the single place where your at-home validators
interact with the deposit contract, and spent quite a bit of time
with the client teams and helped them with user feedback.
And ultimately, all of these research projects culminated
in what today is now DVT.
In 2019, Dr. Reifice and Carl B.
started a research group focused on trust-minimized staking,
and Mara Schmiedt and myself were the first to come in to help enable that effort,
got completely sucked down the rabbit hole on DVT,
thought it was world-changing technology three years ago.
No one had any idea what we were talking about.
It was kind of grossly underfunded as an effort,
and ultimately the only way that we found proper funding to turn DVT into a project
was by Oisín and I both quitting our jobs respectively
and going out and raising capital on our own
and finishing the effort
and putting the group of people together
to make the technology a reality.
So yeah, that's me.
Thanks for having me on.
I'm an Ethereum maxi.
Loving Cosmos these days, though.
But yeah, really excited about it.
And then, yeah, to give my background,
I'm a software developer by background.
And in fact, I'm the fortunate person
who's like a second-generation internet entrepreneur.
My parents have run web companies most of my life.
So, yeah, I would have grown up around kind of original website building and web design in the late 90s, early 2000s,
saw kind of SaaS companies built in the noughties and 2010s.
Out of college, I worked as a consultant and kind of discovered crypto in 2017 and got my first kind of full-time role in 2018 when ConsenSys opened an office in Ireland.
I did two years there in their tokenization department.
We did some projects with tokenized securities for French real estate on mainnet back in 2019,
before it was prohibitively expensive to do on mainnet.
And I left ConsenSys in March 2020 and I incorporated myself as a self-employed kind of contractor.
I got picked up by Blockdaemon to do Eth2 research for them.
Back in 2018 when the original Eth2 specs came out, I wrote an article that was
effectively quite critical of them at the time. It was a 1,500-ether minimum and slashing was
fairly severe. And I was, like, really dubious that this was going to be safe or make sense.
And then, yeah, I kind of kept an eye on it, got involved with Blockdaemon, had been building it
along, and ended up building out their Eth2 stack for them by genesis. And I'd say shortly after
genesis, I was first introduced to the idea of trustless validators by getting an invite to
this trustless validator community call that Collin and Mara were running.
And at the time, I was kind of agonizing over having no ability to run a backup validator or do anything when one failed.
And I was introduced to this technology.
And it's like, oh, it's high availability for validators.
This is definitely going to be a thing.
Everything in Web 2 does this.
And I was kind of an immediate convert.
But yeah, I think Collin realized not very many other people were.
And yeah, it took me maybe a few more months of kind of being involved.
I was trying to make an NFT issuance kind of stack.
I did some work trying to issue, like, rugby
video NFTs with some kind of, like, famous rugby players, that didn't really go to plan.
And then someone once approached me and asked me to make an ERC20 token to represent stake.
And I kind of counter-pitched them and tried to make like NFTs to represent validators.
And then yeah, I wasn't really very good at selling any of this idea.
I had a product, but I had no business acumen.
And I realized that I wasn't really very cut out for the whole CEO role.
So I reached out to one of the only business people I knew in the space.
who was Collin, because a couple of people had been kind of pushing me in his direction.
And I was talking to him about what I was doing.
He showed me what he was working on and what he wanted to do with Obol.
Took me a few weeks to get convinced that it was not too hard to attempt.
And yeah, I think we kind of started around April 2021 together.
And here we are, maybe two odd years later, a bit more.
Awesome, yeah, that's a rich history you both have.
And thanks for going so deep into it.
I think, yeah, we personally also have worked a little bit on high-availability validation, so we're both very excited about this episode and keen to get into it. Right, so I guess maybe we can start there. We already mentioned DVT a bit. Maybe can you explain, you know, what is distributed validator technology?
Yeah, cool. I'll give kind of the more macro perspective as to, like, the difference between a normal validator, and then Oisín can get a bit more into, like, technical architecture and deployment and stuff. So today a validator consists of three
pieces. It is a public-private key pair. It is a machine, and it is an agent. An agent we look at
as an individual or an entity. So it's three things. And all of those are super monolithic.
And that's just the validator stack of today from an Ethereum perspective. What DVT enables is a more
modular validator stack. And now what happens post-DVT is you have key shares, not just one public-private
key pair. You have a collection of key shares, as many as you'd like to create, based on your
trust properties essentially. The next piece is that those key shares then can go on multiple
machines, not just one machine, but multiple machines. And then third and lastly, this validator can
be run by a group of people or a group of entities. So it takes your validator of today,
super monolithic, and it takes it to the more modular version where you can have multiple key shares. It
increases your security. You can have multiple machines. It increases your availability. And you can
have multiple people or entities. Therefore, it increases kind of your game theory or decreases the
chances of Byzantine behavior for that validator. And I'll give it to Oisín and you can go more into
kind of the technical talk. Yeah, I think an Epicenter podcast is maybe one of the only places where I can
talk about the nerdier details of this rather than the high level idea. But yeah, I think one of
the interesting things about it or what, you know, makes these distributed validators more
possible than they might otherwise be is the cryptography that Ethereum 2, I should, we should
probably stop saying the word Eth2, proof-of-stake Ethereum, uses. And that's the BLS signature scheme.
What's fancy about the BLS signature scheme is it's one of the first, like, elliptic curve
signature schemes where if you have, you know, independent signatures all for the same hash, you can
actually like add them together in a low trust environment. You don't need the original private
key to do so. So what happens, as Colin alluded to, is you know, you want to set up a distributed
validator. The first thing you probably do is a distributed key generation, unless you want to just
split a normal key locally, but it's better if you kind of do one with a DKG. Then at the end of that
process, there's, you know, let's call it four machines to keep it easy, each with a piece of the
private key on it. And you have a piece of software in the middle of your validator stack between
your consensus client and your validator client.
And these four nodes connect to one another over like TCP.
And they more or less play a little consensus game to decide on what they're going to sign
every time that there's a validator duty coming up.
They play a little consensus game, pick this is the hash we're going to sign.
And then every validator client is given the exact same, you know, hash to sign.
They all, you know, check it for slashing rules, do all the usual things.
What's nice is everyone gets to keep their own
private key management.
You know, the Obol software is just kind of read-only as a middleware.
It doesn't actually have the power to sign anything.
So all of the independent validators check that nothing's slashable.
Once they're happy, they sign it with their piece of the private key,
and they broadcast it to what looks like their consensus client.
And then in the middle there, our software intercepts them,
shares them around with the other machines.
And in this instance, like a four-node cluster,
once three of them, like three of the signatures, are together,
you can combine them into a full valid signature for the validator,
and you can send that onwards to the network.
And what's neat about that is you don't need all four signatures.
You actually can have fault tolerance.
And this is kind of one of the super important things about high availability is
if you put a validator on four machines,
but you need all four of them to be online,
you're more brittle than you would otherwise be.
But if you put it on four machines and, you know, only need three to be online,
this is where things really start to change.
And you can have high uptime.
You can do rolling restarts.
You can replace hardware that fails without downtime, and all of those good things are kind of
unlocked with distributed validators.
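To make that "any three of four" idea concrete, here is a toy sketch of threshold splitting using Shamir secret sharing over a small prime field. This is only an illustration of the math: the real scheme splits a BLS12-381 validator key via a distributed key generation and combines partial signatures rather than ever reconstructing the key, and the field size, secret, and function names here are made up for the example.

```python
# Toy illustration of a 3-of-4 threshold, NOT the actual BLS threshold scheme:
# real distributed validators never reassemble the private key; they combine
# partial signatures. Field modulus and secret below are arbitrary.
import random

PRIME = 2**61 - 1  # toy field, much smaller than the BLS12-381 curve order

def split_secret(secret: int, threshold: int, num_shares: int):
    """Split `secret` into points on a random polynomial of degree threshold-1."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    evaluate = lambda x: sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, evaluate(x)) for x in range(1, num_shares + 1)]

def recover_secret(points):
    """Lagrange-interpolate the polynomial at x=0 from `threshold` points."""
    secret = 0
    for xi, yi in points:
        num, den = 1, 1
        for xj, _ in points:
            if xj != xi:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

key = 123456789  # stand-in for a validator private key
shares = split_secret(key, threshold=3, num_shares=4)

print(recover_secret(shares[:3]) == key)                         # True: any 3 of 4 work
print(recover_secret([shares[0], shares[2], shares[3]]) == key)  # True
print(recover_secret(shares[:2]) == key)                         # False (overwhelmingly likely)
```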
So from a user perspective, if I'm a staker, generally the options I have today are:
I can either put it into a liquid staking protocol,
Lido being the biggest of them.
I could go to a specialized validator shop like a Blockdaemon or Chorus One or a P2P,
where I could spin up my validators there.
Or, if I'm technically competent,
then I can run my own validator.
I'm actually curious, kind of, about these three kind of segments.
Let's imagine liquid staking as one segment,
the other being the specialized validator
that's like hosting validators for you
if you have 32 ETH,
and the third segment being you want to run your validators at home.
How would these three segments kind of use this technology?
And what difference would it make for them?
Yeah, that's a good question.
So I'll talk a little bit about our adoption path so far.
The earliest adoption of Obol has been at-home validators.
They're the first ones to have put it on main net.
We put our first ones on main net between core team members all running at home.
and now kind of the adoption evolution is now getting into your hosted person and, you know, your liquid staking pool.
Liquid staking pools were kind of the earliest adopters of the idea and supporters of it being in their roadmap.
So I'll start with the at-home validator, which is quite interesting.
So today the at-home validator, for example, like Oisín runs one out of his house, and like we travel a lot today for like purposes of Obol,
and it's always, it's like kind of chronically offline.
It's kind of this thing that you have to worry about
and you can't really have peace of mind being an at-home validator today,
which is something that like DVT can unlock for you, right?
You can have your at-home machine.
You can run backups inside of the cloud in a non-slashable way
so that you can ensure you can kind of live your normal life.
The other side of the at-home validator is the persona type of,
I don't have 32 ETH, but I trust myself to run my own machine.
And there's actually quite a lot of those people.
And they can use Obol to come together to like create their own shared validator
because they may not be interested in taking on the risk of the pool.
And, you know, they may not need a liquid staking token, but they don't have 32 ETH.
And that's kind of this squad staking concept, which we've seen honestly take off quite a bit
in the at home validator category, which is, you know, a whole group of people who's like,
some of my ETH is in a pool.
The rest of it, I just want to run myself,
but I don't have a full validator's worth,
which has been quite interesting.
So, yeah, it's peace of mind,
but then that second user group is actually
where we think most of the tail end adoption for DVT
will be in like 10 to 15 years
is like enabling just groups of people
to get together who run their own machines
and giving each other fault tolerance in like a human to human manner.
So yeah, a lot of early adoption there,
but the middle adoption of DVT won't be there in my opinion.
They're like the early enthusiasts.
They're the ones that help set us up.
Our primary duty is to figure out how to give the at-home validator more tools to get more of them online.
And now we're bridging more into the hosted and like liquid staking pool categories.
Now I'll get into those two different types of users and why it's beneficial for them.
For your liquid staking pool, it is actually the most innovative use case of DVT today,
because most of the pools are using it for its Byzantine properties, which is quite cool.
So the example I like to give is let's just use a hypothetical liquid staking pool.
Today, there's 10 validators inside of this liquid staking pool.
All of them are supplying their own keys to the system, keys that they create, keys that they manage,
and keys that they run on their machines again in a very monolithic environment.
And the reality is that, like, in this hypothetical liquid staking pool, each of those validators would have, like, a certain amount of stake that they're responsible for. And that person can actually self-slash. They can, you know, sabotage. They can act malicious. They can't take the funds, right? But they can shut the machines off and they can, like, bring a penalty to the greater pool, if you will, because most of these pools use socialized economics. So it does mean that, like,
it'll hurt every single participant in the pool if one of these validators defects.
And it doesn't even need to be malicious, right?
It could also be a bad day at the office where, like, you know, all your servers crash and
everything goes down.
Or it could even be catastrophic, right?
Like you have a rogue employee who steals the keys and, you know, there's a variety
of different things that can happen inside of that construct.
So what DVT enables is that that group of 10 people can now just share one validator,
which creates this whole new defect game theory
between the different operators in that cluster.
It gives fault tolerance.
And it makes it essentially,
I hate to use the word impossible,
but it makes it very,
very difficult for any one validator in that group
to do anything malicious or defective to that pool.
So that, again,
it's availability and slashing-protection reasons
and utilizing it for the fact that there's a consensus mechanism
and it prevents Byzantine behavior.
So it's one of its true core primitives.
The other reasons that they're using it is not only for that,
but they're using it as well to decentralize their validator set.
So if you're going to build a liquid staking pool that wants to be very decentralized,
then you're going to have to invite, call it middle to lower,
to sub-tier validators to like decentralize the validator set over time.
and you need to do so in a way that protects the pool, especially if it's an existing pool.
And most of the pools today, like migrating to DVT, are existing pools, and there's lots of money
inside of all of them, right?
So if you are going to let new validators in, you need to let them enter the pool in a manner
that doesn't hurt the pool.
And DVT is a great way to do that because you can just build some shared validators.
You can partner, you know, three highly proficient people with one newcomer.
And then, you know, you have some fault tolerance.
there's some give there if they make an error without really harming the pool.
So yeah, the decentralization of the validator set is also a very interesting one for pools,
and the use of it for its Byzantine properties.
Your hosted service provider is actually someone that we thought would come later in the adoption cycle.
However, they've come in quite quickly.
and it's a factor of the centralized staking product, or the hosted product,
being the first product in the industry.
It's been competitive for a number of years now.
It's like entering its fairly normal maturation cycle of compressed margins and increased costs.
And now those products like need to mature themselves in a way where they decrease their cost
but improve their security. Being a centralized provider, like, requires
SOC 2, SOC 1 compliance; these things help to kind of institutionalize yourself.
And most of these people have to offer some type of off-chain insurance mechanism,
which is quite costly to do or you just have to be pretty well bankrolled.
So for these users, their maturation and growth is meeting DVT kind of in the middle.
And they're experimenting with it to help scale their operations in a way where they can decrease
the amount of machines they need for a certain amount of ETH,
while also increasing the security profile of that setup.
We've seen a lot of interesting things recently
in the insurance and reinsurance topic around DVT.
Offering staking insurance has been difficult historically
because the definition of a bad day at the office
is losing all your ether potentially.
But that was just at the beginning.
And now we're as a community,
DVT being one of those technologies,
building things that alleviate
a worst-case scenario of, like, you know, losing all your ETH.
So insurance providers have been interested in kind of including DVT into their
criteria of providing insurance because it gives them more assurance that, like, you know,
you can't lose all of it.
It's not an absolute loss situation anymore.
So, you know, no insurer is going to insure something that's an absolute loss situation.
And technologies like DVT have now come in and helps, like, alleviate that.
So yeah, those are the three core user groups.
I think the fourth one I'll mention is kind of DeFi and how DeFi has thought about using DVT.
So like in most DeFi projects today, you're like taking an asset and you're wrapping it, or you're taking a collection of assets and you're pooling them together, and you may produce a stablecoin from it, right?
Or you may produce other streams of yield or income off of a collection of tokens.
The collection of tokens that people are using today are LSTs, liquid staking tokens,
and they're doing a variety of different things with them.
Those products, since there's a product built on top of it,
it's better for those products that those liquid staking tokens are as de-risked as possible.
And now people are looking at what is their risk criteria.
And mostly they haven't been looked at.
The only risk criteria that an LST token has been looked at to date is liquidity,
because that was the biggest risk, right?
how liquid is this thing?
What does that look like?
But now it's more or less like, okay,
what are the potential penalization properties of these LSTs?
And under the hood,
what types of security protocols and parameters are they using for it?
So now we've seen DeFi,
which has like its own interpretation of risk,
honestly probably a more mature interpretation of risk
than the validating community.
And now DVT's kind of come into their spectrum over the past,
call it, month or two,
which has been very interesting for them to want to include it in their ecosystem and to learn more about it.
One of the downsides of a DVT-like setup is that it adds extra cost. I mean, what's the other case?
Let's say you have a hosted service provider, and they're running a validator on some cloud, probably Amazon or Google,
and they have an engineer there for uptime
and they're running it on a single machine.
They won't have 100% uptime
but they will get to somewhere higher than 99% uptime
just on that setup.
Now when a DVT setup comes into the picture,
you're running like three or four machines
and usually spread across different parties.
And you're also adding some kind of key setup
and key tear down cost, right?
Like probably the setup of these systems
will need some
extra work. It's one of these
cases where you are adding fault tolerance,
but you're adding cost.
And sometimes the customer
cannot perceive the extra fault tolerance,
meaning that they'll go to some kind of
blog explorer that will say,
hey, what are the returns of the validators
and the validator with 99% uptime
running on one machine,
their return will be X, and then they'll see the DVT setup, and it might be marginally better,
like X plus, you know, 0.1% or 0.05%.
How do you kind of address that issue?
And do you see that issue in practice, and what does it mean for the different parties and who might adopt DVT?
I love this question, because I'd actually push back
a little on it increasing cost.
You're right in that it increases hardware,
but depending on what people have done beforehand,
it doesn't necessarily increase their cost.
And what I'm kind of getting at,
and you kind of talked about it in the cloud,
is originally without distributed validators,
the real problem was that there was more or less
no safe way to run a backup.
The only kind of option you had was do some sort of monitoring game
and pray that your monitoring is, you know,
perfectly reliable,
and that there's not a scenario where you turn on two machines
with the same private key at the same time.
And the problem with not being able to run a backup is it can be very hard to recover from a failure,
particularly if you want to use something like a bare metal machine and like a data center
somewhere that's usually one of the more low-cost ways to run a server.
And the problem is if you do put, you know, particularly a large amount of private keys
on a bare metal server like that and it just goes offline, which is a regular occurrence that
will probably happen at least once or twice a year,
you just kind of have to sit on your hands.
There's more or less nothing safe you can do at that point.
Well, you can try and bring up a backup
and hope to shut down the backup
before the primary comes back,
or you can ring up the data center
and ask them to kind of go and pull the plug out the wall.
But this not being able to run a backup
has changed a lot how people have built their architectures
in the early years.
Most of these centralized,
I say most, that's not quite true.
A lot of centralized providers at least started
running in the cloud, particularly because then if you do have a failure of your machine,
you can kind of detach the disk. You know, it's all in software. You have a lot of power to kind
of actually recover things that you don't have when it's just a server sitting somewhere.
But the problem with that, and you kind of alluded to the fact that they might run one machine,
a lot of people, they'll have at least a few servers in the cloud. Normally they'll have kind of
at least two that run kind of ELs and CLs, which is kind of the meat of it. Then they'll have a
machine with their validator on it and plausibly they'll have some more with a key manager on it as well.
So for a lot of people, like kind of enterprise staking validators, you're talking kind of three to seven
cloud servers, and cloud servers are much more expensive than a bare metal server. Rule of thumb,
think about 10x more. And one of the real kickers is bandwidth costs, which I'm sure you guys are
familiar with, with egress, but that can often take up 30% of the kind of operating
cost of a cloud validator. So you start to pile on egress, multiple cloud machines and so on,
and it really starts to end up kind of costly. And that's kind of one way to have, you know,
high uptime without distributed validators is to kind of put it all in the cloud, have a bit of failover
and do things that way. The other option that some routes go down is they kind of do the hardware
route where they'll buy a particular server that has dual CPU sockets, so they have two CPUs,
they have a RAID array of disks.
So if one disc fails, no problem.
They'll have dual networking cards.
So, you know, if there's problems there, they'll survive, maybe dual PSUs.
But these PCs cost in the tens of thousands of dollars, and they're not particularly cheap.
So either, you know, trying to get kind of fault tolerance by using cloud services or fault
tolerance by using really expensive hardware, both of these are, you know, not as cheap as you
expect.
A lot of, you know, some of these stacks, if you're running seven machines in the cloud, you
can end up spending a couple thousand, maybe three, four thousand a month for something like that.
And if you then have distributed validators and you can have private keys like split into groups,
it opens up the ability to more safely put validators on bare metal machines because if one
of your four machines dies, no big problem. It'll stay online. You can kind of wait for it to
come back. If it doesn't come back, you can bring up another partial on another node. It's not going
to cause a slashing. It doesn't have a full key on its own. All
of these machines are doing consensus beforehand. So kind of when we talk to a lot of these centralized
providers that are like, how could, you know, distributed validators possibly not increase my cost,
We tell them that, you know, with software fault tolerance, you can move away from like cloud-based
solutions. You can move away from having like extremely, you know, fancy hardware-based solutions
to have high uptime. And instead you can move towards commodity bare metal and use just fault
tolerance to do it. So instead of having, you know, four to seven 400-bucks-a-month nodes that
run in the cloud, you can have, you know, $100 a month bare metal machines and your full stack
can cost 400 a month instead of 4K a month. And yeah, the last kicker of that is can you do it at
scale with high load and we're, you know, running a lot of good tests in that regard, seeing if we
can do this at like, you know, thousands of keys as well to improve margins. But yeah, naively, in the
scenario, yes, you're running more than one node. But bar the home
staker, most enterprises, most, you know, operators within the LSPs, they're not running a
single machine unless they're, you know, doing something with very fancy hardware. They're
probably running at least a few machines. So distributed validators and having fault tolerance
allows them to kind of, yeah, use cheaper servers, less, you know, they're not over-provisioned
them as much. And we're reasonably comfortable that, bar, you know, the people that are already running on
bare metal who have a very low cost basis, there's a very large amount of validator operators that we speak to
where DVT actually will let them reduce their cost, even if they are running more hardware.
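As a quick back-of-the-envelope check on that argument, here is the arithmetic using the roughly 99% single-node uptime and the $400-a-month cloud versus $100-a-month bare metal figures mentioned in this conversation; failures are assumed independent, which real correlated outages won't be.

```python
# Rough availability and cost arithmetic for a 3-of-4 distributed validator,
# using illustrative figures from the discussion; failures assumed independent.
from math import comb

def cluster_uptime(node_uptime: float, n: int, threshold: int) -> float:
    """Probability that at least `threshold` of `n` independent nodes are online."""
    return sum(comb(n, k) * node_uptime**k * (1 - node_uptime)**(n - k)
               for k in range(threshold, n + 1))

single_node = 0.99
dv_cluster = cluster_uptime(0.99, n=4, threshold=3)
print(f"single node:    {single_node:.4%}")   # 99.0000%
print(f"3-of-4 cluster: {dv_cluster:.4%}")    # ~99.94%

cloud_stack = 7 * 400   # seven ~$400/month cloud servers (high end of the range)
dv_stack    = 4 * 100   # four ~$100/month commodity bare-metal servers
print(f"cloud ~${cloud_stack}/month vs. bare-metal DV ~${dv_stack}/month")
```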
I think your question as well kind of presents the sell of DVT, you know, today.
Because look, at the end of the day, it's a security protocol.
It's not a yield protocol, which is like what everyone wants everything to be, right?
Everyone wants to hear like, I'm not using your thing unless it makes me more money, right?
And like for us, the fact of the matter is is that it's a security protocol today.
And maybe there's ways that it can get a validator to become more profitable.
But it's not this lottery mechanism that you download onto your node.
And then one day you see 200 each sitting in your account.
It's not one of those things.
So getting people to adopt it, yes, has taken quite some time.
But there's a reason for that because it is a security protocol.
So it's supposed to give you other things that benefit you in the event that there's an increased cost for it.
However, we think, to Oisín's points, that it's not only a benefit to them on the security front,
but it will actually probably save them money on a relative basis today.
And then in the future time period, it's up to us to kind of standardize DVT as a configuration.
And what we like to use internally is that we've been calling this like the Obol-inside strategy, like similar to Intel, right?
Like what Intel was able to do is like basically standardize
the inclusion of their chip into like everything.
And then the machines and the software and everyone else were the competitive market.
We think getting like Obol on the inside of these things is far more where it sits, because it's a security protocol.
So like how do you position a security protocol
at a time, one, during a bear market, and two, in an industry where, like, you know,
everyone's looking for yield.
Right.
Yeah, amazing.
I think, yeah, we talked about a little bit now the sort of cost from the operator's side.
I guess there's also Obol as this middleware that obviously you guys are developing.
You're spending a lot of resources on developing this and you also raised money.
So there's also like an expectation, obviously, of this having some sort of business model.
Maybe, whatever you can share in terms of, like, how do you imagine sort of Obol to be adopted? Is it, like, the operators that would pay to run Obol? Are there, like, some other models that you've thought of to utilize? Or, yeah...
Yes, yeah, it's a good question. So I'll tell like a little story here. I've been fortunate enough to observe, like, how the Eth1 and Eth2 client
teams like were developed and hardened and staffed and funded. And that's really kind of like
the core of like how middlewares and Obol and others, you know, what types of monetary streams
they'll be able to like take on and work with. So like the early foundation of Ethereum was,
you know, there's five client teams, I think maybe six actually, on the PoS side. And then
I think maybe three or four on the Eth1 side. But anyways, these are all kind of,
you know, privately bankrolled now.
They're well funded.
It took years to get to this.
They were funded by the EF and other, like, you know, large donors.
And that software is free forever, right?
So it's virally free.
It's the primary network access to the Ethereum network.
It has basic functionality, but it's going to be free forever.
And now, like, you know, Danny Ryan uses the words like ossification and crystallization.
And really what that means is that, like, the
amount of innovation that takes place at that core client level, and funding, is going to lessen
and lessen over time. And new innovative space needs to be opened elsewhere. And now the new innovative
space that's being opened elsewhere is being enabled in this middleware layer. So now we have
this whole new middleware market where there'll be a variety of different protocols who are
coming to add complementary software to the core Ethereum stack.
And that middleware layer today has private funding, but tomorrow cannot have private funding
forever, right?
There's like, there has to be a way for these middleware protocols to maintain themselves,
but also continue the ethos of giving back and having regenerative economics.
So today we think that Optimism has taken that responsibility,
it is a responsibility at the end of the day,
if you're building at this level,
is to try to figure out how you can make circular economics of some sort.
So the way that Optimism has built their network
and started their ecosystem is having a fee
which comes in, that is then donated retroactively as a community
to other projects that support the core Ethereum staking stack.
So this is people like Protocol Guild.
These are people like EthStaker.
This is Gitcoin.
There's a variety of,
right, these projects that require funding to maintain themselves on a different level while being
open source. And Obol and DVT sit at this new, low-enough layer that, like, we believe it's our
responsibility to utilize retroactive public goods in some sort of way as the primary way that we
start the economic machine. Where it goes after that, you know, is to be determined.
But there's a way for us to kind of use that economic model with every validator
type, which is also the important thing. That economic model works with your at-home validator.
The economic model works with your hosted person. It also works with your liquid staking pool and whatever
other future topic or use case comes about. So, yeah, today we're most focused on
what is the version of retroactive public goods that works for Obol. Right now, we're learning and
figuring this out through this mainnet adoption time period. You know, Obol is totally free.
Today, DVT is completely free. You can go and, you know, play with it, do
whatever you want.
But like tomorrow, how do we bridge into like a circular economic system?
And that's our biggest focus today.
I'm actually curious if you think the ideal place to house this kind of middleware stack,
I mean, the specifications of it might be actually something like the Ethereum Foundation
itself.
And then the client teams actually implement this middleware
as part of their code bases?
Yeah, so this is Obol v2.
Today we are on our roadmap to V1.
At the moment, we're about 60, 70% to V1,
but we've been working on a parallel workstream,
which is Obol V2.
And Obol V2 further protocolizes DVT
by turning it into a specification.
So we've actually partnered with Nethermind
for them to be the second core development team for the Obol Network,
and they will be leading and partnering with the Obol Labs
current core development team for the Obol Network
to work on the research and specification of the Obol V2 protocol.
And we are pushing towards a spec-driven environment
where we hope to incentivize a variety of different implementations,
right, for different people's use cases.
And the reason that we're also doing that is kind of the prior story
that I told, which is you need to be as close to the base layer from a variety of different
perspectives as possible. One of them is roadmap. You have to keep your roadmap on track. You can't get
in front of the mothership's roadmap. You've got to kind of hang out in this Goldilocks time period.
And yeah, those are the most important things that you have to follow and kind of stay on top of.
Yeah, there's one extra thing I want to add actually about the middleware side. As you said,
like why don't the client teams implement it, you know, directly?
This is something we looked at ourselves for quite a while.
And it's actually why I touched on BLS signatures being aggregated, being so important.
And I'll talk about it kind of by way of talking about MEV-Boost and, before that, even MEV-geth.
And so for those of you that aren't familiar, MEV-Boost is now the product, like, run by Flashbots that allows, you know, people that want to get, you know, inclusion into blocks without
getting front-run to kind of talk to validators through the software. But before it was called
MEV-Boost, it was called MEV-geth, and it was just a slight modification to the original Geth codebase.
And they were, like, highly successful in their rollout; more or less all of the Ethereum miners
back in the day were running MEV-geth, to the point that people were concerned it was north of 90%
of clients. And the Ethereum Foundation were kind of concerned about this. They were like,
okay, if there's, you know, anything goes wrong with the software, you know, almost everyone is running it.
So when it came time to move towards Eth2 and reinvent it to figure out how it works in this new paradigm with validators,
one of the best changes they made was to re-architect MEV-geth: instead of it being like a forked client,
it's an optional middleware, or more accurately a sidecar, that all of the clients can add.
And rather than it being, you know, a feature that only one specific client has and the others don't,
it was something that they could all opt into.
And this, you know, massively de-risked and allowed it to be more accessible and didn't, you know,
prevent client diversity in any way.
And we had kind of a similar, like, issue when we were developing distributed validators as well.
The easiest kind of way to make an MVP is to make a standalone validator client that has, you know,
arbitrary power to sign.
You can build your distributed system to go and run validators this way.
Whereas, yeah, the kind of problem.
here is either you have one client that's super successful and it works or you have a spec
that everyone has to implement. But if you do that, you kind of really slow your roadmap. You kind of
have to get everyone to march along at the same time and you really don't have much optionality.
If a new, better version comes along, people can't switch to it very easily. So in the interest
of not causing harm while trying to do good when it comes to building distributed validators,
we realized that we could also build distributed validators as a middleware because of BLS signatures being aggregatable.
So right now, all of the existing validator clients, they can add our software into their stack and become a distributed validator more or less with no changes.
And this is super beneficial because it means, you know, you're not tied to one client or, you know, replacing them all,
or if there is, you know, a better spec that comes out, or, you know,
Obol, you know, goes to zero in the morning,
a new one can come along.
All the clients can just put in a new middleware.
And it's much more modular and it's much more like fault tolerant in that regards
that, you know, if something goes wrong, you know, no big deal, people can kind of pivot.
And this, you know, design idea is why we kind of went towards the middleware route.
And it's, you know, something that's kind of served us well in that regard.
But the last leg of it is, you know, just one implementation of the middleware is, you know,
sufficient from a safety perspective, but there could still be a liveness risk if, you know,
the thing has a bug and it goes offline. That could be a problem. So the kind of further leg of the journey
is, after you build it once, get some adoption, prove that it works, then it starts to become more
sensible to protocolize it, make a spec, have a couple implementations. But even if there is just one,
it is still modular and replaceable, and it's this, you know, nice thing that if a better one comes along,
no big deal, you know, it can be swapped out. So,
I think that is kind of an important design decision versus why isn't this, you know, a spec that all of the clients implemented because you can iterate faster, you can try more things, you can have more optionality.
And that's kind of the way we've designed the software so far.
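A very rough sketch of that middleware idea, with all class and method names invented for illustration (the real implementation, Obol's charon client, speaks the standard beacon-node API so unmodified validator clients can sit on top of it); the point is just that this layer only collects and combines signature shares and never holds a full signing key.

```python
# Hypothetical skeleton of a DV middleware sitting between a validator client
# and a consensus client. It only aggregates partial signatures once a
# threshold of cluster peers have signed the same duty.
from dataclasses import dataclass, field

@dataclass
class PartialSig:
    node_id: int
    duty_id: str      # e.g. "attestation:slot=6412345"
    share: bytes      # one node's BLS signature share in the real system

@dataclass
class DvMiddleware:
    threshold: int                               # e.g. 3 in a 4-node cluster
    pending: dict = field(default_factory=dict)  # duty_id -> {node_id: share}

    def on_partial_sig(self, sig: PartialSig) -> None:
        shares = self.pending.setdefault(sig.duty_id, {})
        shares[sig.node_id] = sig.share
        if len(shares) >= self.threshold:
            self.submit(sig.duty_id, self.aggregate(list(shares.values())))

    def aggregate(self, shares: list) -> bytes:
        # Stub: real code combines BLS shares into one valid signature.
        return b"|".join(shares)

    def submit(self, duty_id: str, signature: bytes) -> None:
        # Stub: real code sends the combined signature on to the beacon node.
        print(f"submitting combined signature for {duty_id}")

mw = DvMiddleware(threshold=3)
for node in (1, 2, 3):
    mw.on_partial_sig(PartialSig(node, "attestation:slot=6412345", f"sig{node}".encode()))
```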
Okay, cool.
I guess we also wanted to sort of talk about how Obol interacts with other middlewares in the Ethereum stack.
So you already mentioned like MEV-Boost here.
I guess that might be a good place to start
where like in my imagination
right, like I guess now we run this Obol cluster,
like all of these
nodes in the cluster
also run MEV-Boost,
do they have to make basically consensus
in terms of like what sort of block
you accept there or can you talk a little bit
about how this interaction works?
Yeah
so it's relatively straightforward
because MEV-Boost
talks to the consensus clients
whereas we're almost a little
layer lower down, talking to kind of validator clients. And you ask a good question about do you
have to kind of come to consensus on what's provided. MEV-Boost is a bit different to, you know,
a normal block proposal, in that the fear is that if you have, you know, a block that's extracting
a load of MEV, that's, you know, yeah, taking like an outsized reward. The searcher doesn't want to
show the proposer exactly the full block, because then the proposer would be like, oh,
thanks for the alpha.
I'm just going to rewrite this to send it to my address and I'm just going to propose it.
And this was actually one of the reasons around the kind of redesign of MEV-Boost
when it moved towards proof of stake: if we want this to be available to every proposer,
it needed to be low trust, because, you know, in the Eth1 world, there was only a handful of miners,
so you could kind of trust them.
Whereas in, you know, ETH2, the hope is that there's a huge amount of validators.
So this does need to be low trust.
So what actually happens is the relay in the situation is actually the kind of trusted party.
They have the full block.
They know, you know, what it is.
And they promise, you know, not to rug or, like, undermine the searchers and, you know, screw with them.
So they just provide a hash, like, or like the header of a block to the actual distributed validator.
So at that point, yes,
the nodes do come to consensus on, you know, which one to pick, but there is, you know,
there's not too much that can go wrong. There's not a full block there. They can't reorder
transactions. They can just say, hey, you know, this is a block header. It's going to pay us this
reward. Are we cool with it? And everyone says, yep, cool with it, signs it and sends it back to the
relay who then appends the real block and sends it onwards. So yeah, that would be how the
MEV-Boost kind of integration works. Do we want to talk about some of the other ones, maybe?
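Sketching that blinded-block flow in rough terms (the relay names, hashes, and rewards below are invented; the real interaction runs over the MEV-Boost builder API, and every node also re-checks its slashing rules before signing):

```python
# Illustrative only: the cluster sees block *headers* plus promised rewards,
# agrees on one, signs it with key shares, and the relay attaches the body.
from dataclasses import dataclass

@dataclass(frozen=True)
class BlindedHeader:
    relay: str
    header_hash: str
    reward_wei: int

def pick_header(offers: list[BlindedHeader]) -> BlindedHeader:
    """Stand-in for the cluster's consensus round: take the best-paying offer
    that every node has independently checked against its slashing rules."""
    return max(offers, key=lambda o: o.reward_wei)

offers = [
    BlindedHeader("relay-a", "0xaaaa...", reward_wei=41_000_000_000_000_000),
    BlindedHeader("relay-b", "0xbbbb...", reward_wei=39_000_000_000_000_000),
]
chosen = pick_header(offers)
# Each node signs the same header hash with its key share; once 3 of 4 partial
# signatures are combined, the result goes back to the relay, which only then
# reveals the full block body and broadcasts it.
print(f"cluster agreed on {chosen.header_hash} from {chosen.relay}")
```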
Yeah, I guess the other thing we want to talk about here is also like sort of a trend that's kind of coming up in Ethereum, right?
Like restaking and EigenLayer, like sort of reutilizing your collateral of the Ethereum validators to offer additional services, like let's say maybe an oracle or whatnot.
And I guess that's also, like, first of all, it's very hyped, I guess, but also seems to interact with Obol, on the front
that, you know, yeah, how could restaking services be offered through Obol?
I guess that would be something that I'm personally actually very interested in.
For sure.
Yeah, it's a great question and something we're chatting quite a lot about at the moment these days.
So, we're talking for the most part about a project called EigenLayer,
who has introduced this idea of restaking.
And as far as how Obol fits into the equation, there's kind of two ways
that we fit in. First as a staker, which is, you know, the kind of crux; these are the people
that are, you know, pointing the withdrawal address of their validator at a, you know, an
EigenLayer smart contract and opting in to be the restakers. Like, yes, you know, we will provide
economic security for some extra, I think, additional validated services is what they're calling
that other thing. And from a staking perspective, we think distributed validators
and Obol are super important, because if you are trying to, you know, sell your restaking solution
to these other services that are, you know, buying economic security from you, you want to be
extremely sure that the stake underneath you is safe and secure because if something goes
wrong and they all get slashed, your economic security kind of disappears.
Technically, you know, they'll be able to, you know, charge the person that gets slashed and
they'll get penalized for it.
But if you're, you know, maybe running an Oracle or something and depending on this, if there's a mass slashing under you, all the validators get exited.
So your economic security just kind of disappears on you all of a sudden.
And that's, you know, something that you can't really, or you really don't want to, happen. If you're, you know, an additional validating service looking for, you know, economic security, you want to be sure that the validators beneath this are rock solid.
So this is where distributed validators really kind of add benefit for the kind of restaking paradigm.
It's like, yes, you know, this stake is run by groups of people.
The odds of it being slashed are quite low.
The odds of it going offline are quite low.
And that kind of gives you a more firm base to actually provide guarantees to extra services.
And then, you know, going a little further, Obol itself could reward and penalize people within a distributed validator
by using this kind of restaking and economic security,
and this is the kind of further version of, you know,
in the near term Obol can be a staker for something like EigenLayer,
but in the longer term, it can also be an additional validated service
that's, you know, being the one paying for economic security,
or for restaking, to keep its, you know, protocol running.
So, yeah, we're, you know, quite bullish on the idea of having distributed validators
be a safe base to restake on.
Yeah, that's really cool.
I personally was thinking, you know, also since, you know, I guess there could be a lot of additional services that are offered through EigenLayer, right?
And maybe for a single node operator, it can be hard to run so many additional services.
So I guess, do you think it's possible that, like, for example, an Obol cluster sort of shares these responsibilities?
Like one guy runs the Oracle, the other does 100%.
Right.
Yeah.
in the near term, there'd be a bit of trust involved.
If you're, you know, saying, hey, we will put our withdrawal contract, you know, towards
EigenLayer.
We'll trust Meher is going to run that Oracle for us.
He's not going to get us slashed.
You can do that absolutely.
And, you know, we could all be taking the risk of, you know, doing that extra additional
voluntary service each.
But going even further, we're working on the cryptography to make kind of proofs of, you know,
fair participation more easily provable.
And the sense is that, like, the longer-term goal is that, you know, one piece of this distributed validator could opt into this service for everyone, and if they do screw up, they're the one that takes the hit or eats, you know, most of the blame, rather than the whole group sharing in the, like, slashing if, you know, one person who they trusted to do some extra service for them doesn't deliver. But yes, you absolutely can have, you know, one person in your cluster doing all of these extra validating services for you.
Yeah, maybe this EigenLayer ecosystem will be so large that specialization of labor
will emerge, meaning some validators do some things and other validators do other things.
And if that kind of specialization of labor emerges, then Obol is kind of perfectly fit for
exploiting that specialization. But it's still early days, right? Like there's not a single
service that's actually running on this
restaking
model in production.
I think they announced their
no, not in production. They announced their spec and stuff
I think this week so shout out to them
for that. So yeah, they're working away in it.
We're keeping an eye.
Is the Obol concept only applicable
to Ethereum because of its
special staking model
or is it also applicable
to other chains?
Could you expand to
other chains and deliver utility?
Yeah, it's a good question.
Look, we've been thinking about DVT for Ethereum validators for three and a half years at this
point.
So I made it a pretty large effort at the end of last year to be like, hey, guys, it's time.
We have to go look at where else we can go, right?
Like this technology most likely will benefit other ecosystems or other layers of Ethereum,
for example.
So we went on this effort to kind of go find problems.
recently. So we spent the entire fourth quarter and a good portion of the first quarter on a
Cosmos effort, which the team at Chorus One was very helpful with, and a lot of the other
Cosmos ecosystem as well. We've also, over time when we did like our first and second fundraising
round, kind of had this emphasis on like, what is the second ecosystem that we think there's
smart people working in that have the right mission and like the right vision? And that was Cosmos investors.
So the Obol ecosystem itself actually already has a pretty large Cosmos presence in it today.
So we decided to double down on that fact because we had the best access to information
and we figured we could learn as quickly as possible, succeed quickly, fail quickly, kind of one of
those mindsets. So we went into Cosmos and we started looking for problems. We spoke to the validators.
We spoke to the liquid staking projects. We spoke to the ICF. We spoke to the Builders program. We kind
of did this whole loop of going through the whole cosmos ecosystem and speaking to people and looking
for problems. And we were able to identify, you know, a couple structural problems as to how
DBT could be useful for smaller validators, for example, in Cosmos to team up to get more
delegation to enter the active set. There's kind of like this larger portion of Cosmos validators
that are like chronically unprofitable, even like some of the largest, you know, Cosmos validators
aren't necessarily that profitable today either. So for those
smaller to middle-tier ones, there's really no game for them, unless they could maybe team
up together, right, and attract delegation through, you know, means of using DVT.
So, yeah, we found some good problems in Cosmos.
We published a blog post on it recently.
You know, there's coordination difficulties to making it a reality.
There's protocol difficulties.
There's cryptography differences.
For example, Cosmos does not use BLS signatures.
But we also learned through that effort that if you wanted to get it done, you could.
And there's kind of like existing versions in the cosmos ecosystems that are about, you know, a third of what DVT is.
But there would be a lot of build inside a cosmos that would be necessary.
One of the most interesting things that we found about Cosmos was actually the kind of social threshold of the value of DVT.
And inside of Cosmos, what we realized is that there wasn't enough value at risk for people to
think that it was something that shouldn't just be, you know, an open source primitive. They
didn't believe that it needed to be a network with all these, you know, tools and education
and funding and ecosystem around it. But our take from that was that the Cosmos ecosystem is
almost mature enough to have enough value at risk for everyone to say like, hey, we think this is a
good technology to adopt into our community. So for us, we believe that it's a little bit
early for DVT in Cosmos, but we are still actively working on implementation-based research now.
So we like stated the problems and now what we want to present to everyone, it's kind of like,
here's how it could be implemented.
So yeah, that's like one area of alternative layer ones outside of Ethereum, where our communities
aligned and we're experimenting to see if the Cosmos community values it.
And then we'll, you know, collaborate with it.
The other area, which is actually super exciting: Figment just put out a piece
yesterday around distributed sequencer technology, a new term that we're trying to coin.
And this has been the other research area for us at Obol, is just going up the stack and looking
at your other actors, right? So today we've spent all this time on the validator. Well, let's go up
to the block builder. Let's see if there's like centralization problems around block building.
People talk about MPC block building, right? All these different constructs. And then we've
also been looking at the sequencer and, like, the prover, the verifier, all of these new actors that are
becoming more and more important in the core Ethereum infrastructure stack and looking at their
adoption cycles and saying like, is there a need or is there a reuse case for the work that
we've done for distributed validators and all the stuff that we've learned from an external
project building it, not from a foundation perspective actually, which is the reality of where
layer twos are. They're a bit more mature in that regard. We sit outside the foundation;
we do our own things and we do whatever.
So collaborating with those new groups of people has been quite an interesting experience for us.
So, yeah, we're actively looking at two different growth strategies for DVT to see if it can be helpful elsewhere.
One of them is horizontal.
That's Cosmos.
One of them is vertical.
That's going into the sequencer topic and seeing if our technology could be helpful there.
Yeah, very interesting.
I guess I would also just like to hear your thoughts.
You're in the end very close to both the validator ecosystem and sort of the other protocols.
And I would just like to hear how you currently think of the validator ecosystem and how you see it evolving.
I mean, obviously one of the core ideas of Ethereum is that there are these home stakers.
Do you think there are many of those, and will this grow?
Obviously Obol potentially might help increase that, but I guess just generally, what's your view of the space?
Yeah, I've been thoroughly surprised at the number of, like, hybrid at-home validators, I would call them, right?
They're basically small groups of at-home validators who are starting small companies together.
There are tens, twenties of them, and they're growing in number, right?
We kind of call them the tail-end validator segment.
And that's probably growing the most, right?
We're seeing a variety of new and small validator entities pop up.
We're starting to see people from other ecosystems come to Ethereum.
And by means of coming to Ethereum, they're doing so through DVT as actually their first knowledge base,
which is also like a very interesting thing.
So now we're onboarding validators from other ecosystems, not into the core configuration,
but into the DVT-based configuration from the very beginning.
But probably some of the most bullish, honestly, and most exciting conversations we have with people are the ones like, hey, me and my three friends just got together, we started a company, we're looking to, you know, get involved as a validator in Lido, we're looking to try to qualify. There's a variety of small and mid-tier validators trying to get voted into Lido, and to increase their chances, they're spending a lot of time with Obol and DVT to make sure they're proficient and educated on it.
So honestly, there's a lot of really good momentum at the smaller end of the validator set around embracing DVT and using it to advance themselves so they can become more professional.
And this has surprised me.
This is not, you know, a segment that I thought would be such a core user and pusher of the protocol.
But I think it's really a sign of the ethos of Ethereum.
It's kind of crazy to see actually, right?
That group of people feels empowered to go build companies.
So that's what they're doing.
And then they're using Obol as a further empowering mechanism to achieve those goals.
And it's quite fascinating.
So in the Ethereum roadmap, we have this Danksharding concept that's coming in,
where essentially these validators will end up becoming responsible for data availability.
So if you have like a single centralised validator,
it's kind of probably easy to understand.
We have to store some kind of data on our side.
But how does it interact with an Obol cluster
where there might be three different validators?
Does the data now need to be replicated across all three?
Good, really niche technical question, Meher.
So, Danksharding is coming in two phases: the first being proto-danksharding and the second being full Danksharding.
In proto-danksharding, they don't yet go to the extreme where nodes stop keeping and gossiping all of the data availability; it's still, you know, everyone sends everything everywhere. Eventually, with full Danksharding, you're signing data availability witnesses to prove that, yes, I did see this data, it was available.
Over the longer run there start to be, you know, trust assumptions between the four nodes. Trust is maybe too strong a word, but they have to decide: did we all see this, or did at least one of us see this bit of data, this blob, so that we can include it in our block and sign off on it? But in the near term, in proto-danksharding, when it first comes in, it's more or less best-effort data availability. It's done at the consensus layer, and you won't be slashed for it, to my understanding. Or at least they're not going to the scenario where not all nodes store everything, which is the first way it comes in.
So, yeah, when it comes in at proto-danksharding, it's not currently a problem. When it goes to full Danksharding, the nodes do have to say, have we really seen this? And we have to kind of prove it, or, as you said, if you don't trust one another, have everyone see the data availability before we sign off, if you want to be more cautious rather than more trusting.
But yeah, in the near term, it shouldn't be a problem.
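To make that "did we all see it" decision concrete, here is a toy sketch in Go of the two policies just described for a four-node cluster: a cautious "all of us must have seen the blob" rule versus a more trusting quorum rule. The function name, the 2/3 threshold, and the boolean reports are assumptions for illustration only, not Charon's actual consensus logic.

```go
// Toy policy check: should a DV cluster sign a data-availability witness
// for a blob, given each node's local report of whether it saw the data?
package main

import "fmt"

// clusterSignsWitness returns whether the cluster should sign.
// sawBlob[i] is node i's answer to "did I see this blob myself?".
func clusterSignsWitness(sawBlob []bool, cautious bool) bool {
	seen := 0
	for _, s := range sawBlob {
		if s {
			seen++
		}
	}
	if cautious {
		// Trust-minimized: every node must have seen the data itself.
		return seen == len(sawBlob)
	}
	// More trusting: sign if a 2/3 quorum saw it and can serve it to
	// the rest of the cluster.
	return 3*seen >= 2*len(sawBlob)
}

func main() {
	reports := []bool{true, true, true, false} // 4-node cluster, one miss
	fmt.Println(clusterSignsWitness(reports, false)) // true: quorum saw it
	fmt.Println(clusterSignsWitness(reports, true))  // false: be cautious
}
```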
And then also, with proposer-builder separation, a lot of the complexity of Danksharding is on the builder rather than the proposer, so it kind of keeps the proposers relatively simple. They just have to propose the really fancy block that a builder made for them.
Maybe I'll end with kind of one of the conceptual doubts I have with this entire Ethereum L2 roadmap, which is that sometimes I feel that we, meaning the validators, and you, all of these middleware builders, have spent so much time and effort building various products to make the Ethereum validation system work and be decentralized. So many thought cycles have been spent here. And now, when it comes to the question of scaling Ethereum, most of our effort kind of feels useless, because it's going to be scaled through the L2s. Now these L2s are going to be running sequencers, and that's a completely different kind of validation stack that's being built. And now, for example, Obol, instead of reusing your work for Ethereum directly to scale Ethereum in L2, you have to go and think of this new concept of a distributed sequencer and dive into it anew, right? New software, same for us, etc. Do you in some sense think all our efforts are, not being wasted, but being underutilized by the Ethereum ecosystem?
You're throwing all my favorite questions, Meher, because I actually would suggest that we don't have to throw stuff away and build new software. And this is the idea of Ethereum equivalence that I think was kind of first coined, again, by Optimism. They kind of astutely realized that if they stay as close to the Ethereum execution architecture as possible, it's easier, you know, to gain adoption and network effects and be able to reuse all of the existing L1 stack. For example, their very first kind of MVP, we'll call it, was, you know, a modified Geth, not unlike MEV-Geth to some extent. And then recently, or just in the coming weeks, they're moving over to Optimism Bedrock, and the main difference there is they've abstracted all of their code for the Optimism fraud proof game into what they're calling a consensus client, and they talk only through the Engine API, the standard one that all of the execution clients use. And adopting this kind of standardized API has allowed them to go from having, you know, OP-Geth to also having OP-Erigon, and ultimately probably all of the existing L1 execution clients in the Optimism world. And we recently announced a blog with Figment just yesterday on distributed sequencers,
and the crux of the argument is exactly, yeah, L2s have already kind of competed on Ethereum
equivalence at the execution layer. The next step in our head is to add an Ethereum equivalent
beacon API to their code basis. So adopt BLS signatures, adopt that API, dumb it down a little. It doesn't
need attestations because it's not a proof of stake game, but just have, you know,
proposals using that standardized API. And the benefit of that is you do get to reuse more
less everything from the L1, particularly from the L1 staking side. The ones probably most important
is the private key management side of things. So, you know, private keys are always the most
important thing. So if you, if let's say optimism for simplicity, they add a beacon API to their
L2 like OP node stack, they can reuse Web3 signer, they can reuse Dirk. And, you know, the benefit
for Oval is they can reuse distributed validators because it uses the same API and they can more or less
use it all out of the box. So yeah, I hear where you're coming from being like, should L2 really invent
the wheel? And I agree with you, they probably shouldn't. They should, you know, copy the L1 APIs as much
as possible, and then they get to kind of tap into the network effects of code that's already
built and already been used. So, yeah, our pitch to them is you don't have to do all of the
decentralizing the sequencer work yourselves. You can keep it a little simpler, have it be kind of
a round-robin thing with BLS signatures, but then use distributed validators to actually go from, you know,
10 entities in round-robin to actually 10 distributed sequencers in round-robin type of setup.
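As a rough sketch of what that round-robin-plus-distributed-validators setup could look like, the snippet below picks a proposing entity for each L2 block and checks whether enough of that entity's operators are online to reach its signing threshold. The entity names, the 5-of-7 threshold, and the scheduling rule are illustrative assumptions, not any L2's actual design.

```go
// Illustrative round-robin sequencer schedule where each "entity" is
// itself a distributed cluster that can only act once a threshold of its
// operators would contribute partial signatures.
package main

import "fmt"

type cluster struct {
	name      string
	operators int
	threshold int // e.g. 5 of 7 operators must sign
}

// proposerFor picks which entity proposes a given L2 block, round-robin.
func proposerFor(blockNum uint64, entities []cluster) cluster {
	return entities[blockNum%uint64(len(entities))]
}

// canPropose reports whether enough operators are online to reach the
// cluster's signing threshold.
func (c cluster) canPropose(online int) bool {
	return online >= c.threshold
}

func main() {
	entities := []cluster{
		{"alpha", 7, 5},
		{"bravo", 7, 5},
		{"charlie", 7, 5},
	}
	p := proposerFor(42, entities)
	fmt.Printf("block 42 proposer: %s, can propose with 5 online: %v\n",
		p.name, p.canPropose(5))
}
```

With three such entities of seven operators each, roughly 21 machines sit behind the sequencer role, which lines up with the "25 or 35 people running the infrastructure" figure discussed just below.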
So, yeah, I agree that, you know, L2s, I think, generally will adopt this. They kind of see the network effects: trying to build something new and convince everyone to come use your stuff is hard; staying aligned with the existing APIs is more work for them, but, you know, more network effects and easier adoption for their customers and for everyone else.
Right. So actually, this then seems very powerful for the Obol Network, that you can effectively go to an L2 that has, let's say, a centralized sequencer, and maybe your product is simply that instead of this being a centralized sequencer, it's distributed across five parties or six parties.
And then if they can work out how to go from one centralized sequencer to five, and each of them now is an Obol cluster with like five or seven operators, then you can easily get to 25 or 35 people running the infrastructure.
Exactly.
Right? So all of your work done for Ethereum kind of scales, and is probably more useful on the L2 layer than maybe even on Ethereum mainnet. Like, that could be a future for, that could happen for, the Obol project.
Yeah, that was actually something I also pitched, where, like, the L1 normally runs on very small machines and there are technically, you know, 500,000 validators, but sequencers will often be running very large machines. So, you know, five very large machines running in a distributed sequencer setup make a lot of sense rather than, you know, thousands of small machines. The L2s mostly won't optimize for running on consumer hardware.
They're meant to be a scaling solution,
but they do want to be as trust
minimized as possible. So, yeah,
big sequencers that are
run in a distributed validator type
of setup where if one of them becomes
malicious, nothing happens. They just, you know,
like stay online like a multi-sig.
Yeah, I actually agree. I think
it nearly makes more sense at L2 than L1
bar just the maturity side of
things. Perfect.
Yeah, awesome. Yeah, we went very deep. Thanks so much for a dank episode, Oisín and Collin. And yeah, thanks for coming on to Epicenter. Maybe, yeah, before we wrap up, you want to shout out where people can learn more about Obol or, yeah, how they can find you. And, yeah, again, thanks so much for coming on.
Yeah, thank you guys for having us. It's been a great discussion. Yeah, to find out more about Obol, the best place is probably to plug into the community through Twitter. You can just find us at Obol Network.
And we have a cool ambassador program.
We have a great Discord.
We just, well, tomorrow we're actually launching our research forum with the distributed
sequencer piece as the first one for people to get some debate on.
And then we'll actually build out the Cosmos section of the research forum.
So if you guys are interested, get involved there in the next, well, yeah, in the coming days, you'll be able to see it.
And yeah, I'll pass it over to Oisín, but thank you guys for having us, and really appreciate it.
Yeah, nothing major to add.
Thank you very much.
This has been super fun, to talk through the nitty-gritty of validators, because normally we don't get to talk about bare metal versus cloud, versus Cosmos, or L2s and their architecture. So I'm grateful for this chance to get really into the technical nitty-gritty on a podcast. I hope they're all as technical as this one.
Thank you for joining us on this week's episode.
We release new episodes every week.
You can find and subscribe to the show on iTunes, Spotify, YouTube, SoundCloud, or wherever you listen to podcasts.
And if you have a Google Home or Alexa device, you can tell it to listen to the latest episode of the Epicenter podcast.
Go to epicenter.tv slash subscribe for a full list of places where you can watch and listen.
And while you're there, be sure to sign up for the newsletter, so you get new episodes in your inbox as they're released.
If you want to interact with us, guests or other podcast listeners, you can follow us on Twitter.
And please leave us a review on iTunes.
It helps people find the show, and we're always happy to read them.
So thanks so much, and we look forward to being back next week.
