Epicenter - Learn about Crypto, Blockchain, Ethereum, Bitcoin and Distributed Technologies - Taiko: Scaling Ethereum in a Decentralised Manner - Joaquin Mendes
Episode Date: May 7, 2025

Taiko is a decentralized, Ethereum-equivalent (type 1) rollup scaling solution which uses ZK technology. Taiko's goal is to scale Ethereum efficiently while maintaining security and decentralization. Being a type 1 zkEVM, Taiko retains full Ethereum equivalence, which creates a seamless DevEx, although this comes at the expense of UX, as slower proof generation is the main trade-off. Moreover, in order to stay true to its decentralised ethos, Taiko operates as a based rollup, meaning that transaction sequencing is performed by L1 validators.

Topics covered in this episode:
- Joaquin's background
- Loopring and Taiko's beginnings
- The 4 types of zkEVMs
- Taiko's ZK circuits vs. Polygon's
- Based sequencing
- Data availability and blob commitment
- Ethereum's role in the future
- The L2 landscape and its compromises
- Sequencer security model
- Dealing with MEV
- Based preconfirmations & Taiko ecosystem UX

Episode links:
- Joaquin Mendes on X
- Taiko on X
- Loopring on X

Sponsors:
- Gnosis: Gnosis builds decentralized infrastructure for the Ethereum ecosystem, since 2015. This year marks the launch of Gnosis Pay, the world's first Decentralized Payment Network. Get started today at gnosis.io
- Chorus One: one of the largest node operators worldwide, trusted by 175,000+ accounts across more than 60 networks, Chorus One combines institutional-grade security with the highest yields at chorus.one

This episode is hosted by Friederike Ernst.
Transcript
If you ask me, I think we are at 1% of the potential usage that I expect Ethereum and its roll-ups will have in the future.
If we expect Ethereum to be the new internet, and I know this is like a buzzword,
but if we expect that to happen, and if we expect that the blockchain is completely abstracted,
like the internet is abstracted today, I think we need a lot of scalability on the layer one,
not only to attend to the demand that happens on the layer one,
but also to attend to all the demand that comes from the layer twos.
Based sequencing means that the sequencing is not done by us, by Taiko.
We run one sequencer, but it's permissionless, as anyone can do it.
It's done by layer one validators.
Welcome to Epicenter, the show which talks about the technologies,
projects, and people driving decentralization and the blockchain revolution.
I'm Friederike Ernst, and today I'm speaking with Joaquin Mendes,
CEO at Taiko, which is a based layer two on Ethereum.
Before I talk with Joaquin, let me quickly tell you about our sponsors this week.
If you're looking to stake your crypto with confidence, look no further than Chorus One.
More than 150,000 delegators, including institutions like BitGo, Pantera Capital and Ledger,
trust Chorus One with their assets.
They support over 50 blockchains and are leaders in governance on networks like Cosmos, ensuring
your stake is responsibly managed.
Thanks to their advanced MEV research, you can also enjoy the highest staking rewards.
You can stake directly from your preferred wallet, set up a white-label node, restake your
assets on EigenLayer or Symbiotic, or use their SDK for multi-chain staking in your app.
Learn more at chorus.one and start staking today.
This episode is proudly brought to you by Gnosis, a collective dedicated to advancing a decentralized
future.
Gnosis leads innovation with Circles, Gnosis Pay, and Metri, reshaping open banking and money.
With Hashi and Gnosis VPN, they're building a more resilient, privacy-focused internet.
If you're looking for an L1 to launch your project, Gnosis Chain offers the same development
environment as Ethereum with lower transaction fees.
It's supported by over 200,000 validators, making Gnosis Chain a reliable and credibly neutral
foundation for your applications.
GnosisDAO drives Gnosis governance, where every voice matters.
Join the Gnosis community in the GnosisDAO forum today.
Deploy on the EVM-compatible Gnosis Chain or secure the network with just one GNO and affordable hardware.
Start your decentralization journey today at gnosis.io.
Hey, Joaquin.
Thank you for coming on.
Hey, thanks for having me.
So I've been following Taiko for quite some time.
But before we dive into the nitty-gritty technical details, tell me about yourself
and how you came to be in the Web3 space.
Okay, all right.
So as you mentioned, my name is Joaquin.
I'm CEO at Taiko.
I've been working for Taiko for the last couple years.
Before, I used to be head of partnerships.
So in the early testnet days, I started to boost up the ecosystem.
And recently, like half a year ago,
I jumped into the CEO position.
for which I essentially drive the business strategy,
also more boring stuff such as pure operations,
but mostly growth and strategic growth.
My journey in crypto started, I would say, five or six years ago,
not professionally, but more like as a hobby.
I guess like many people, I started just trading
and listening to podcasts and,
understanding the technology. But professionally, it started around three and a half or four years
ago. I started working at Polygon Labs doing mainly business development for a year and a half.
I did some chief of staff for one of the co-founders, but that was a short period.
Though it let me see some good things on how these kinds of startups are driven. I came from
the corporate world, from PwC, which is a big corporate consultancy.
So actually, you know, this helped me understand how a crypto startup runs.
And it helped me on my current role, right?
So, yeah, one year and a half or two years in Polygon, then a little bit more than two years
in Taiko.
That's my journey.
Tell us about Taiko and how Taiko got started.
So, Taiko got started, I don't know the exact dates, actually I should know, but around three-plus years ago.
But we can actually say it started earlier, because the founders of Taiko are actually the founders of Loopring.
And Loopring, for the audience that doesn't know about it, was the first ZK roll-up on Ethereum.
It wasn't a general-purpose one.
It was an application-specific one, essentially.
It started as a Dex, right?
Exactly.
Correct.
We had them on ages ago.
I was super, I was, yeah, I was really bullish on the tech.
It was a very cool application.
Exactly.
At the time, it was super novel.
It was pushing the boundaries.
Actually, that was back in 2018.
So, yeah.
Yeah, we talked with them a lot, because
we built CowSwap, which at the time was Gnosis Protocol.
So obviously there were a lot of similarities, and
they ended up focusing on different things than we focused on.
But yeah, there was definitely a lot of mutual appreciation and admiration.
Interesting.
That's very interesting.
Yeah, it was wild times.
That's actually when I started in crypto more or less.
And yeah, it was very novel at the time, right?
Scaling Ethereum, no one was building this roll-up thing.
And so the team at Taiko, the founders of Taiko, have a lot of experience in the scaling space.
What they found out was that, well, then in 2020 or 2021, you know, some roll-ups and side chains started to pop up,
and scaling Ethereum was a normal thing.
And they realized that all the solutions that were coming up to the market for scaling
Ethereum were not really like super aligned with Ethereum, which is normal, right?
The EVM, the execution environment is quite complex.
So trying to achieve compatibility or even equivalence with the EVM is really hard.
So some projects that started popping up at the time, they made some shortcuts, some compromises, which is, again, normal.
But Taiko was born with the idea of making, you know, as few compromises as possible to Ethereum when scaling it.
So it was born initially as a type 1 zkEVM.
Type 1 means the highest level of compatibility.
It's actually called Ethereum equivalence.
So the idea is that developers don't have to change their code
when they migrate from Ethereum to Taiko,
which has a lot of benefits, of course,
but also a lot of, you know, obstacles
when it comes to building the protocol,
for obvious reasons, I guess.
And so it was born as a type 1 zkEVM.
Now the concept has evolved as the technology has evolved, too.
And now we call ourselves a based roll-up, the first based roll-up on Ethereum, which, TLDR,
and I am sure we will expand on that later, means we are using the Ethereum layer one for sequencing,
instead of having a centralized sequencer doing all the job of, you know, ordering the transactions into blocks.
Yeah, absolutely. I think that was a very comprehensive introduction.
Maybe let's kind of dive into the type 1 ZKEVM thing first,
because that's kind of neatly self-contained.
And then we can go into the sequencing later.
Does that work for you?
Sure, sure.
So a zkEVM, what it does,
and correct me if I'm mistaken here,
is it just takes all the opcodes that we have on Ethereum
and transposes them one-to-one into ZK circuits, right?
And I remember back in the day when we thought this was impossible.
And then Jordi came along and he talked about this at EthCC, and everyone was blown away.
So maybe tell us a little bit about the provenance of this idea behind transposing all the known opcodes into ZK circuits,
and what the challenges were, and why we used to think this is not possible.
So, yeah, you defined very well.
Actually, when I was at Polygon, I had a chance to meet Jordi, Jordi Baylina.
He's actually from Barcelona, and I'm from Madrid.
So we got along well.
One of the co-founders that I used to work with as chief of staff was Antoni Martin,
and he was actually one of the founders of Hermez,
which then became the Polygon zkEVM when they acquired it.
But that's another story.
So essentially, that's exactly what you mentioned, right?
There are four types of zkEVMs; that's simply a taxonomy that Vitalik made.
From type 1 to type 4. Actually, this is not very much used
as of today, and I will explain why.
But back in the day, it was considered that there were four types.
There are some roll-ups in between two types, but that's a different story again.
Type 4 means, for example, zkSync was a type 4.
They made some changes to the EVM in order to make it more performant.
So they are not equivalent down to the opcode level.
They made these changes for an obvious reason.
The EVM was not created in the beginning to support roll-up functions.
Let's put it this way.
So it's not friendly.
It's not friendly for roll-up development.
So the closer you get to the type 1, the more sacrifices you make on performance.
But the closer you are to Ethereum... the type 1 is the only one that
is equivalent down to the opcode level, as you mentioned before.
So it was actually a trade-off.
I'm not saying being type 1 is better, or being type 4 is better, or being in between
is better.
It's just a trade-off.
If you get farther from the type 1 or closer to the type 4, you are optimizing on performance,
but sacrificing compatibility.
More performance, obviously, is better for some use cases and
some contexts and more compatibility is better for other stuff.
You can reuse the tooling.
For some applications, you don't have to perform extra security audits when
you migrate from Ethereum.
So from a security standpoint, it's kind of like better, but then, you know, performance
is also important, especially on roll-ups.
So it is a trade-off.
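The four-type taxonomy he describes can be sketched as a small lookup table. The labels below are illustrative summaries of the compatibility-versus-proving-speed trade-off, not measurements:

```python
# Vitalik's zkEVM taxonomy as a lookup table. The labels are
# illustrative summaries of the trade-off, not measurements.
ZKEVM_TYPES = {
    1: {"name": "fully Ethereum-equivalent", "compatibility": "highest", "proving_speed": "slowest"},
    2: {"name": "fully EVM-equivalent", "compatibility": "high", "proving_speed": "slow"},
    3: {"name": "almost EVM-equivalent", "compatibility": "medium", "proving_speed": "faster"},
    4: {"name": "high-level-language equivalent", "compatibility": "lowest", "proving_speed": "fastest"},
}

def tradeoff(type_number: int) -> str:
    """Closer to type 1: more compatibility; closer to type 4: faster proving."""
    t = ZKEVM_TYPES[type_number]
    return f"Type {type_number} ({t['name']}): {t['compatibility']} compatibility, {t['proving_speed']} proving"

print(tradeoff(1))
```

The single axis in this table is the whole point of the taxonomy: type 1 keeps every Ethereum detail (including proving-unfriendly ones), while type 4 redesigns the environment for fast proofs.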
I don't know if this answers your question.
Yeah, it does answer my question.
And so kind of the rationale behind this was that compute would be less of a bottleneck than composability.
Is that kind of like a fair one-line summary?
It's a fair one-line summary because I think the founders and when I learned about Tyco,
what we expected is that, you know, hardware and software would accelerate so much
that compatibility, like even being on the closest end of compatibility,
is not going to be an issue in the future,
because scalability will be solved at the layer one,
and then the roll-ups that will win
are the ones that are truly, truly, truly aligned with Ethereum,
because it enables a lot of different stuff that we can talk about later.
Are there any edge-case opcodes that were especially hard
to replicate in a ZK circuit? Because I remember times when it wasn't even clear
whether you could, in principle, replicate every single opcode in a ZK circuit.
So tell us about the varying degrees of difficulty in
transposing these opcodes.
I'm not aware of specific examples.
What I know about is that Taiko started, as I mentioned, like
three-plus years ago.
And if you think about it, like if you replicate an execution environment that already
exists, it seems like easier, right, than creating something new.
Like, for example, zkSync did.
But I know for a fact, and I'm not a developer, but I speak every day with the developers
of Taiko and other folks in the space that are on the more technical side.
I know for a fact that there probably were; if I had to mention one example, which I don't know technically, I know I would be leaving out a lot of others.
I know there were quite a lot of challenges when it came to achieving total equivalence down to the opcode level.
And that's why it took a little bit longer than expected to develop the roll-up.
And that's also why most of the roll-ups, and they also say this publicly, I'm not putting words in anyone's mouth,
like to start on higher types, on type 3, type 4 sometimes, even type 2 from the get-go, like I think Scroll did.
But they never start in the type 1.
They like to progressively go to type 1 because it's hard.
How do the Taiko ZK circuits compare to the zkEVM that Polygon built?
So are they the same circuits?
Do you kind of reuse this?
Or do you kind of come up with your own circuits?
I believe Polygon created a different circuit than the others.
I don't remember exactly the name.
I think it was Circom, but I'm not entirely sure.
I'm sure that we don't use the same.
Like that's for sure.
Actually, okay, let me rephrase this.
We didn't use the same because we are no longer a zkEVM.
And I think this is important to mention.
Yeah, we get there in just a bit, yeah.
Yeah, we transitioned from a zkEVM to a zkVM for many reasons.
So we don't use the same circuit; I think we are unique in that sense.
Perhaps we were similar to other type 2s like Scroll,
but certainly not like Polygon, and by that I mean the Polygon zkEVM,
of course, not the PoS, which is a side chain.
They were type 3 from the get-go, and we were a type 1, so not the same.
Can you describe to us what based sequencing is, or what's understood by this term?
Sure.
Okay. Based sequencing means that the...
Let's define the sequencing, right?
Sequencing means ordering the transactions and creating blocks.
Okay?
The power of sequencing is actually pretty big.
If we think about all the MED, you can extract.
Like, it's not trivial how transactions are ordered into a block, of course, right?
In most of the cases, if not all,
the sequencing is done in a centralized manner.
I'm talking about roll-ups.
Sequencing is done in a centralized manner
by a centralized party, to put it a way.
Based sequencing means that the sequencing is not done by us, by Taiko.
We run one sequencer, but it's permissionless, as anyone can do it.
It's done by layer one validators.
So not anyone; only layer one validators can sequence the transactions on the layer two.
And this has some pros and cons.
This unlocks some benefits, but also has some drawbacks.
You think it's fine if I go through those now?
Yeah, absolutely.
Maybe for that, tell us: is it an opt-in thing?
Do all layer one validators sequence blocks for Taiko,
or is it something that they have to choose to do, like you would in restaking?
They have to choose to do.
They have to run the Taiko nodes.
And what percentage of Ethereum nodes run the Taiko node?
Not many now.
And I will explain now, like very, very few.
Actually, at our peak, we had like 70-something validators proposing blocks on Taiko.
But there's an explanation to it because maybe if I talk about the drawbacks now, I can explain why it happens.
Okay, so I would start with the drawbacks and then I will go through the benefits.
Main drawbacks of base sequencing are one of them is on profitability.
And profitability, we can unpack this in two manners.
First of all, Taiko doesn't capture the revenue from the fees.
There are other roll-ups that are making a lot of money out of sequencing the transactions, which is fine.
We don't capture that value.
So that's kind of like a drawback if you think of a base roll-up as a business.
It's harder for us to make a profit in that sense.
but also it's a little bit harder for proposers on Taiko
to make a profit.
And I will explain why.
The reason why layer 1 validators haven't chimed in on Taiko
to propose blocks is because it is hard to make a profit right now
out of proposing blocks on Taiko.
The main reasons why, without getting into much detail,
is TLDR, we use the Ethereum block space in a less efficient manner
compared to other roll-ups,
which means it's kind of like more expensive for us.
Like, all the roll-ups right now,
of course, they batch a lot of transactions together
and create a roll-up out of them,
create a batch of transactions;
that's what a roll-up does.
But they also batch several blocks together
before they send them to Ethereum.
In our case, because we are layer-one sequenced,
we don't have the power to do that in a centralized manner,
and that complicates things.
Okay?
And so far, we have been essentially proposing each block separately to Ethereum.
So we are consuming more blob space from Ethereum.
And that raises the cost of proposing a block.
Every time you propose a block on Taiko,
you gather the fees from the users on Taiko,
but then you have to pay the layer one.
And if you cannot batch blocks together,
it essentially means that you are paying more per block.
You are paying more per transaction
on Taiko.
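The economics described here can be sketched with simple arithmetic: a roughly fixed L1 posting cost is divided across all the transactions it covers, so batching several L2 blocks into one L1 proposal lowers the per-transaction cost. The function name and all figures below are hypothetical, not Taiko's actual costs:

```python
def cost_per_tx(l1_posting_cost_eth: float, txs_per_block: int, blocks_per_batch: int) -> float:
    """Per-transaction share of a fixed L1 posting cost when
    `blocks_per_batch` L2 blocks share one L1 proposal.
    All figures are illustrative assumptions."""
    return l1_posting_cost_eth / (txs_per_block * blocks_per_batch)

# Posting each block separately vs. batching 10 blocks per proposal:
single = cost_per_tx(0.01, txs_per_block=1500, blocks_per_batch=1)
batched = cost_per_tx(0.01, txs_per_block=1500, blocks_per_batch=10)
assert batched < single  # batching amortizes the fixed L1 cost over more transactions
```

This is why a centrally sequenced roll-up, which can hold blocks back and batch them, has a structural cost advantage over one that proposes each block separately.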
And that's a very good question.
So the block time on Taiko right now, it varies, but it's around 30-something seconds to one minute, more or less.
So you have to send an Ethereum transaction every half minute to a minute.
Correct. And that's, that is the second issue, right? There is, there is a UX problem.
It's not only that the block time is so big.
It's also that the users have to wait that long when they're using Taiko today.
They have to wait.
If you go to Taiko, you perform a simple swap.
You probably have to wait 30 plus seconds, which is really bad.
The reason why it's 30-plus seconds, and not even more or even less, is...
okay, so the minimum time we can wait is 12 seconds, which is the block time of Ethereum, because we are layer-one sequenced.
Okay, that's the minimum, but it actually takes longer. The reason why it takes longer is because of two things.
Taiko, as I mentioned before, we run one of the proposers, right?
Right now, with the current setup
and with the current costs that we're paying
to the layer one,
the only way to make a profit from a block
is when the block is like completely,
completely full of transactions.
And we actually propose some very fat blocks,
like 1,500 transactions per block or something like that.
Right now the activity is a little bit lower,
but we have proposed that kind of fat blocks.
The time it takes to fill a block,
with the current activity,
and we are averaging around
two million transactions per day,
is around one to two minutes,
which is even more than the 30 seconds
that I mentioned, right?
So if Taiko's proposer didn't chime in,
it would mean that
we would have to wait one to two minutes
every time the block is proposed.
If we make it completely permissionless
and we let only the layer one validators
propose the blocks,
they will only do it when they're making
a profit. And they are only making a profit when the block is completely full of transactions. And
the block is only completely full of transactions after one minute or two minutes. So that means the
UX is even worse. Users would need to wait much more. What we're doing at Taiko is we're sacrificing
ourselves, our treasury, to put it a way. And we are like a bot. Every 30-plus seconds to one
minute, we just push blocks all the time, even if they are empty.
So some days we are losing money just to ensure that there is a decent UX.
So that's the main reason why layer one validators haven't chimed in yet.
Taiko has been on mainnet for almost one year,
and only on less than half of those days was it naturally profitable.
When Ethereum was super cheap on the layer one,
and when we had a lot, a lot of transactions coming,
then they chime in and propose.
But if not, it's hard for them to make a profit.
Have you thought about raising fees on Taiko?
Because that's the other lever that you can move, right?
We have experimented with that,
but we can only raise fees on our proposer.
So for the blocks that are proposed by Taiko,
the users will have to pay the fee that we set,
or the transaction would be sitting in the mempool forever.
But there's always a chance that community proposers chime in,
they can do whatever, it's fully permissionless,
and they can pick up those transactions that come at a cheaper priority fee
and get proposed.
So it's very tricky.
It's tricky.
But by the way, this will be solved.
It's got the same fee model as Ethereum, right?
We have the base fee and then you have the priority fee.
EIP-1559, correct.
Base fee gets burned.
In this case, it's not technically burned.
It gets sent to the Taiko treasury.
That's one of the few ways that we capture value from the fees.
Base fee is like almost nothing compared to the priority fee.
Everything comes in the priority fee.
A priority fee is what goes to the proposer, aka the layer 1 validator.
All these problems that I mentioned before
will be solved very soon,
but we will get to that.
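The fee split he describes, base fee to the Taiko treasury instead of being burned, priority fee to the proposer, can be sketched as follows. The function and numbers are illustrative, not Taiko's actual accounting:

```python
def split_fees(gas_used: int, base_fee_per_gas: int, priority_fee_per_gas: int) -> tuple[int, int]:
    """Split an EIP-1559-style fee the way described in the episode:
    on Taiko the base fee goes to the treasury rather than being burned,
    and the priority fee goes to the proposer (the L1 validator).
    Values are in wei and purely illustrative."""
    to_treasury = gas_used * base_fee_per_gas
    to_proposer = gas_used * priority_fee_per_gas
    return to_treasury, to_proposer

treasury, proposer = split_fees(gas_used=21_000, base_fee_per_gas=10, priority_fee_per_gas=1_000)
assert proposer > treasury  # as noted above, the base fee is tiny next to the priority fee
```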
Yeah, maybe to complete this:
so you commit every block to Ethereum L1,
and you actually use the blob space for data availability, right?
I mean, obviously, it would be weird to commit every block to L1 directly and not use the blob space.
But tell us about the cost implications of that,
and whether you have an escalation mechanism if that gets too expensive.
Because for the transaction that you're sending to L1,
that is a one-time fee, or one-time-per-block fee, that you have to pay,
and it obviously depends on what the gas cost is on Ethereum at that point.
Correct.
But committing all the other transactions to blobs, that must be very costly.
Correct.
It is.
So to go part by part, indeed, we use Ethereum DA.
We actually use almost all the services that Ethereum possibly offers, right?
If we remember the taxonomy, right, you use the settlement, you become a layer two,
you use data availability, you become a roll-up.
If you use an alt-data-availability layer, you're not a roll-up, actually.
You're a validium, right?
We also use the sequencing.
That's why we are a base roll-up.
We don't use the execution yet, which would make us a native roll-up,
because that's not possible with the current technology.
But to answer your question, yes.
We use the Ethereum DA not only for
the obvious reasons that I already mentioned,
but also because if we didn't use Ethereum DA,
we could be breaking the composability
that we are trying to achieve with Ethereum,
and we will get to that.
We have found some bottlenecks.
Actually, when Taiko launched in May last year, end of May,
we started with low activity,
but then from the ecosystem side that I was leading at the time,
we did a lot of activations and we started to have like a lot of transactions.
Actually, we peaked at five million transactions in one day, which is a lot.
But considering how we use the blob space on Ethereum, that also meant that we,
Taiko by ourselves,
were using like half of the blob space.
And then the other half was distributed across all the rest of the roll-ups.
And then we were pushing the limit of the blob space,
and we were making it more expensive, right?
Well, we and all the rest of the roll-ups, of course.
In that time, what we did,
we offered the possibility of using either blobs or calldata.
Because sometimes calldata was even cheaper, right?
We don't think it's going to be the case anymore,
especially with the upgrades that are coming to Ethereum;
starting with Pectra, we're going to double the blob space,
and this is just the beginning.
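The blobs-versus-calldata choice he mentions can be sketched as a simple cost comparison. The gas accounting below is deliberately simplified (16 gas per calldata byte as in EIP-2028's nonzero-byte price, 128 KiB blobs per EIP-4844), and the function itself is hypothetical:

```python
def choose_da(payload_bytes: int, blob_gas_price: int, calldata_gas_price: int) -> str:
    """Pick the cheaper data-availability route for a batch.

    Simplified, illustrative accounting: calldata at ~16 gas per byte
    (EIP-2028 nonzero-byte price); blobs priced per full 128 KiB blob
    (EIP-4844). Real pricing has more moving parts."""
    BLOB_SIZE = 131_072  # bytes per blob (EIP-4844)
    blobs_needed = -(-payload_bytes // BLOB_SIZE)  # ceiling division
    blob_cost = blobs_needed * BLOB_SIZE * blob_gas_price
    calldata_cost = payload_bytes * 16 * calldata_gas_price
    return "blobs" if blob_cost <= calldata_cost else "calldata"
```

Because blob gas and regular gas are priced by separate fee markets, the cheaper option genuinely flips back and forth under congestion, which is why offering both routes made sense.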
So, yeah, we plan to continue using Ethereum DA.
We have faced situations where Ethereum DA was way, way, way expensive.
We still used it.
And, yeah, we have the good news coming from the Ethereum Foundation
that they are planning to scale the layer one,
which is going to be super beneficial for us and for our future.
Does that answer your question?
Kind of, but I want to push back there a little bit.
So you pass on all the costs to the user, right?
So you don't subsidize the blob space, do you?
So, okay, not really.
Okay.
On our proposer, we didn't raise the fees to offset that.
What happened is we spent more.
We wasted more money.
Yeah.
Okay.
Yeah.
When you zoom out,
What do you think Ethereum is going to be used for in, say, two, three, four, five years?
And what kind of level of throughput do you think Ethereum needs for that?
And if you post all L2 transactions to blobs,
do you think that's sustainable?
Okay.
I think the layer one itself, there's a lot of debate around this.
There's no right answer to this question, right?
My opinion is that there's different ways of seeing this, right?
Like right now, unfortunately, in 2025, blockchain is still used by 95% degens doing DeFi,
farming airdrops, you know, it's not mainstream, right?
So the current scalability that Ethereum, that roll-ups can offer seems like enough, right?
Like not many times we have pushed Ethereum to the limits.
But if you ask me, I think we are at 1% of the potential usage that I
expect Ethereum and its roll-ups will have in the future.
If we expect Ethereum to be the new Internet,
and I know this is like a buzzword,
but I really don't know how to say it in a different manner.
I truly believe Ethereum can be the home for the Internet moving forward
in many, many different use cases, right?
If we expect that to happen,
if we expect that the blockchain is completely abstracted,
like the Internet is abstracted today,
I think we need a lot of scalability
on the layer one
not only in order to
attend to the demand
that happens on the layer one,
but also to attend to all the demand
that comes from the layer twos,
which I think is going to be
what occupies
most of the block space
on the layer one. If you ask me
about potential use
cases that will happen on Ethereum and its
roll-ups
I think my answer
can be a little bit disappointing,
but this is what I truly believe.
I truly believe that tokenization is going to be a real thing.
And tokenization is so broad,
but that's how I envision the internet of value,
the money on the internet,
and everything will happen on a blockchain via tokenized assets.
And not only for, you know... tokenization sounds like speculation all the time,
but tokenization can be much more than that.
It can be purely about efficiency compared to the current systems. An easy
example: a private bond, how you raise funds in a private manner by issuing a bond. If you
compare the Web2 experience versus Web3, it's a no-brainer.
There's no way it doesn't happen, in my opinion, in the future. But
also, for example, I think Ethereum, or blockchains, however
we want to put it, will be a big home for games.
And games is at its lowest point, I think, right now,
Web3 gaming, in terms of its reputation.
But I like to think it from a utility perspective,
more than a real data perspective.
Even though most of the transactions today in blockchain happen on games,
I think most of them are farming, pure farming.
That's not what the gaming wants, right?
You want to have fun.
and then if you can earn something, oh, cool.
That's on top.
But you don't play to earn.
I don't think that is sustainable in the long term.
But if I think about the future,
if we completely abstract the blockchain
and blah, blah, blah, all these, all these buzzwords,
if a user can have the same game in Web 2 and Web 3,
but in Web 3, they can have ownership
and the user experience is exactly the same.
I don't think anyone will choose to go
the Web 2 route, right?
So that's also why I think gaming
is one of the things that will happen, for sure.
And the gaming industry is massive, right?
If we onboard all the gamers,
we need more scalability on layer 1,
like right away, for sure,
and also on the layer 2's.
I don't disagree.
So I think
blockchains are 100% slated
to be the substrate
for the internet of value.
And I don't just mean financial
transactions, but also things that have value that we don't technically
think of as financial transactions,
so things to do with identity and reputation and games and so on.
100%.
Where I'm pushing back is: if you look at the Ethereum roadmap, do you see enough throughput
there to cover this in any way?
Even if you say, okay,
we won't settle all the transactions on Ethereum, but we still want
this to be a significant part of the global settlement.
Do you think the Ethereum roadmap is amenable to that?
Exactly.
So what I was getting at is that if you want to have all this value from not only
financial, but also
like all the
industries that can benefit from blockchain,
you definitely need a lot of
scalability. So if you ask me
about the roadmap, yes. I see
that Ethereum has a roadmap
that can fit
all this demand.
Are they going to be able to
make it a reality
from a technical standpoint? I don't know.
I'm not a developer. I don't know.
I don't work for the Ethereum Foundation, but if
we hear the latest, right?
Not only from Pectra, but Fusaka,
and the recent announcement
to potentially transition
from the EVM to RISC-V,
it seems the
numbers that, you know,
the Justin Drakes of the
Ethereum Foundation are talking about are around
10K TPS on the
layer one. I don't even
remember what they said on the layer twos, a million
TPS, something like that. I don't know.
But it's in the technical
roadmap to have enough
throughput to handle all these operations.
Will it happen or not?
I can't tell. I don't know.
Okay. I want to come back to Taiko in just a bit,
but maybe while we are on the bigger picture,
Let's talk about the L2 space as a whole.
So we were talking about the Ethereum space, and
we included all the various L2s on Ethereum in that.
But we face a really difficult problem set here as well,
in that, while we talk about this as one big ecosystem, it really isn't.
It's very fragmented, and liquidity is fragmented between the L2s.
If you want to exit an asset from, say, Optimism, and you want to take it to
Arbitrum and then back, it'll take you two weeks return, right?
You could mail a USB stick in the same time.
So that fragmentation, and this lack of interoperability between L2s on
Ethereum.
And I mean, that was very much the idea behind why we wanted L2s in the first place,
because in principle, you could have communication between the L2s that was, you know, at block times and trustless.
And we don't have that.
What's the problem, in your definition?
And how do we tackle this?
Do we tackle this?
That's a very good question. First of all, if you look at the big picture of how Ethereum has been evolving, I think it's a beautiful path in the sense that we have one problem, we fix it in a way, but that fix generates a new problem that is perhaps less important than the initial one; then that fix generates another one, and step by step we keep fixing until we get closer to the endgame. Before jumping into the problems these layer 2s are generating, I want to give a shout-out to them. From optimistic rollups to ZK rollups — and I'm very critical of the current rollup setup, and of the Optimisms and Arbitrums, on X, and I'm critical for a reason — I also really appreciate the value they have brought to the industry and to Ethereum's ecosystem. A while ago Ethereum had a big scalability problem. We all remember the CryptoKitties episode and how you needed to pay 100 bucks for a transaction on Ethereum. These protocols came in with rollups, they did it the best way they knew, and they solved that scalability issue. Now we can process transactions — not on Ethereum exactly, but more or less on top of Ethereum — for cents. That's a good solution.
But now the problem is what you mentioned: we have a bunch of rollups that don't communicate with each other. And it's not only that they don't communicate, and that you have to wait a couple of weeks in the case of optimistic rollups to go back and forth; it's also that liquidity is completely fragmented across all of them. And that's a big problem. If you go to Solana and ask "where do I build?" — you build on Solana. Full stop. That's it. Easy to understand for the developer, easy to understand for the user, easy to understand for the trader who knows where the value is captured. It's an easy narrative. If you go to Ethereum, you have a big question: where do I build? Do I build on Arbitrum? On Optimism? On any of the Optimism Superchain cluster? On Polygon? Who captures the value? Why isn't Ethereum layer 1 capturing much value from this one or that one? It is a problem, right?
But getting to the point and answering your question: I think we have solved the scalability issue, and now we are facing the new problem, which is fragmentation. Right now, all the rollups are trying to fix it within their own clusters. So instead of having a bunch of smaller islands that don't communicate with each other, they are trying to merge some of those islands into bigger ones — but they are still islands. We have the Arbitrum Orbit chains, I think that's the name; the Optimism Superchain; zkSync's Elastic Chain. They're trying to make interoperability work within those clusters, which is very good, but the clusters still don't communicate with each other. And most importantly — and this is the main reason I'm critical and think this is not the long-term solution — they are not composable with the most important chain, which is Ethereum. They are only interoperable within each other, not interoperable or composable with Ethereum.
The only way to be — well, this is another buzzword Justin Drake has been using a lot — is universal synchronous composability. Why is that? Synchronous composability is the highest degree of interoperability you can achieve, and to have universal synchronous composability you need two elements: a shared sequencer, and real-time ZK proofs. On the first one, the shared sequencer: you can achieve interoperability within clusters by sharing the same sequencer, but if you are not a based rollup, you will never share the sequencer with Ethereum — which means you will never share liquidity with Ethereum, you will never be composable with Ethereum, you will never have the highest level of interoperability with Ethereum. That's a big problem long-term. I don't envision the Ethereum ecosystem long-term without synchronous composability with Ethereum. That's one thing. The second condition is instant, real-time ZK proofs, and this is not possible with today's technology. That's why universal synchronous composability doesn't exist today. So the solutions these rollups are proposing are very good for the short term, because they are patching interoperability with other standards — Optimism recently announced how they are making all the Superchain chains interoperable; please don't ask me how they're doing it, I just know they announced it. But I consider all of these short-term solutions. The only long-term solution I can think of that can actually make Ethereum feel like a whole — so that I'm transacting on Ethereum and I don't even know whether I'm doing it on Taiko, on Ethereum itself, or on another based rollup launched as part of the horizontal scaling we are proposing — the only way to make it feel like one single chain again is based rollups plus native rollups. But that's a different story.
But based rollups are inherently more expensive than some of the alternatives, right? Because you use all the Ethereum infrastructure, and that comes at a cost, because it's scarce — and even if you make it scale, it's still scarcer than the equivalents where you use, say, EigenDA, or a rollup stack from Gelato, and so on. Those will always be cheaper than using everything Ethereum-natively. So what's your answer to the question of which applications need that highest level of security? And for which applications is it fine to compromise on the trustlessness and permissionlessness that you don't necessarily need — and then also don't want to pay for? In my head this has to play out as a spectrum, where I as a dapp developer have to decide what security standard I need and what price point I'm willing to pay for it.

Yeah, that's a good point. There are some applications that are definitely willing to compromise on that. Games, for example: you don't need the maximum level of security, censorship resistance, and decentralization for in-game transactions. You may want it for DeFi transactions, or for institutional or tokenization use cases, where you don't want to face the risk of being censored, or of the chain halting — when you have one sequencer, that's a single point of failure right there. There are use cases where, the lower the transaction value, the less you care about the chain you launch on.
But to answer your first comment, that based rollups are more expensive: by definition, it seems like they are. I want to jump into the preconfirmations topic, because it's unclear whether that still holds once based preconfirmations arrive. But it seems like they can be a little more expensive. Still — even in the darkest days, when we were consistently using more blob space than the Ethereum limit, I think users were paying less than 10 cents on Taiko, which is still cheap. And this is just the beginning. That's why I say that all these solutions offering sub-cent transactions and sub-second confirmation times are short-term solutions. If we think long-term, Ethereum itself is planning to scale massively, not only on the gas limit but also on blob space, and I think price is not even going to be a conversation — it's going to be like a commodity that no one cares about. So I don't think cost is going to be a problem for based rollups. I think the benefits, especially the composability, are going to be a game changer — a decision-maker for both users and developers.
Cool. Maybe let's loop back to explaining the other half of the Taiko tech stack. We've talked about the type 1 zkEVM pretty exhaustively, so let's talk about the security model of the sequencing — how you sequence the blocks and how you commit them to L1, because you have a very interesting hybrid model here.

So on the sequencing side, from a technical standpoint, I'm not aware of anything super special about how the ordering happens in terms of security. What brings security on the sequencing side is that it is permissionless. Being permissionless can make something more secure if it also comes with decentralization — and decentralization means eliminating single points of failure, which is what makes something more secure. But I would say — and correct me if I'm not answering your question with this — that the security model Taiko offers comes more from the proving side than from the sequencing side.
And why do I say so? Because one of the things we realized — and not just us; Vitalik has said it many times — is that all these ZK circuits are so complex, with so many lines of code, that it's almost a 0% probability that there are no bugs: it's very likely that somewhere in all those lines of code there will be bugs. That's a big issue, because you want to say you leverage the security of Ethereum — and we indeed do, probably more than other rollups, because of the characteristics of being a based rollup. But if one of the most important parts of the technology, the ZK circuits, has close to zero chance of being bug-free, boom — the whole narrative falls apart. So what we did is transition from the zkEVM — there were other reasons too, also on the complexity side — towards a zkVM approach. And with the zkVM, we found it easier to offer our users higher security levels.
Why? Because there are different teams building different ways of generating proofs — different minds working on different codebases — and it's now very unlikely (before, it was very likely there were bugs) that the same bug repeats across the code of all these different teams building different proof systems. So we rely on a multi-proof system. We use TEEs, which is not ZK — we use SGX, Intel's trusted execution environment, for generating proofs. But we also use ZK with Succinct Labs' SP1, and ZK with RISC Zero. We alternate between these systems when we generate proofs, and that minimizes the risk of having the same bug across all of them.
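The multi-proof idea described here can be sketched in a few lines. This is an editorial illustration, not Taiko's actual code: I assume the simplest possible rule — a block is accepted only when every configured proof system independently verifies it, so exploiting a bug would require reproducing it in all codebases at once. All names and the toy verifiers are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ProofSystem:
    name: str
    # (block_hash, proof) -> whether this system accepts the proof
    verify: Callable[[bytes, bytes], bool]

def accept_block(block_hash: bytes, proofs: dict[str, bytes],
                 systems: list[ProofSystem]) -> bool:
    """Accept a block only if every required system has a valid proof for it."""
    for system in systems:
        proof = proofs.get(system.name)
        if proof is None or not system.verify(block_hash, proof):
            return False
    return True

# Toy stand-ins for an SGX attestation check and two zkVM verifiers.
systems = [
    ProofSystem("sgx", lambda h, p: p == b"sgx:" + h),
    ProofSystem("sp1", lambda h, p: p == b"sp1:" + h),
    ProofSystem("risc0", lambda h, p: p == b"risc0:" + h),
]

h = b"\x01" * 32
proofs = {"sgx": b"sgx:" + h, "sp1": b"sp1:" + h, "risc0": b"risc0:" + h}
print(accept_block(h, proofs, systems))                    # all agree
print(accept_block(h, {"sgx": b"sgx:" + h}, systems))      # ZK proofs missing
```

The point of the sketch is the shape of the security argument: the attack surface shrinks from "a bug in one circuit" to "the same bug in every independent implementation."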
How do you alternate? Is it on a block-by-block basis, can block builders determine what they use, or what are the parameters?

So it has been changing, because when we started doing this, the cost of generating ZK proofs was so high that we couldn't make a high percentage of blocks ZK-proven. As these teams — SP1, RISC Zero — enhance and improve their technology and make it faster and cheaper, we are trying to make a higher percentage of blocks mandatorily ZK-proven. It's a pity I don't have the exact number at hand, but I think around 20% of the blocks proposed on Taiko are ZK-proven now, and we make it mandatory — every so often there is a block that must be ZK-proven. With our own proposer, the one Taiko runs, we now prove 100% of the blocks with ZK, but we don't require that of community proposers, because it's really expensive.
If, say, 20% of the blocks are ZK-proven, does that give me any guarantees for the blocks in between that are not? If there's a ZK-proven block downstream of mine, does its security guarantee transfer to my block, just because they were able to build the ZK proof on a block that builds on mine?

That's correct — you have fewer security guarantees on all those other blocks, because they are using TEEs, using SGX, which is also secure — maybe more, maybe not, than fraud proofs.

I mean, it's a trusted execution environment, right? So it's a hardware thing, where you trust Intel that the thing is sound.

Exactly. Exactly.

But if I'm on one of the TEE blocks, does the ZK proof of the block after mine implicitly validate my block and give it that ZK guarantee?

I don't think so. I'm not entirely sure, but I don't think so. Though I want to mention: yes, it's not ideal security-wise right now. It's better than some alternatives, but worse than where we're trying to get. This is training wheels for us. Our goal is to be fully ZK — actually, we need it to reach stage 1. That's the beauty of the pressure L2BEAT puts on all of us. We're waiting for all these zkVMs to improve — and they're doing a fantastic job — to become faster and cheaper, so we can leverage them and actually offer 100% ZK coverage. But we cannot do that as of today.

Okay. But then,
on every block that a proposer puts forward, they can decide the proving system, and their compensation doesn't vary? So if I do the more onerous ZK proof, I'm not rewarded more than if I just do a TEE proof?

You raise a very nice topic. The way provers are rewarded is like a free market, and it's interesting because some game theory plays out here. The proposer gets the fees from the users, and they need to collect enough fees to do two things: one, propose the block on layer 1, and two, pay the prover. The prover puts a price on the block, so provers compete with each other. Sometimes there may be off-chain agreements — we don't know — or the same proposer can choose themselves as the prover, which happens a lot, and that makes sense, of course. I'm not 100% aware of how the price varies, but I think it's common sense that when it comes to generating a ZK proof, which is more expensive, the price the proposer needs to pay the prover will be higher, because the prover needs to make a profit.
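The proposer's calculation described here can be written down as a toy model — my assumptions, not Taiko's actual fee mechanism, and all numbers are made up: the proposer only profits if user fees cover the L1 proposal cost plus the prover's quote, and ZK quotes sit above TEE quotes.

```python
def block_margin(user_fees: float, l1_cost: float,
                 quotes: dict[str, float], required_proof: str) -> float:
    """Proposer's profit on a block; a negative margin means it's not worth proposing."""
    prover_price = quotes[required_proof]
    return user_fees - l1_cost - prover_price

# Hypothetical prices in ETH: TEE attestation is cheap, a ZK proof costs more.
quotes = {"tee": 0.001, "zk": 0.010}

print(block_margin(0.02, 0.005, quotes, "tee"))  # comfortable margin
print(block_margin(0.02, 0.005, quotes, "zk"))   # thinner margin, still positive
print(block_margin(0.01, 0.005, quotes, "zk"))   # the ZK quote eats the margin
```

This is why, as he says, mandatory-ZK blocks effectively require higher fees: the same user demand has to clear a higher proving bar.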
But is it a double-sided marketplace? As a customer — say, a user submitting a transaction, or more likely a dapp developer who deals with transactions from their users — can I say I want to be on Taiko, but I only want to be included in ZK blocks?

No, no, that cannot happen. But it doesn't change the fee for the user, or how the fee is calculated, because when transactions sit in the mempool they can be picked up by whoever — there's competition for that too. So users don't see their fee change every time. I use Taiko consistently, and I don't see spikes within the same day. So it doesn't affect the user, to be honest.
Okay.
One thing we haven't talked about at all: in a world where you don't have a centralized sequencer, you also don't have a good way of dealing with MEV extraction anymore. So what's your philosophy on that at Taiko?

So we don't extract the MEV ourselves, which is one of the drawbacks I mentioned at the beginning of the interview — the layer 1 validators are the ones who capture the MEV. Our philosophy is: first of all, we knew this when we decided to go this route. We knew it was going to be harder for us to make a profit, which is fine, honestly. Ethereum itself doesn't make profit the way Base does, for example, and it's the strongest chain. We want to replicate Ethereum even on the business model — though ours is a little more solid, because we capture the base fee. But all the MEV we give up; we give it to the layer 1 validators, and we're fine with that. There's this conversation on crypto Twitter about whether we should tax the layer 2s. No, please — let's not introduce human interference in a free market. If the layer 1 offers better services — better DA, better settlement, better everything — then the rollups will decide not to go to alt-DAs; they will happily pay Ethereum for it. We don't need to tax them. In our case, we're already paying the tax by sacrificing this MEV. And we're also happy with that because we help make Ethereum more valuable: we provide another source of revenue for validators. I mentioned 10 or 20 minutes ago that not many validators are chiming in — that is going to change, and we'll get to why with based preconfirmations. Giving validators another source of revenue makes being a validator more profitable, which can very indirectly impact the asset. So we are happy with the trade-off we're making here.
Okay, we're almost at time. So maybe before we round up, tell us about the Taiko ecosystem. What kinds of dapps are on Taiko, and how do you see that changing in the future?

Okay — do we have time to go into based preconfs as well?

Based preconfirmations? Yeah, let's quickly talk about based preconfirmations and then we'll talk about the ecosystem after.

Okay. Yeah, because I think it's much more exciting, to be honest. Based preconfirmations — I've been teasing them for a while now, so I'll be very quick. All those user-experience problems on Taiko I mentioned — you have to wait a long time, proposers don't make much money, we have to propose blocks with a bot, and so on — all of that gets fixed with based preconfirmations.
What based preconfirmations do is connect layer 1 validators with users, in a way that those layer 1 validators make a promise to the user that they will include their transaction in a future block. This is what happens in centralized sequencers all the time, but here it is done in a decentralized manner. And it's very cool. For the user, it means instant confirmation of the transaction, which means Taiko doesn't need to push empty blocks every 30 seconds anymore to guarantee a decent UX — the UX is guaranteed already. So all the blocks that get proposed will be profitable by design, because we can wait for them to be proposed, which will be a call to action for layer 1 validators. So it fixes the UX on based rollups, which is very important, and it does so in a decentralized manner. How? Every layer 1 validator can offer itself as a preconfirmer. They get a small tip from the user, they need to put up a bond — a stake, in a restaking model, EigenLayer-style — and if they don't fulfill the promise to include the transaction in a future block, they get slashed. The beauty of it is that it is economically secure. A centralized preconfirmation system is a "trust me, bro" security guarantee; in this case, it's economically secure. And this way we are achieving — again, this sounds super scammy, but it's actually true — decentralization, security, and throughput, in a decentralized manner. I hate to talk about the trilemma because it sounds super scammy, but it's actually true. This is coming in less than one month. Justin Drake and the Ethereum Foundation — shout-out to them, because they have been leading this effort, of course alongside Taiko, since this is useful for based rollups especially. And yeah, we're super excited about it, because it fixes most of the problems we have, coming in less than a month.
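The economic security he describes — bond, tip, slashing — can be sketched as a minimal model. This is an illustration of the incentive structure only, with made-up numbers and names, not the actual preconfirmation protocol.

```python
from dataclasses import dataclass

@dataclass
class Promise:
    tx_hash: str        # transaction the validator promised to include
    deadline_slot: int  # latest L1 slot satisfying the promise
    tip: float          # user's payment for the preconfirmation

@dataclass
class Preconfer:
    bond: float         # stake at risk (restaking-style)
    earned: float = 0.0

SLASH_AMOUNT = 1.0      # hypothetical penalty per broken promise

def settle(preconfer: Preconfer, promise: Promise,
           included: dict[str, int]) -> None:
    """Pay the tip if the tx landed by the deadline; otherwise slash the bond."""
    slot = included.get(promise.tx_hash)
    if slot is not None and slot <= promise.deadline_slot:
        preconfer.earned += promise.tip
    else:
        preconfer.bond -= SLASH_AMOUNT

p = Preconfer(bond=32.0)
settle(p, Promise("0xaaa", deadline_slot=100, tip=0.01), {"0xaaa": 99})
# kept promise: tip collected, bond untouched
settle(p, Promise("0xbbb", deadline_slot=100, tip=0.01), {})
# broken promise: bond slashed
```

The design choice this captures: a rational validator keeps the promise whenever the slash exceeds whatever they could gain by breaking it, which is what replaces "trust me, bro" with an economic guarantee.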
Super nice — looking forward to that. Maybe to come back to my ecosystem question: let's round it up by talking about the sort of dapps that live on top of Taiko.

Okay, we have around 140 live dapps on Taiko, which is not much, but not bad considering how hard it's been to grow the ecosystem with the current UX. Most of them, I would say, are DeFi and gaming. Gaming — because even though I mentioned before that gaming may not be the best suited for a based rollup, we are huge believers in gaming, we believe games will be well suited for our rollups or others, and they provide the largest share of transactions. So we have a bunch of games that are quite good, and a bunch of new ones coming. On DeFi, we have everything you can imagine: DEXes, lending and borrowing — all the basics. We don't have strong perp DEXes, the reason being that without a good UX you cannot have a perp DEX — you can lose a lot of money if you have to wait a minute for your transaction to go through. So we're going to push harder on perp DEXes.
Moving forward, and to finish up on ecosystem — if I have 30 more seconds, I'd like to mention —

Go ahead. Yeah.

— something called Taiko Takeoff. Taiko Takeoff is our biggest bet on ecosystem growth moving forward. We have realized that even in the biggest ecosystems, the biggest chains with thousands of applications, at the end of the day users don't use a thousand applications — they use 20, 30, or in the best cases maybe 50 of them. So what we're doing with Taiko Takeoff is selecting a few heroes on Taiko and accelerating them very heavily, in every manner you can imagine. I'm not going to get into details, but we try to make heroes in our ecosystem. What we ask in return — we usually select early-stage projects — is that when they launch their token, they launch it natively on Taiko, so that we generate organic TVL without having to spend so much on incentives, which, trust me, costs a lot of money to maintain decent levels of liquidity. And we also ask them to airdrop our community in several ways, one of which is airdropping TAIKO holders. So we're also trying to make a business use case for TAIKO through this Taiko Takeoff program. And it's quite exciting, because getting alignment between the holders, the community, and the applications we heavily accelerate is, we believe, the way to grow an ecosystem, and it's what we're going to do moving forward.
Super cool. So where can we send people who are interested in Taiko, building on Taiko, or becoming a dapp user on Taiko?

Okay, taiko.xyz is the website — I would especially point people to the website, because you have everything you need there. The Twitter is taikoxyz as well, I was just checking; all the updates happen there. We also have a YouTube channel where we post videos from the last Based Rollup Summit we did in San Francisco — we're doing another one in Cannes with a lot of good names coming — so you can check out the YouTube channel too. And our Discord, of course.

Cool. Thank you so much for coming on, Joaquin. It's been a pleasure.

Yeah, thanks a lot. It's been a pleasure for me too.
