Epicenter - Learn about Crypto, Blockchain, Ethereum, Bitcoin and Distributed Technologies - Solana: From On-Chain Nasdaq to the Pump Fun Craze - Anatoly Yakovenko
Episode Date: March 15, 2025

Solana needs no introduction. Ever since its inception, it has pushed throughput scaling on a single chain, without the need for sharding or rollups. Despite ups and downs that culminated at the bottom of the bear market after the FTX crash, it managed not only to survive, but to build a vibrant community around crypto's (arguably) most prominent PMF (thus far). Join us for a fascinating discussion and learn about Anatoly's take on controversial topics such as MEV, concurrent block leaders (the equivalent of Ethereum's PBS proposal), L2 rollups, Solana economics, how to tackle potential exploits, and more.

Topics covered in this episode:
How the original Solana vision turned out
What makes blockchains valuable
MEV & program writable accounts
Concurrent block proposers
Current bottlenecks for scaling Solana
Mainnet vs. L2 rollups
Firedancer upgrade
Halting the network vs. rollbacks
Solana's scaling roadmap
DoubleZero
Worst hacks on Solana
UI exploits, Bybit hack and smart contract security
Solana economics and the SIMD-0228 proposal
Future improvements
Use cases for blockchains
Solana Mobile

Episode links:
Anatoly Yakovenko on X
Solana on X
Solana Mobile on X

Sponsors:
Gnosis: Gnosis builds decentralized infrastructure for the Ethereum ecosystem, since 2015. This year marks the launch of Gnosis Pay, the world's first Decentralized Payment Network. Get started today at gnosis.io
Chorus One: One of the largest node operators worldwide, trusted by 175,000+ accounts across more than 60 networks, Chorus One combines institutional-grade security with the highest yields at chorus.one

This episode is hosted by Brian Fabian Crain & Martin Köppelmann.
Transcript
That was weird. Like, I didn't expect NFTs to take off. I didn't expect meme coins to take off.
I think the big innovation in blockchain is actually that you can create programmatic market makers through AMMs and all these kinds of clever curves that eliminate a lot of the layers that you have in TradFi that are necessary to run TradFi-based trading.
My feeling with MEV was that we need to maximize competition. So users always have the option to go to the best source
for their trade, whether that means that
validator is maximum
sandwiching and then giving everybody a rebate.
That could be one model that actually works.
If there's truly an exploit
and you continue running the chain,
even if you allow DeFi liquidations to run,
they're mixed in with exploited transactions.
As Ethereum folks, I don't know if you were around
for the DAO hack,
the only reason they were able to deal with a hard fork
is because it was locked.
All those funds were locked up in a smart contract
that couldn't exit.
if they started getting mixed with a whole bunch of things, like, it's just impossible to unravel.
You can't roll back the real world. There's action that's taking place in the real world based on the
chain state. Circle is sending funds out based on like mint and burns, right? I think once you have
four clients, you could say that the probability of a bug in three is so much smaller than the
probability of a bug in two that it's fine for one to be down. And then you can do this kind of
maintain some liveness while one is down and rotate. Welcome to Epicenter,
the show which talks about the technologies, projects,
and people driving decentralization
and the blockchain revolution.
I'm Brian Crain.
And today I'm joined by my guest host,
Martin Köppelmann, who is the founder of Gnosis.
And we have a very special returning guest today,
Anatoly, the co-founder of Solana.
Of course, Solana needs no introduction.
So we're excited today to talk,
you know, get into the weeds a little bit
of where Solana is at, what's coming for Solana.
some of the challenges.
So I'm really excited for that.
And now, just before we go into it with Anatoly,
we want to share a few words from our sponsors this week.
If you're looking to stake your crypto with confidence,
look no further than Chorus One.
More than 150,000 delegators,
including institutions like BitGo,
Pantera Capital, and Ledger trust Chorus One with their assets.
They support over 50 blockchains
and are leaders in governance on networks like Cosmos,
ensuring your stake is responsibly managed.
Thanks to their advanced MEV research,
you can also enjoy the highest staking rewards.
You can stake directly from your preferred wallet,
set up a white-label node,
restake your assets on EigenLayer or Symbiotic,
or use the SDK for multi-chain staking in your app.
Learn more at chorus.one and start staking today.
This episode is proudly brought to you by Gnosis,
a collective dedicated to advancing a decentralized future.
Gnosis leads innovation with Circles,
Gnosis Pay, and Metri,
reshaping open banking and money. With Hashi and Gnosis VPN, they're building a more resilient,
privacy-focused internet. If you're looking for an L1 to launch your project, Gnosis Chain offers
the same development environment as Ethereum with lower transaction fees. It's supported by
over 200,000 validators, making Gnosis Chain a reliable and credibly neutral foundation for your
applications. GnosisDAO drives Gnosis governance, where every voice matters.
Join the Gnosis community in the GnosisDAO forum today.
Deploy on the EVM-compatible Gnosis Chain, or secure the network with just one GNO and affordable hardware.
Start your decentralization journey today at gnosis.io.
Cool. Well, thanks so much for coming on, Anatoly. It's really great to have you back.
Yeah, thanks for having me.
So I wanted to start off with, so the original Solana vision, right, from, you know, now it's like seven years ago or so,
was to have a blockchain at NASDAQ speed.
And, you know, at the time, Solana was really pretty alone in trying to build a blockchain, you know,
a single chain, high throughput, maximum performance speed.
At the time sharding was like the hot thing.
when it came to how to scale blockchains,
and you guys were kind of alone.
Of course, today, you know, Solana has gotten lots of traction.
And also the idea of a single high-throughput chain
is something that's gotten a lot more traction
with a lot of other companies pursuing something similar.
But just sort of zooming out,
like when you think back to your vision at the beginning
and where it's at right now,
how do you feel about it?
What do you think are some things that, you know,
happened like you planned?
what are some things that maybe haven't worked out?
Yeah, I mean, a lot of stuff kind of came true that I thought would,
but just in different order.
Like the order was unpredictable,
and the applications were also unpredictable.
You know, we thought that trading was going to be a really important use case,
but we didn't think it was going to be the most important use case.
I think to me that's pretty obvious,
that like execution of launching assets
and execution of the trades between them
is what's driving most of the volumes
and revenues across all the applications
and layer ones and layer twos.
This is where all the fees come from.
And if you don't have revenues,
no matter how many tokens you launch,
you can't afford to pay all the L6 engineers.
Somebody somewhere has to figure out
how to make revenues.
and I think that came true.
What surprised me was how slow TradFi
was adopting this stuff.
I expected like stocks and all of these things
to be the main usage.
But right now it's basically like gaming.
Like it's NFTs and meme coins are like,
if you look at Twitch streamers that are streaming trading,
it's exactly the same content as when they're streaming mobile gaming.
any kind of game. It is effectively gaming. It's just kind of
PvP lootbox. This is kind of how I think of it. That was weird. Like, I
didn't expect NFTs to take off. I didn't expect meme coins to take off. But they're
using the same technology. It's the same kind of escrow auction processes
on-chain. I thought central limit order books would be the killer way to
run markets on-chain. But I think the big
innovation in blockchain is actually that you can create programmatic market makers through
AMMs and all these kind of clever curves that eliminate a lot of the layers that you have
in TradFi that are necessary to run TradFi-based trading. And that's pretty
interesting. And even if the AMMs are less efficient, the costs that you spent on professional
market makers like the Citadels and Jumps is
so large that you might actually, as an early-stage company, be better off using
an AMM, to be honest, versus the more professional TradFi approach. That was unexpected,
and I think that's a legitimate innovation. I think where you see the explosion of assets
and how they're traded, it's primarily through automated market-making approaches,
like curves and bonding curves and AMMs and stuff like that.
I would like to jump in on a topic.
You hinted at that, and I think people have very different opinions about it.
What makes a blockchain valuable?
Where does... or, you mentioned fees and said, yeah, somewhere there needs to be fees.
And I think at least in Ethereum land, there is kind of this tension
between: should ether just be money, and somehow that's where the value comes from, or should
there be transaction fees? It seems like you have a very clear answer here.
I'm a traditional conservative here. You have, you know, discounted future cash flows. That gives stuff
value. There's exceptions to that, but I think they're the outliers. You can't engineer for them.
This is the problem that I have with, like, oh, Bitcoin is so successful. Therefore,
we must build a better Bitcoin.
I don't understand the engineering reasons
why Bitcoin is successful.
So therefore, I cannot build a better Bitcoin.
So would it be fair to say the number one,
or one absolutely crucial success metric for Solana
is overall fees?
Yeah, I think the network needs to have enough fees
to pay for all the engineering effort
and the validators and all this stuff.
If it doesn't have that, I think it will eventually die.
That's kind of my belief.
So where do you think fees will, what will be the scarce thing that people will pay fees for?
So, I mean, generally speaking, is it just general block space, or would you say block space in general will be more or less free,
and it's specifically congested block space?
Yeah, I think the value that these systems provide is they have state that has an economic opportunity, with a cost and a prioritization to access that state.
And this is where you can charge more than the hardware and the bandwidth and all the nuts and bolts.
And if you're purely selling hardware, you know, a thousand-times-replicated amount of storage or whatever, you can only charge so many
multiples of the cost of the hardware, like maybe 10x.
Otherwise, a competitor will just underbid you.
And you will lose this way because hardware just constantly keeps getting
cheaper and cheaper and your dominant costs are always going to be the people.
Right?
Like, it's just at the end of the day, you got to pay for all the engineering stuff.
So this is where I think, like, probably again,
why Solana has such a different roadmap than Ethereum,
is that I think that the network cannot survive
without the execution layer also paying for all the
costs to build the data availability and all this other stuff.
The execution layer is where you can charge real fees
based on contention, and everything else has to support that.
So the, I mean, what you just said,
kind of priority access to
economically important
content
kind of MEP
right? Yeah, absolutely.
So one
theory
people have been discussing is that
MEV
was definitely in Ethereum for a while
it was very much captured by the chain
and by validators
but to some extent now it seems to be moving more towards applications
or in principle applications have the power to add extra rules on top
and kind of protect their MEV.
Do you see that as a danger for Solana in that case?
No, no. I think the goal for Solana is...
So, there's designs for application sequencing.
I actually proposed one a while back
called program writable accounts.
I don't know, Brian, if you were involved
in shooting down that SIMD.
But basically, the idea is that I felt that it's important
for apps to be able to set the cost
to take a write lock on their state.
So that economically important state,
that's important to access,
if the application can say, hey, to be able to write to this state,
because access really means the right to modify it, to take that offer,
I want to set the fee, and have that fee be application-specific
and dynamic and controlled by the business logic of the app.
That would eat into some of the fees that the chain captures,
but what's good for the goose is good for the gander, basically.
That system would preserve atomic composability between applications,
and all the cross-app atomic arbitrage between apps that is extremely profitable,
that you cannot remove, right?
that you cannot remove, right?
If each application then moves off chain and runs their own L1 or L2 or whatever,
their own separate environment per app,
then you lose that atomic composability and you effectively lose those revenues.
So what I want to see is the L1, Solana, as one giant atomic state machine,
its revenue mostly driven by the opportunities that arise from having everything in one giant state machine that's synchronous, that's fast, that's as cheap as possible, and then give the applications the tools to capture whatever fees they want.
And they can tune them, set them higher, lower, distribute them to the token holders, distribute them to the makers instead of takers, you know, prioritize cancels over other orders.
So a bunch of whatever they want to do, I want to leave that to the application layer.
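To make the mechanism concrete, here is a minimal sketch, in Rust, of the program-writable-accounts idea as described above: the application prices the write lock on its own state and captures the fee. All names and numbers are hypothetical; this is not Solana runtime code or the actual proposal's API.

```rust
/// Application-owned state plus the business logic that prices write access.
struct AppState {
    captured_fees: u64,
    /// Fee (in lamports) the app currently charges to take a write lock.
    /// The app can retune this dynamically from its own business logic.
    write_lock_fee: u64,
}

/// Runtime-side check: a transaction may modify app state only if it pays
/// at least the app-defined fee, which the app then captures and can route
/// however it likes (token holders, makers instead of takers, etc.).
fn take_write_lock(state: &mut AppState, fee_paid: u64) -> Result<(), String> {
    if fee_paid < state.write_lock_fee {
        return Err(format!(
            "write lock costs {}, paid {}",
            state.write_lock_fee, fee_paid
        ));
    }
    state.captured_fees += fee_paid; // the app, not the chain, captures this fee
    Ok(())
}

fn main() {
    let mut state = AppState { captured_fees: 0, write_lock_fee: 5_000 };
    assert!(take_write_lock(&mut state, 1_000).is_err()); // underpriced: rejected
    assert!(take_write_lock(&mut state, 5_000).is_ok());  // pays the app's price
    println!("app captured {} lamports", state.captured_fees);
}
```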
And this is, I think, where Max Resnick and I really mind-melded.
Because at a gut level, I thought that we need multiple block producers at the same time
to create a dynamic market that's competitive for people to accept these transactions.
And he kind of came from a more economics research angle,
where he saw that if you don't have multiple proposers,
then applications cannot really build these value capturing systems at all
because the validator will be able to effectively censor.
And if you're censoring all the inputs
that are going into this application-specific sequencing thing,
then you can effectively control the fees that the app sets anyways.
So what you need is like you need both of those pieces.
You need the hooks in the system for apps to be able to set their own fees.
and you need the competition between block producers.
And I'm 100% on board with all those changes,
even though on the surface they seem to give up some of the economics of the apps.
But the goal is if we have all the apps in one giant system,
the economics between all the cross-application arbitrages are going to be way more than enough.
So with regards to MEV, I think you've always had,
you know, in many ecosystems, people have been like, oh, MEV is a bad thing.
You need to minimize MEV.
I think you've always had a more positive view on MEV. Like, how do you view MEV?
And how do you think MEV is going to develop with those changes in the future?
Look, a market maker submitting their cancel before everyone else is MEV.
They're getting priority access to state ahead of everyone else.
They're paying for it to the exchange or to the whoever, right?
that is MEV, but that creates better markets because then the market makers can have tighter spreads.
So my feeling with MEV was that we need to maximize competition.
So users always have the option to go to the best source for their trade.
And whether that means that the validator is maximum sandwiching and then giving everybody a rebate,
that could be one model that actually works.
That's fine, as long as it's competitive and the
users can make that decision.
And I suspect that each individual users
are unlikely to make that decision unless they're a pro.
But like Phantom, that wants to serve the best possible
blockchain to their users
that's competing with Solflare, with Backpack,
will make the informed decision to go with a particular
solution for how they route their transactions and stuff.
But we need competition for all that to work out.
So my view is that, like, this stuff isn't
all that different than any other market.
I'm selling block space. I'm buying block space.
For me to get the best offer,
I need a healthy market of buyers and sellers.
That's kind of the basics that we need.
So in Ethereum, there's proposer-builder separation, right?
It was, I think, chosen a few years ago, and it's the state today.
What do you think about Solana?
Do you think we will also end up with proposer-builder separation?
And is it something desirable?
I don't know if there's anything we need to enshrine for that to work.
Like it's kind of happening right now.
I don't know if there's changes to the, like if you need like enshrined bundles,
I don't have any objections to those.
What I do care about is concurrent block producers.
So multiple concurrent proposers or multiple concurrent leaders that we have like two validators,
one in Singapore, one in New York
that are running there, that are both
accepting transactions, and the user
has the option. Do I send to the
closest one in New York, or maybe
one that's further away that's offering a better deal?
And if I'm
a market maker and I need my cancel to
land, what
that means is that I can
send to the one that isn't censoring,
and the network,
the actual enshrined protocol
needs to support an ordering
mechanism where I can pay the network
within this block, I'm paying a fee to go first.
So does that make sense?
Yeah, I think that's definitely one of the topics.
I know it's a huge focus for you.
And I think currently in the Solana developer community, it's this multiple concurrent proposers idea.
So for you, the main reason why you want that is because you want competition between validators,
and you want basically less ability for validators to, you know,
gouge the users?
Or, like, it's also a beautiful thing, because this is a way for a truly
decentralized global blockchain to beat TradFi, to beat NASDAQ, on actual pricing. And there's a very
subtle thing that changes here: you can think of the network, or
any exchange or any of these systems, as reflecting the world state, and there's an error
between that state and the real world.
And constantly people are trying to reduce that error by submitting trades.
They see something, a price that's offered, they have information in the real world.
They're trying to close that error.
And if like some event happens in Singapore and there's a local leader there in Singapore,
the latency for me to submit that trade is much, much shorter than to send it all the way to New York.
So for Solana as a blockchain to beat NASDAQ, we need a block producer
in every spot in the world that has any economic activity,
so that the user can submit that transaction locally and cut that latency.
But I want to understand that, because at least in Ethereum,
for an upcoming block, you have exactly one proposer.
And that one might sit in New York.
They might sit in Singapore.
So in Solana, it sounds different,
or is it still the same concept?
Right now that's exactly how Solana works,
but there's no reason where you can't have two proposers or N.
Okay.
But then let's say, okay, so now at the same slot height, block height,
you have two proposers, one in New York, one in Singapore.
So now, let's say they have two conflicting transactions, one in each.
Then how is this merged, or how is this conflict resolved?
There's a deterministic merge.
So we need that.
It has to have that property.
And two, if I want to go first, I should burn some soul to be ahead of somebody else.
Will this merge be on the whole block level or on individual state?
I mean, probably.
You can think of it as a block that is a chunk of data, right?
One megabyte, two megabytes.
And the first half is written by Proposer A.
The second half is written by Proposer B.
The network receives the entire block
and then runs a deterministic merge
to compute the results.
I see.
But you can have any number of these proposers.
The main constraint is basically,
we think, we don't know until we test,
is how many shreds we can propagate through the network.
And shreds is our term for erasure-coded chunks of the block.
But it also means that if
the transaction hits your local proposer,
at that moment you don't know exactly what happens.
You then need to wait for kind of some second round,
or the merge round, essentially, to...
It's not a round.
Once you receive the data from the block...
From the others, right, then you can locally...
Yeah.
I see, yeah.
So the network could effectively...
The latency there isn't really lost
because as soon as the network receives the data,
it can vote on confirming that the data is landed.
So as soon as you see the confirmation,
you can compute the results.
And you don't need to wait for everybody
to compute the result and confirm that
because that part is deterministic.
Obviously, like, if you're a professional system,
like a custodian or a bridge,
you might want to run different boxes
with different clients, one Firedancer, one Anza,
and make sure all of them agree on the results.
Essentially,
the blockchain would then become a DAG, right?
Or, I mean, what is it, a directed acyclic graph?
No, it's different from a DAG.
DAGs resolve their ordering based on the next producer,
which then decides the ordering of the previous transactions.
Here, you have literally concurrent leaders.
It's like leader one
writes the first page, leader two writes the second page,
and then you mix them together.
The shuffling here is enshrined and deterministic,
and that's different from a DAG.
It's not dependent on the next producer.
So if you have these two proposers
and let's say there's some arbitrage opportunity,
and now people,
with both proposers,
put in transactions that both try to capture
the same arbitrage opportunity,
then basically it will get
merged, and then whichever is the first one actually gets executed.
Yep.
So the highest-burn one, the one with the highest burned fees, will execute first.
Okay, okay, okay.
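As a rough illustration of the deterministic merge just described, under the stated assumption that ordering within the merged block is by burned fee, here is a toy sketch in Rust; the fields and tie-break rule are invented for the example.

```rust
#[derive(Debug)]
struct Tx {
    id: &'static str,
    burned_fee: u64, // SOL (in lamports) burned to bid for earlier execution
}

/// Deterministic merge: every node that sees the same two half-blocks
/// computes the same total order, highest burn first, ties broken by id.
fn merge(proposer_a: Vec<Tx>, proposer_b: Vec<Tx>) -> Vec<Tx> {
    let mut all: Vec<Tx> = proposer_a.into_iter().chain(proposer_b).collect();
    all.sort_by(|x, y| y.burned_fee.cmp(&x.burned_fee).then(x.id.cmp(y.id)));
    all
}

fn main() {
    // Both transactions chase the same arbitrage from different regions.
    let new_york = vec![Tx { id: "arb-ny", burned_fee: 9_000 }];
    let singapore = vec![Tx { id: "arb-sg", burned_fee: 7_500 }];
    for tx in merge(new_york, singapore) {
        println!("{tx:?}"); // arb-ny executes first: higher burn
    }
}
```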
What's the hardest thing about making multiple concurrent proposers happen?
What are the problems that you need to solve?
The stability of consensus, basically. Like, it's just, the implementation is
prone to outages, I would say.
I don't think it's design-wise any worse for the worst case,
because consensus mechanisms all deal with a single leader
that produces two different blocks.
They all have to resolve that.
Even if you slash that leader, right,
you still don't want to have an outage in case that you have a bad leader.
So the worst thing that should happen is a performance degradation,
and slashing for performance degradation is fine,
because then you reduce them in practice.
But you cannot, you still have to deal with it no matter what.
So in an environment where you have two leaders,
the network will see four different views potentially
of the results of the block.
Only leader A, only leader B, neither or both.
And the deterministic shuffle will be different for each one, right?
If you only see data from leader A,
then you just take Leader A's block.
If you see both, then you shuffle.
If you see neither, then you skip.
And this is the complex part.
What you want to do is you want to make sure that there is no partition in these views.
So every time they vote, they take the happy path.
It doesn't matter what partition they see,
as long as everybody in the network sees the same thing.
If everybody sees that Leader B failed, that's great.
It means that they all agree on the vote
and that you continue doing
on the happy path of the
consensus rules.
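A small sketch of the four views just described and the rule applied to each; the shuffle closure stands in for the enshrined deterministic merge, and everything here is illustrative rather than actual consensus code.

```rust
/// The four ways a node can see a slot with two concurrent leaders.
enum SlotView<B> {
    OnlyA(B),
    OnlyB(B),
    Both(B, B),
    Neither,
}

fn resolve<T>(view: SlotView<Vec<T>>, shuffle: impl Fn(Vec<T>, Vec<T>) -> Vec<T>) -> Vec<T> {
    match view {
        SlotView::OnlyA(a) => a,               // take leader A's block as-is
        SlotView::OnlyB(b) => b,               // take leader B's block as-is
        SlotView::Both(a, b) => shuffle(a, b), // deterministic merge of both
        SlotView::Neither => Vec::new(),       // skip the slot
    }
}

fn main() {
    // Stand-in for the enshrined shuffle: combine and sort deterministically.
    let det_shuffle = |mut a: Vec<u32>, b: Vec<u32>| {
        a.extend(b);
        a.sort();
        a
    };
    println!("{:?}", resolve(SlotView::Both(vec![1, 3], vec![2, 4]), &det_shuffle));
    println!("{:?}", resolve(SlotView::OnlyA(vec![7]), &det_shuffle));
    println!("{:?}", resolve(SlotView::<Vec<u32>>::Neither, &det_shuffle));
}
```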
And Solana, like,
we built our own consensus.
It's very complicated and a pain in the ass to work with.
But the properties that it has
that previous systems didn't have
are that it's bandwidth efficient,
that you have a block at every point in time,
that there are no stalls waiting for rounds
to finish before the next block starts.
Now you see systems like,
I think, Aptos and Sui,
that have gotten that to work with more modern
consensus algorithms. There's a whole team
from Zurich optimizing and
basically getting rid
of all my technical debt
to have, like, a better
version of consensus on Solana.
And again,
we can maintain the bandwidth efficiency
and we can deal with all the partitions.
Then talking about
efficiencies, what are
currently the bottlenecks?
What's currently the bottleneck for scaling Solana?
It is basically,
you've seen, like, I think, like Eclipse
and a bunch of other kind of Solana SVM-based layer 2s
tune up the compute units.
So basically the bottleneck is making sure that...
You need to help me here.
So say again.
There's a whole bunch of, not Solana layer
twos, but layer twos that use SVM.
Some may even be Solana layer twos.
I don't even know, you'd have to look at the marketing.
I don't know.
I don't even know which state root they're looking at.
Is it Ethereum's or Solana's?
It doesn't matter.
I treat them all as competitors.
Because if the fees accrue there and are not accruing on mainnet,
they're not paying for mainnet development.
And that's fine.
We're all, it's all good.
We're all building open source software.
so it's like Red Hat versus Ubuntu.
It's a healthy competition.
But they've been able to increase their block capacity,
I think, by factors of five to ten.
So basically the blocker is just testing.
It's just making sure that when we,
when Anza or Firedancer increase the capacity,
that there isn't some,
I call them, like,
denial-of-service attacks.
There isn't some metering problem in the VM or in the block or whatever
that an attacker can exploit to create a block that takes 10 times more to process than normal.
These are basically, like, the worst-case kind of scenarios that are a pain in the ass to find.
Let me just repeat because I'm not that deep into Solana.
So you're saying, yes, there are
versions, essentially, of Solana, the SVM, that already run at 5x the capacity of
what Solana does right now, and it kind of works,
but you are slightly more conservative and...
We have to be, right?
Those are not decentralized, right?
They're basically roll-up-like.
But even... It doesn't matter.
They're earlier, they can take more risk.
That's the difference.
If we were just starting out,
I would tune the network to 10x performance
and deal with the fires.
Like, that's the difference.
So, I mean, the other point you were making is that you would say kind of L2s are not really interesting, you would say, to Solana from an economic or fee perspective.
They're not interesting to me.
Yes.
There's people that they're interesting to, and that's fine.
But what I want is mainnet to succeed, right?
So to me, what matters is activity on mainnet, trading on mainnet.
So essentially the whole idea,
kind of the Ethereum idea, that somehow you would provide blob space,
or kind of data availability or transaction ordering,
and then others use it for execution, that's not something you believe in?
If you could do ordering, then yes. But there isn't,
I think the based roll-up thing is still not fully defined.
And if the chain is doing ordering and DA, the based roll-up, the only difference is then like a different virtual machine or something like that.
I do very much agree here.
So I am also kind of in the Ethereum camp of: if rollups, then they should be based rollups.
Because I feel like if Ethereum is trying to sell something, it cannot just be generic data availability, because that's a
commodity. It needs to be specifically
transaction ordering.
We already have based rollups, then.
Like, there's already, like, ZK-proved
Merkle trees, or,
like, classically proved
Merkle trees, with proofs for the leaves.
To me, the interesting
thing about based roll-ups is
that they kind of
still have this atomic
composability or you can have
atomic transactions that touch L1
and L2 state. And
the advantage
is, I think it's fair to say,
okay, you still have some validators
that have all the state and then it almost doesn't matter
whether that's on the L2 or on the L1.
But if you care about being able to have lower-performance,
or kind of lighter-weight, validators,
I mean, well, here's my Eth node sitting next to me
that runs Ethereum, and I can still do this from home.
If you care about that, then it does make sense to say,
okay, some state that kind of exists,
but it doesn't have to be available
to anyone who is
running a validator on the L1.
There's already sort of stuff like that
running on Solana. So
Metaplex built this thing called compression.
I don't know if you saw it, this terrible marketing
name I came up with.
But it's basically a Merkle root
that's on chain, and you can
atomically prove that a piece of
data exists in this tree
and another piece of data exists in a totally separate tree,
push them to the L1 state,
run a computation on them through a program,
like a token swap,
and then push them back into the trees,
all of this in a single atomic transaction.
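As a rough sketch of the mechanism just described, here is a toy Merkle-proof check: only the root is "on chain", and a transaction proves a leaf belongs to the tree before a program acts on it. A standard-library hash stands in for the real cryptographic one; this is not Metaplex's actual implementation.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Toy hash standing in for the real cryptographic hash.
fn h(data: &[u64]) -> u64 {
    let mut s = DefaultHasher::new();
    data.hash(&mut s);
    s.finish()
}

/// Verify a leaf against the on-chain root using sibling hashes.
/// `is_right_child[i]` says whether the running node is a right child.
fn verify(root: u64, leaf: u64, siblings: &[u64], is_right_child: &[bool]) -> bool {
    let mut node = leaf;
    for (sib, right) in siblings.iter().zip(is_right_child) {
        node = if *right { h(&[*sib, node]) } else { h(&[node, *sib]) };
    }
    node == root
}

fn main() {
    // Build a 4-leaf tree off chain; only the root lives "on chain".
    let leaves: Vec<u64> = (0..4u64).map(|i| h(&[i])).collect();
    let n01 = h(&[leaves[0], leaves[1]]);
    let n23 = h(&[leaves[2], leaves[3]]);
    let root = h(&[n01, n23]);

    // Prove leaf 2 exists: siblings are leaf 3, then the (0,1) subtree node.
    assert!(verify(root, leaves[2], &[leaves[3], n01], &[false, true]));
    println!("leaf verified against on-chain root");
}
```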
So the state that is effectively cold can be offloaded
and run programmatically in these Merkle trees.
And the primary thing that they're used for is, like, NFTs.
Like, you mint your 20,000 NFTs,
you can instantly mint them in one batch, or one Merkle tree,
and you just register the leaves with the chain
so wallets and stuff can pick them up.
And there's now a ZK-based approach
to make the proof smaller and a bit more composable.
But that part is like the easy part.
I think the based rollup is a
bigger piece of technology than that.
It involves developers defining their virtual machines
and the state transition function that connects their VM to the state root on the L1.
My problem with those is that there's just no need for that many VMs.
Like, I'm sorry.
Frankly, I would also, kind of, for the based rollups, say just do EVM and kind of just...
But then what's the point of having, like, a dozen different EVM-based rollups?
The point might be to reduce, to kind of still allow people to, I mean, run relatively lightweight L1 validators.
And kind of say, if you want to do the full thing, then you run the L1 and the L2s and have the full state available.
But how is that any different from, like, I have a program that has a dedicated
circuit, like a ZK circuit, and I just prove that program?
Like, I don't need any VM, general purpose EVM.
I have, like, my minimized, whatever, Lego piece of a smart contract.
I just prove that smart contract only, and now I have a way to route data through that thing,
through the L1.
So you have your transaction, approves the state, calls the contract, right?
With the prove, calls another one, and then like a series.
and then it's done.
So that's effectively like the old stateless, like, design, right?
Like, what is the difference between that and a based rollout?
Like, why aren't based roll-ups just simply smart contracts?
I think you can view it as that.
Yeah, then, like, those already kind of exist, with some traction.
Like, they're useful, I think, if you're trying to airdrop to a very large user base,
that kind of thing. People have found traction with that. But they haven't found traction with, like,
what I think is kind of interesting, that you see works well, which is, like, Jupiter, Raydium,
like, even Pump. All these things are all really, really tightly tied
together in the execution.
And you can see in the transaction that a single transaction will hit like five different
markets all at the same time.
And we haven't seen like anyone be able to build a ZK based or something that is truly
off-chain that still plugs into the atomic execution piece for trading.
So let's talk about Firedancer and, sort of, Solana clients, right?
So Firedancer has been in the works for a long time,
aiming to speed up and remove a lot of performance bottlenecks in the client.
What are your thoughts on, you know, what the impact will be on the network?
And you see, in the future, you want to see a bunch of different clients running on mainnet at the same time?
Do we need a bunch?
I think four is what people... like, four or five is what you kind of need. Technically you should have
five for BFT, so you can do maintenance on one and then you still have the reliability of one going
down without a liveness failure. That's, like, the dream. But you just get an astronomical
improvement on safety when you have two. Because it's just such a... this is the thing that
keeps me up at night: some bug that auditors and testing and all that stuff missed,
that's a critical vulnerability,
that's a zero day that could steal everyone's funds.
That's a scary part.
You have two separate teams that built the same code.
It's very unlikely that they would have the same bug
in both code bases that can be executed the same way
at the same time.
So you have some redundancy there
that's just really, really critical for these systems.
But if Firedancer now, you know,
can process more transactions,
then, I mean,
I would expect that all of the validators
will switch to that, no,
because they will earn more fees.
No, the limits
are set network-wide to make sure that
both clients can run it.
Okay.
Agave is not a slow client.
You can actually just
you can literally just remove our limits
and then run it at like 10x capacity
right now.
There's no...
the limits are there
to kind of slowly...
Like, there's no fire to increase block space.
Because even during the crazy,
the worst-case day, with, like,
the Trump coin launching,
the median fee was, like,
15 cents a transaction, and this is when the network
is doing 40 billion. So that's high,
and if that was sustained,
it would become a fire. But that being the
worst case,
at, like, peak demand that was,
you know, 10% of NASDAQ
volume, that's actually really, really good in terms of fees.
So there isn't a fire.
There is a, like, there's a reason to increase block space that I think is more long-term.
And what I care more about is that, like, every release bumps block space by 20, 40%,
versus getting a 10x in two years.
Like, as long as the developers are pushing themselves, like, hey, what is
the bottleneck that is keeping them up at night, that could be exploited or whatever.
They write the tests, they get comfortable with it, and then they tune that parameter up.
This is what I want to see, more so than, like, let's 10x and then see what breaks, and have,
like, a bunch of weekends where
everyone has to go fight fires.
You mentioned the security improvements from having a second client.
So how would the network behave in different scenarios?
Let's say 10% of the validators would run,
let's say Firedancer is new and only 10% run it.
We need more than 33% run by the minority client.
So it doesn't matter which, but as long as the smallest client is more than 33%,
then the network would halt.
Right, it would halt.
Okay, I see.
And that's okay.
We're not there yet.
Yeah, yeah.
And you see this as preferable?
So you would say if there's a bug, then it should halt.
Yeah.
Yeah. And people can quickly fix it, and it's egg on everyone's face, and it sucks, and it should never happen. And there's a lot of effort put into making sure it never happens. But if there is an exploit, then yeah, please halt. Right away. And then people can go fix it. That's the preferable outcome to anything else. Because, like, I mean, if there's truly an exploit and you continue running the chain, even
if you allow, like, DeFi liquidations to run, they're mixed in with exploited transactions.
Forking that state, the resulting state is a nightmare.
It's worse than an 18-hour outage or whatever.
Like, as Ethereum folks, I don't know if you were around for the DAO hack,
the only reason they were able to deal with the DAO with a hard fork
is because it was locked.
All those funds were locked up
in a smart contract that couldn't exit.
Once, if they started getting mixed
with a whole bunch of things,
like it's just impossible to unravel.
You have like,
somebody launches a meme coin,
the attacker launches a meme coin,
buys it with the funds.
It's mixed in with a whole bunch of liquidity.
Like, you cannot like really untangle that
in any sane way.
You have to like actually roll back, I think,
you know, and reset from an earlier state,
probably. Yeah, to me
that is far worse than just a
hard liveness failure.
Because,
like,
you can't roll back the real world.
There's action that's taking place in the real world
based on the chain state.
Like, Circle is sending funds
out based on, like,
mints and burns, right?
You can't, like, tell them, hey, go
roll back these transfers.
It's better to halt.
This is, like, I think,
in all these cases, like, it's basically better to halt.
I think once you have four clients,
you could say that like the probability of a bug in three
is so much smaller than the probability
in, you know, bug in two that it's fine for one to be down.
And then you can do this kind of, maintain some blindness while one is down and rotate.
But yeah, it's terrifying.
This is, like,
the worst nightmare.
Well, I want to ask another, maybe a little bit more on the scaling thing before we go to security.
What do you think is possible here?
Like, if you think, like, I don't know, five, ten years ahead, where do you want to see Solana in terms of throughput?
And do you think it's feasible to basically have Solana scale to
such a level that it can absorb, you know, kind of satisfy all the
demand of the world in terms of block space?
Our blocks right now are, like, two to four
megabytes in size. The New York Times website is, like, 20 megabytes. My very mediocre goal is to get
blocks to the size of the New York Times website. Like you don't even need the crazy 2D erasure
code sampling for light clients when your blocks are the size of the New York Times website.
People can just download full blocks to their phone and nobody, like, you don't need these like
next generation technologies yet. So, and that would be like a 10x increase. And is that enough
to handle the entire world? I think somewhere between 10x and 100x. My belief is that, like,
you look at Google,
when their estimated searches
per second is somewhere like around
50,000 to 80,000,
that's a fully globally scaled
web application that is
fully permeated the entire world that people
use constantly.
And people use finance
a lot less often than that.
So, like, 100,000
TPS in a single L1
would cover the 99%
of the most important
financial transactions for sure.
So somewhere between like, you know,
one 10x and another 10x,
I think that is actually probably as far as
crypto needs to scale.
It sucks to put limits on it and people feel like,
oh, you're being so pessimistic.
But like,
I pray that all my competitors are designing
for infinite demand.
That is like the worst
engineering trap.
And I don't even tell people to 10x the capacity.
What I tell them is like just 2x this year.
Just think incremental improvements a year that you can ship confidently.
And then 2X it again next year.
And just, like, that's where you get to the scale that's needed for demand.
So one project, I know, that has gotten a little bit of attention,
I think still very, very early, is DoubleZero,
where they're trying to create this, you know, private fiber
network. Do you think that's going to be needed?
Yeah, it's not really fiber.
And maybe I can explain to folks how it works and why it's not scary.
Basically, there's nothing you need to change about fiber.
The cables are all basically the same for like the last, you know, 30 years.
And light is very fast.
What the difference is, is the signal processing on each end,
the switches that handle that load. The way they're designed for throughput is with big buffers,
and those buffers increase latency. And finance, and us, right, effectively part of the finance world,
we want to have the lowest latency possible. So what we need is different kinds of switches. So every
data center in the world, you can just go and tell them, hey, give me this switch, I want a
super-low-latency switch. I want this dedicated lane. And somebody has to go do that work, to go do
those deals, talk to the people and stuff. But in and of itself, it can be very decentralized.
All these switches are in different parts of the world and different data centers under different
ISPs that can all be owned by different providers or whatever. So that part could actually be very
decentralized. The fiber, no one's going to change that. It's already laid. So a bunch of the stuff
is very much,
like,
in the spirit of the internet
and crypto, I think.
But it is one protocol,
and the goal is to use
DoubleZero as an overlay,
just so that, if the network is
running on the fast happy path,
all the messages, like,
when we need to send out votes,
can all go through the multicast
super-fast path of DoubleZero.
If that fails,
they're still going to arrive
over the internet,
you know, four, five hundred milliseconds later.
But what we want is for the happy path
when everything is working for things to be as fast as possible.
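A tiny sketch of the overlay behavior described here, with made-up latencies: the vote goes out on the fast multicast path when it is available, and otherwise still arrives over the public internet.

```rust
/// Which path a message ends up taking; purely illustrative.
enum Path {
    FastMulticast,  // DoubleZero-style low-buffer switch path
    PublicInternet, // ordinary internet fallback
}

fn send_vote(fast_path_up: bool) -> (Path, u32) {
    if fast_path_up {
        (Path::FastMulticast, 20)   // hypothetical happy-path latency in ms
    } else {
        (Path::PublicInternet, 450) // the "four, five hundred milliseconds" fallback
    }
}

fn main() {
    for up in [true, false] {
        let (path, ms) = send_vote(up);
        let name = match path {
            Path::FastMulticast => "multicast overlay",
            Path::PublicInternet => "public internet",
        };
        println!("vote delivered via {name} in ~{ms} ms");
    }
}
```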
Cool.
On the topic, then, of security:
on Ethereum, over the many years,
there have been kind of a number of huge hacks
and ways where people lost money.
How has this been so far on Solana?
What have been the biggest security incidents?
I think Wormhole was the biggest one,
and that was a really unfortunate bug.
This bug didn't exist in Wormhole v1,
and then in the second version,
they introduced this bug where you could fake the proof
that there was a mint on the other side,
and an attacker exploited it
right when they posted the fix
for it in their public GitHub.
So the attacker was watching their
GitHub and waiting for the balances
in Wormhole to increase
up to the max, you know, as much as
they could, before they
exploited the bug. So this is, like, a
professional attacker. I don't remember
for sure, but it might have been
the Lazarus Group.
That sucks. Like, this
stuff sucks.
It's hard to
really fix.
There's formal verification
companies that, I think, similar
to the approach of how they formally verify
stuff on Ethereum,
typically you
recompile the code through LLVM,
and you can use the intermediate
output, and there's formal
provers that can run on top of that to
test some properties.
I have thought
of, like, adding...
the nice thing about
Solana is the code is separated from state,
so you could actually
load, in theory, separate programs that implement the exact same state transition function,
run them both, and then see if they disagree, and then abort.
Do we need that level of redundancy?
So we'd have redundant implementations for smart contracts too,
and obviously the user would need to pay for twice as much compute and stuff,
but, like, compute is one of the easiest, it's much easier to scale than bandwidth.
So, like, to me, that's almost like a no-brainer if it would help.
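A minimal sketch of that redundancy idea, assuming two independently written programs for the same state transition; the transaction commits only if both agree. Names are hypothetical, and this is not how the Solana runtime works today.

```rust
type State = u64;

// Two implementations of the same state transition, built by separate teams.
fn transition_team_a(state: State, input: u64) -> State {
    state.wrapping_add(input)
}

fn transition_team_b(state: State, input: u64) -> State {
    state.wrapping_add(input) // independently written, same spec
}

/// Run both implementations; abort the transaction on any divergence.
/// Costs roughly 2x compute, which is far easier to scale than bandwidth.
fn checked_transition(state: State, input: u64) -> Result<State, String> {
    let a = transition_team_a(state, input);
    let b = transition_team_b(state, input);
    if a == b {
        Ok(a)
    } else {
        Err(format!("divergence: {a} vs {b}; aborting"))
    }
}

fn main() {
    println!("{:?}", checked_transition(100, 42)); // Ok(142) when both agree
}
```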
So that's scary.
I don't think Solana is any safer than Ethereum for new code.
One advantage that's kind of weird is that, because of some idiosyncrasies of SVM,
people don't really write interfaces for smart contracts.
So there's no ERC20 interface.
There's an implementation of the token program, and everybody uses that program.
So it's as if you had one canonical ERC-20.
So that means that if you're a DeFi protocol and you only accept SPL tokens,
there's no way for the attacker to create a bad implementation of an ERC-20
and steal funds from your users.
So that has reduced a bunch of the kinds of attacks that you see
sometimes with, like, pool exploits, or, like, bugs in the ERC-20s, or
legitimately, like, people making bad ERC-20s to trick users.
And that has reduced, I think, a lot of the composability friction because then, like,
you can build a company that doesn't write any smart contracts at all.
You simply just reuse all these already pre-built Lego pieces and they're all
composable because everyone accepts the exact same implementations.
And that's been interesting to see.
So I don't know if Pump wrote their own bonding curve.
They might have.
But they didn't build the AMM, they didn't write the token contract;
all of those were standard.
Like, I think they use a Raydium AMM.
So that's kind of like one of the examples of that.
Yeah, in Ethereum, I think we had first seen this class of,
or kind of a bunch of, smart contract hacks.
But very recently there was kind of a new, very, very large hack
that was not related to smart contracts
directly, but rather to interfaces being compromised.
Because, I mean, the reality is, if you interact with a somewhat more complex program,
you go to some interface.
It essentially triggers a transaction that, on the interface, promises to do something.
But of course, unfortunately, if the website or the interface is hacked in some form,
it is absolutely,
or at least in Ethereum,
and I kind of would be curious about...
You're talking about Bybit?
For example, yeah. Or, I mean, for sure, yeah.
You're talking about the UI interface, right?
The user interface, not the programmatic interface of the company.
Yes, the user interface shows you're doing transaction A,
but to the wallet, it's sending some malicious payload.
And the question is now,
do you have any chance to understand, in the wallet,
what the state change of the transaction will be?
And, yeah, kind of, is there a realistic chance to?
I don't know, actually.
I imagine this should be possible to do on Ethereum,
but it's maybe easier on Solana,
because you know which accounts
are token accounts, and the implementations of those tokens,
they're all the same.
What I've been recommending to people,
and there's a project called Lighthouse Protocol,
is they add a guard instruction that aborts
if the resulting state
at the end of the transaction
is different than the user expects.
So this would also protect you from, like,
you write a transaction,
you think you're hitting an AMM,
the attacker does a program upgrade
that just steals your coins, right?
So they fake the simulation.
Like, this is kind of a classic attack.
You simulate your transaction looks fine,
you submit it,
but then something changes in the chain.
Attacker sets a bit, right, and does something different.
So to protect against that, you can add that guard transaction.
And then your cold storage system should have effectively rules and policies
implemented in the cold storage, not relying on the human to go look at the trusted display
and parse that string that actually checks for that guard transaction and checks that
the spending limits and all the policies that you want are not exceeded.
This is what I have been pushing people to do.
And you can kind of get there, I think, with Keystone Wallet,
they have a developer API where you can programmatically set some rulesets and stuff.
But yeah, I think these particular UI hack issues are solvable
through kind of more robust security policies around cold signing and stuff.
I don't know if simulation is much more complicated in Ethereum,
but if you know this is a cold storage system,
you know that it should only be doing simple transfers.
You know the accounts passed to it should only be token accounts.
You can then add guard transactions that assert all that and the expected balances.
And then when you hit the chain, it'll abort.
Attacker did something or screwed up.
The thing aborts, PagerDuty goes off.
Everyone figures out what happened, right?
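To illustrate the guard idea under discussion, a toy sketch: the wallet appends an assertion about the post-transaction state, and the whole transaction aborts if the assertion fails. The names are hypothetical; see Lighthouse Protocol for a real implementation of this pattern.

```rust
struct TokenAccount {
    balance: u64,
}

/// What the user was shown in the UI: a floor on the resulting balance.
struct Guard {
    min_expected_balance: u64,
}

/// Apply the transaction's effect, then check the guard; on failure the
/// whole transaction is rolled back, as an on-chain abort would do.
fn run_transaction(
    account: &mut TokenAccount,
    effect: impl Fn(&mut TokenAccount),
    guard: &Guard,
) -> Result<(), String> {
    let before = account.balance;
    effect(account);
    if account.balance < guard.min_expected_balance {
        account.balance = before; // abort: state is restored
        return Err("guard tripped: post-state differs from what the UI promised".into());
    }
    Ok(())
}

fn main() {
    let mut acct = TokenAccount { balance: 1_000 };
    let guard = Guard { min_expected_balance: 900 };
    // Malicious payload tries to drain the account instead of the promised swap.
    let result = run_transaction(&mut acct, |a| a.balance = 0, &guard);
    println!("{result:?}; balance still {}", acct.balance); // funds untouched
}
```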
I think that that is solvable with
the technology. The stuff that's really hard, I think, is just
stuff that's really hard, I think, is just
smart contracts in general
because the
more interesting ones are, like,
risk systems, like
Aave or perps, or any of these systems
that are not just purely trading.
They're managing risk. They have a lot of inputs like
oracles and
the attack vectors there are not
obvious, right? You have,
like, latency games and exit games
between the liquidation bots and the capital and the contracts.
Those are very, very hard to get right.
And proving systems can't help there.
But I think that these kind of high-level smart contracts
that manage risk, if they can scale,
they're probably the most disruptive part to traditional finance,
because this is the entire function of any bank or any fund
or anything is managing risk.
If you can automate it, and you can scale it up,
I think that's very, very disruptive and very valuable.
So with regards to smart contract security,
I think today in Solana,
a lot of smart contracts are upgradable
and it's also pretty common for smart contracts to be closed source.
Don't use those.
Don't use them.
Don't do it.
You have the power as a user
to not use those.
But if you do have to use them,
there's a difference between being an LP
into a closed source smart contract
versus trading in it
If you're trading in it, then use Lighthouse,
which will guard your transaction
if that closed-source contract does something wacky,
or there's an upgrade that happens
in the middle of your transaction;
you can actually protect yourself against that.
If you're an LP, you're moving your funds into that thing
and you're letting it custody your funds.
You should know who the hell you're giving your funds to.
Don't do that.
Look for open source contracts.
Look for formally verified smart contracts.
Look for like multisigs with time locks.
If they have to have a multisig for upgrades,
make sure there's time locks on those things.
Like there's a whole bunch of things you can do
to defend yourself and you should ask the companies that provide these services to go do that
and advertise that they are. But yeah, like, I think that part sucks and a part of it is, I think,
developers being lazy or probably not lazy, they're just, you know, limited runway, limited time,
trying to get traction. But I think as protocols mature, like, I think you really need to demand
for them to kind of put in the work to level up their opsec and the security and the
risk they expose their users to.
Do you think there's anything that can be done, I don't know, from your side or the client,
or, you know, like what can be done to try to accelerate this and try to get more open source
immutable contracts?
Or is it just a user demand type thing that, you know, it's hopefully comes
with time and maturity.
It'd be good if there were, like, groups in the ecosystem that could kind of make a list
of all the best practices and who's following them.
Neodyme tried this with, like, security.txt in the GitHub, and that had some
limited positive impact.
Yeah, it probably should be, like, a group ecosystem effort.
It's hard to maintain those systems and, like, keep them up to date.
So it needs, I think, like, input from a lot of folks.
So we talked about economics a little bit before.
But, you know, right now I feel like we've seen actually the most vibrant sort of governance discussion that at least I'm aware of in Solana, with this SIMD-0228, right, that Multicoin proposed:
a change to the inflation, which would make it dependent on how much is being staked.
Because I think right now, right, we've had this inflation that kind of gradually, slowly goes down
with time. What are your thoughts, first of all, on this specific proposal?
Yeah. So I think the perfect way to think about inflation is that, like, the network needs to get
some stakers to run boxes and pay for the boxes running.
And the only thing that the network can really sell is more tokens.
So that's what the inflation piece is.
So if you were to run an auction,
you can take the top best bids that are sufficient
for you to run a secure network.
And you don't know what those are.
You kind of guess what that number is.
But effectively, you can kind of run this algorithm.
You start with 50%.
You take the top best 50%
bids to stake, and only those,
and you offer everybody the same price, like in a Dutch auction.
And if that price is below zero,
because people can bid a negative interest rate,
they're willing to burn some of their tokens
for the right to make blocks.
You then increase the amount of stake that you want.
So you're targeting zero.
And if the bid is above zero,
then you decrease within some limits that at a high level people think are safe.
We've seen Ethereum run at, like, 20 to 30 percent stake without any problems at all, like, on
the security side.
So I think having the lower limit at 20 would be reasonable.
And I don't see any marginal benefit above like 80%.
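A toy sketch of the auction dynamic described above: the share of stake the network buys moves so the clearing rate trends toward zero, clamped to the 20% and 80% bounds mentioned. The step size and numbers are invented for illustration.

```rust
/// One adjustment step: if bids go negative (people pay for the right to
/// make blocks), accept more stake; if the network is paying, accept less.
fn next_stake_target(current_target: f64, clearing_rate: f64) -> f64 {
    let step = 0.05; // arbitrary illustrative step size
    let adjusted = if clearing_rate < 0.0 {
        current_target + step
    } else if clearing_rate > 0.0 {
        current_target - step
    } else {
        current_target
    };
    adjusted.clamp(0.20, 0.80) // the 20%-80% band from the discussion
}

fn main() {
    let mut target = 0.50; // start by accepting the top 50% of bids
    for rate in [0.03, 0.01, -0.02, 0.0] {
        target = next_stake_target(target, rate);
        println!("clearing rate {rate:+.2} -> stake target {:.0}%", target * 100.0);
    }
}
```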
So between 20 and 80 percent, right, this thing is bouncing around and people are bidding.
those bids effectively create a curve
that looks very similar to the curve proposed in 228.
Now, there's an error there because the curve in 228 is fixed.
There is this curve that, you know,
Max and a bunch of other smart people thought
would be the best approximation.
But that error between the market rate
in all these dynamic environments
and the proposed curve
is going to show up as the network overpaying for stake.
It's much, much smaller
than in the current setup.
So my view is that like running these auctions and the complexity of telling users,
oh, you need to pick a price and maybe you need to pick a negative price because there's
a lot of block rewards, it's just so complicated that there's no way to really scale
that up outside of like a few small professionals.
And it's a pain in the ass to run.
Imagine you as a validator that your stakers constantly have to bid like every epoch,
no matter how long it is, right?
Every three months even.
It could be the validators bidding, no?
Sure.
It could be the validators bidding,
but it's still like a pain to run and manage that.
I think my view is like,
don't let the perfect be the enemy of the good.
The proposed curve is a pretty good approximation
of that process,
and it's much, much simpler to implement.
There's no auction.
There's no, like...
auctions themselves have a whole bunch of complexity,
and running
them is gnarly. So in general, I'm very supportive of 228. What I want to emphasize is that, like,
Solana's never been money. Like, the point of SOL is to disincentivize spam. If we have another
97% downturn in the market, and it's the bottom of the bear, and this curve is not
working out, people will change it. That's okay. Right? I think people need to understand
that there isn't, like, this Bitcoin-esque...
we don't care about our grandfather.
The sins of our grandfather don't bother us.
It doesn't matter what Bitcoin did in their monetary policy.
Solana can do a whole bunch of stuff that I think Ethereum is still stuck on,
in this trying to be money and compete with Bitcoin,
that the Solana ecosystem is not.
So I think this curve is an improvement.
And I'm supportive,
and I'm encouraging people to vote for it.
And I think it'll reduce emissions. And emissions, in a perfect world, emissions don't matter,
because there are no taxes and there's no middlemen that can take a cut.
But in reality, you know, like, if you look at fees from custodians and centralized exchanges
on the rewards, it's just, like, a huge subsidy.
And that's not even counting the average global tax rate
on those earnings.
Maybe on this note,
would you agree with the statement that
emissions are a tax on everyone who's not staking?
Oh yeah.
Emissions themselves, it's just money moving around the black box.
But it's not like the group of users that are staking and not staking are
different people.
You can literally be both at the same time.
Right.
Well, yeah, but certainly
in Ethereum, there are some people that have
a strong feeling there should be reasons
for ETH to not be staked and
just be held as ETH,
and they see kind of staked
ETH potentially as a threat to the,
whatever, moneyness of
those things. Which, again, you don't
really care about, or, yeah.
So would you say
90, or even close to,
100% of Solana
staked, or in the form of
some staked Solana, something like
that, tokenized staked Solana, you wouldn't see as an issue?
If there's, like, UX issues around it, that's what I would care more about.
I think I don't like when things hit the limit.
Like, it seems like there's a bug somewhere, incentives-wise.
Like, you don't need 100% staked.
Sure.
Yeah, yeah.
You don't need it, but the question is, wouldn't it be rational? If a tokenized staked version is just so easy to access, then why not get the additional 1%, essentially, or whatever it is?
Well, then why isn't the rate negative? That would be my question. If it's 100%, then somebody's overpaying, right? So to me, it bugs my free-market sensibilities more than whether it's usable as money or not. It seems like whatever curve you picked has a bug in it that caused it to hit the limit point, right? So that's what you want to avoid. You want the parameters to be set somewhere where you're not at zero or 100%.
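To illustrate that point, here is a toy equilibrium check under the same assumed curve as the earlier sketch: stake flows in while the staking yield beats an assumed outside opportunity cost, and a sane curve settles strictly between 0% and 100%.

```python
import math

def staking_yield(s: float, base_rate: float = 0.045) -> float:
    """Yield per staked SOL under the toy curve above: emissions / stake."""
    return base_rate * (1.0 - math.sqrt(s)) / s

def equilibrium_stake(opportunity_cost: float) -> float:
    """Bisect for the staked fraction where yield matches the outside rate.

    staking_yield is decreasing in s on (0, 1], so bisection applies.
    """
    lo, hi = 1e-6, 1.0
    for _ in range(60):
        mid = (lo + hi) / 2.0
        if staking_yield(mid) > opportunity_cost:
            lo = mid   # staking still beats the alternative: stake flows in
        else:
            hi = mid
    return (lo + hi) / 2.0

# With an assumed 3% outside rate, stake settles well inside (0%, 100%):
print(f"equilibrium stake ~ {equilibrium_stake(0.03):.0%}")  # ~47%
```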
But in general, I don't think there's any threat to the moneyness of it. And I don't really care if SOL is used as money. If there's 10 billion per day of volume in SOL versus everything else on Solana, that's great, we hit PMF. There's real activity. Why would anyone be upset? Trade whatever, I don't care.
Cool.
I think you started by saying that some things came out as expected, and other things, specifically the use cases, did not. So maybe making an outlook for the next couple of years: do you think those finance use cases that you initially expected will come, or what are the blockers there? How do you see that? Or is it too difficult to guess?
These unexpected use cases surfaced a lot of problems that I think are real. I think we need multiple concurrent leaders, and the ability for applications to prioritize market maker cancels before takers; all that stuff needs to be implemented before TradFi really takes these systems seriously. The launches themselves, I think, are a BD problem. You've seen a lot of really extractive launches, launched by terrible people that should all be nuked from orbit. That stuff, I feel, is business development; you just need more robust platforms that have better enforcement and so on.
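As a rough sketch of the "cancels before takers" idea, here is one way an application-level sequencer could order a scheduling window. This illustrates the concept only; it is not Solana's actual scheduler, and the names are hypothetical.

```python
from enum import IntEnum

class TxKind(IntEnum):
    CANCEL = 0   # market-maker quote pulls: sequenced first
    POST   = 1   # passive quote updates
    TAKE   = 2   # aggressive orders: sequenced last

def sequence_window(txs):
    """Order one scheduling window's transactions by kind, then arrival.

    txs: list of (kind, arrival_index, payload) tuples.
    """
    return [payload for _, _, payload in sorted(txs, key=lambda t: (t[0], t[1]))]

# Even though the take arrived first, the cancel in the same window jumps
# ahead, letting the maker pull a stale quote before it gets picked off.
window = [(TxKind.TAKE, 0, "take: buy 100 @ mkt"),
          (TxKind.CANCEL, 1, "cancel: order #42")]
print(sequence_window(window))  # ['cancel: order #42', 'take: buy 100 @ mkt']
```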
But on the network layer itself,
there's a whole bunch of microstructure problems
with the way that transactions land
and how MEV works
that I think need to be fixed and addressed
for me to feel comfortable
with people's 401Ks running in these things.
Like that's like, I think with meme coins and NFTs,
it's like the stakes are medium.
Like the volumes are real
and the numbers are real, and there's real money at stake,
and people really need to take it seriously.
But we're not yet, like, dealing with, like, people's savings, right?
And I want the network to be so robust that I can tell people: don't trade on NASDAQ, trade on Solana.
Like, it's going to be better pricing for people.
It's more fair and more transparent.
That's the ultimate goal.
But there's a whole lot left. If you'd asked me four years ago, I would have said we're ready in a year. Now I'm like, shit, this is going to take five years to fix. Those problems are just so hard.
Then maybe just one more pushback on the thesis that it needs to be faster. I have an alternative thesis: there is pretty good literature showing that at some point faster systems do not lead to more efficiency, and that there are alternative market structures, namely batch auctions or micro-batch auctions. And to me, that seems like a very interesting approach.
Oh, I agree with you.
It's an approach that I have always been pushing for in blockchains, because if you have a block, you can see it as a batch, and you can try to do things like single-price clearing within that block. So that is, in my view, kind of a counter route to this "we just need to be faster to solve things."
I actually agree with you, but I think my batch auction is 100 milliseconds.
Fair.
Not 12 seconds.
Yeah.
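For concreteness, here is a minimal sketch of the uniform-price (single-price) batch clearing being described: all crossing orders collected in one batch trade at one price, so ordering within the batch confers no advantage. The price convention and data shapes are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Order:
    price: float   # limit price
    qty: float     # size

def clear_batch(bids, asks):
    """Uniform-price clearing for one batch: returns (price, volume) or None."""
    bids = sorted(bids, key=lambda o: -o.price)   # best bid first
    asks = sorted(asks, key=lambda o: o.price)    # best ask first
    bi = ai = 0
    bid_left = bids[0].qty if bids else 0.0
    ask_left = asks[0].qty if asks else 0.0
    volume, price = 0.0, None
    while bi < len(bids) and ai < len(asks) and bids[bi].price >= asks[ai].price:
        traded = min(bid_left, ask_left)
        volume += traded
        # One common convention: midpoint of the marginal (last crossing) pair.
        price = (bids[bi].price + asks[ai].price) / 2.0
        bid_left -= traded
        ask_left -= traded
        if bid_left == 0.0:
            bi += 1
            bid_left = bids[bi].qty if bi < len(bids) else 0.0
        if ask_left == 0.0:
            ai += 1
            ask_left = asks[ai].qty if ai < len(asks) else 0.0
    return (price, volume) if volume > 0 else None

# Example: orders collected in one 100 ms batch all settle at a single price.
bids = [Order(10.2, 5), Order(10.0, 5)]
asks = [Order(9.9, 4), Order(10.1, 4)]
print(clear_batch(bids, asks))  # (10.15, 5.0): 5 units at one price
```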
But I mean, I think my counterpoint would be: how many real-world events really happen within 12 seconds? At least as a theoretical argument.
Well, to beat NASDAQ, I think we need the latency to be basically the round trip around the world, and the inclusion latency to be shorter, so local block producers that run concurrently within the 120-millisecond auction.
I think at that point you can reasonably argue that it's as good as price discovery,
assuming there's no MEV extraction, if we solve that. Throw some salt over the shoulder, knock on wood.
If we solve those problems, I think it's very hard to argue that a faster system is more than marginally better.
I think at 12 seconds you have these information games that are still harder to solve, because you can literally go to NASDAQ, take the trade, and then arb against the chain. That's the opportunity I want to eliminate. Once you eliminate that for the most part, a trader that sees a newswire on Bloomberg will look at the chain price and the NASDAQ price as the same price. Then it's as good, you know. But I agree with you: you don't need to go below that latency for a global system,
because the event that happens in Singapore still has to travel to New York, right? So you still have these embedded latencies that are on the order of the round-trip time around the world.
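The back-of-the-envelope numbers behind that latency floor, assuming roughly 40,000 km of fiber at about two-thirds the vacuum speed of light:

```python
# Rough physical floor on global latency; both constants are approximations.
CIRCUMFERENCE_KM = 40_075
FIBER_KM_PER_MS  = 200        # ~200,000 km/s in fiber

one_way_ms    = (CIRCUMFERENCE_KM / 2) / FIBER_KM_PER_MS   # antipodal hop
round_trip_ms = 2 * one_way_ms

print(f"one-way ~{one_way_ms:.0f} ms, round trip ~{round_trip_ms:.0f} ms")
# one-way ~100 ms, round trip ~200 ms: the same order of magnitude as the
# ~120 ms batch window discussed above
```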
Yeah. And I mean, with regards to the whole finance-coming-on-chain thing, I do think there's a huge upside even with all of this meme coin DeFi stuff, in that, in the end, a lot of the infrastructure is getting built, right? The applications, the ability to compose different protocols, to collateralize one thing against another. There's just so much happening there and so much progress. And once you plug more traditional financial assets in there, you'll just have a very powerful system.
Yeah, on the plus side, if your risk engine survives meme coins and crypto, it can deal with real-world assets that are much more stable.
That's the hope at least.
Is it fair to say, for you, blockchain means financial applications,
or do you see other things?
It's a good question.
I think it kind of melds with the internet. If you have truly global payments with no friction, where everyone can pay everywhere, that's very different from the way the internet works now, because you don't have money on the internet. You have all these subscriptions and all this friction that created these silos, and ads as a business model instead of just payments.
And that model silos everything, right? YouTube wants its own silo, Facebook wants its own silo, and they're fighting each other for market share. You still have some of that, but I feel like just making it easy for everyone to pay everywhere could start unraveling it in a more open way.
And I like the web. I like the idea of minimizing the cost of publishing and letting the free market figure it out, right? I think that's a really cool, beautiful thing.
I think finance will unlock a lot of other use cases that I'm hoping are disruptive to the big tech monopolies. We'll see.
Cool.
You want to share anything else that's like on your mind at the moment?
Solana Mobile is coming out soon.
So stay tuned.
Oh, yeah.
The Seeker.
We built a phone.
Yeah, Seeker.
We'll try to disrupt the duopoly.
What's your hope for the phone, and what do you see as its role?
So the goal is to get enough crypto people into the same ecosystem that developers have a distribution channel. And there's like 150,000 pre-orders. That's a reasonably sized group; for a crypto app, even 50,000 daily active users is huge. So if we're able to capture the right kind of audience, and there's a high probability that we did, because people had to pay 500 bucks for a crypto phone, those are the kind of users that you want as a developer in crypto. They have money and they're already crypto native. If you can build apps for them that keep their attention and are super cool, you're saving money on the 20 to 30 percent rake that Apple and Google charge. You know, we talk about disrupting finance. Finance charges basis points; Apple and Google charge thousands of basis points. What is that? Like 3,000 basis points, right? It's an absurd rake. It's crazy that for 20 years now they've been able to get away with it.
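A quick basis-point sanity check on that comparison, with illustrative numbers:

```python
# Converting the rakes to basis points (1% = 100 bps). The on-chain fee
# below is an assumed example, not a quoted figure.
BPS = 10_000

app_store_rake = 0.30     # a 30% platform fee
onchain_fee    = 0.0030   # e.g. an assumed 30 bps trading fee

print(f"app store: {app_store_rake * BPS:.0f} bps")  # 3000 bps
print(f"on-chain:  {onchain_fee * BPS:.0f} bps")     # 30 bps
```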
So to me, that massive rake they take seems like an opportunity, because if devs can distribute to crypto users, and there are enough high-spending users in that distribution channel, they will actually look at their top 1% of users, the spendiest users on iOS or Android, and literally give them the phone for free, or offer them incentives through airdrops or whatever. That's the hope: that we can get that flywheel working.
If it works, then that's a huge opportunity.
How strong will be the connection between the phone and the chain?
I mean, I assume there will be by default a wallet on the phone.
Yeah, there's an embedded wallet and an enclave to run the seed phrase,
and there's a bunch of work to unify the experience so that people don't have to switch UIs.
It's more like Apple Pay.
That's the goal.
I'm actually not opposed to adding Ethereum support
or anything else in the future.
It's just that iteration times with hardware are so long. It seems like we've been at it for a while, but this is just the second release. We've got to keep the team focused and shipping.
I assume technically it's based on Android or?
Yeah, it's Android.
Vanilla Android, with enclave-based wallet signing features on top of it.
Yeah, no, I mean, good luck with this initiative.
Certainly think the duopoly deserves more competition, yeah.
If we lose because they compressed their fees, then that's the beauty of capitalism.
Then that worked, right?
Like, a competitor was able to change the equilibrium in the market, and that's an awesome
outcome.
Cool.
Well, thank you so much for coming on, Toly. It was really great to have you, and I'm super excited about the Solana roadmap, the pace at which the ecosystem is moving, and all of the things that are ahead.
Thanks for having me. This was super fun.
